Andrew Perlman is the dean of the Suffolk University Law School and the inaugural chair of the...
As Professor of the Practice and Co-Director of the Program on Law & Innovation and the Vanderbilt...
Published: September 10, 2024
Podcast: Talk Justice, An LSC Podcast
Category: Access to Justice, Ethics, Legal Technology
Dean Andrew Perlman of Suffolk Law School joins host Cat Moon for a discussion of generative artificial intelligence (AI) and legal ethics on Talk Justice. Perlman authored an article in February of this year, “The Legal Ethics of Generative AI,” which describes how lawyers can use generative AI while satisfying their ethical obligations.
Andrew Perlman:
What makes generative AI potentially different is that I believe that this will be the most important technology ever invented for the legal profession. It's going to have a more significant impact on the way in which lawyers go about their work than anything else.
Announcer:
Equal access to justice is a core American value. In each episode of Talk Justice, An LSC Podcast, we'll explore ways to expand access to justice and illustrate why it is important to the legal community, business, government, and the general public. Talk Justice is sponsored by the Leaders Council of the Legal Services Corporation.
Cat Moon:
Hello and welcome to Talk Justice. I'm Cat Moon, your host for this episode. As a legal educator, my work explores the intersection of technology and access to justice, as do many of my conversations on this podcast. Today's guest certainly can speak to this topic. Andrew Perlman is the Dean of Suffolk Law School in Boston, and he is a nationally recognized expert on the future of the legal profession as well as legal ethics. His current research focuses on the ethical use of technology like generative AI in legal practice, including the potential of generative AI to increase access to legal help. And these are the topics we'll explore today. Welcome to Talk Justice, Dean Perlman.
Andrew Perlman:
My pleasure, Cat. Thanks so much for the invitation.
Cat Moon:
It is very exciting to have you here. I just finished reading your forthcoming paper, "The Legal Ethics of Generative AI," and I just want to say it's an incredibly informative piece, and I wanted to say thank you for writing it. I've shared it with a few folks already, and they have said, please thank him for writing this, because I've observed that the entire profession is really focused on this right now and everybody is seeking guidance. So you are performing a public service by writing this. So thank you.
Andrew Perlman:
Well, thank you, Cat. I don't often get thanked for writing a law review article, so that's a real treat. You've made my day.
Cat Moon:
Well, you have written a very practical and timely law review article on the topic of the legal ethics of generative AI. We can start by talking about the current landscape for official guidance on this topic. The ABA recently released Formal Opinion 512 on this very topic, and I know at least a dozen states, maybe more, have released some kind of formal or informal guidance. And I'm curious if we can start by talking about the themes across these different pieces of guidance that are coming out and what commonalities we can draw. What is the practical guidance we can get from what we're seeing so far?
Andrew Perlman:
I think one theme that I have seen in almost all of the opinions I've read to date is that the existing rules of professional conduct are doing their job when it comes to generative AI. That is, I think they provide very useful guidance to lawyers as written. When there's a new technology that comes out, or some new development in the law related to lawyers, there's an instinct to think that we need some new rule, we need a new regulatory approach. But I actually think the rules have held up very well. I had the good fortune of being the chief reporter of the ABA's Ethics 20/20 Commission about 15 years ago, and at that time we proposed a number of updates to the rules to adapt to changing technology. And we were very deliberate in recommending changes that we thought could hold up well over time as technology evolved, and we did not propose changes that were intended to respond to very specific technical developments at that time.
And I think that the commission was wise to have taken that approach. And I think we've seen the reason why, because you could not have anticipated generative AI 15 years ago. As we look at the rules through that lens, and you look at issues like the duty of confidentiality, or the duty to supervise nonlawyer assistants in Rule 5.3, or Rule 1.1, the duty of competence, all of those concepts apply pretty well to generative AI. So I think the big takeaway here, and I think the ethics opinions reflect this point, is that we just need to look at the existing rules and understand how they play out in this context, and that provides pretty good guidance for lawyers when they're thinking about what they need to do with regard to generative AI.
Cat Moon:
So that point, that the rules we have exist to guide us through this and we don't need to be looking outside the rules, really is comforting in many ways. First of all, I think that the ABA changes that were made, as you referenced, about 15 years ago really have stood the test of time. On that note, I want to talk about an aspect of this point in time that is just really intriguing to me, because of Comment 8 to Rule 1.1, about the obligation to stay abreast of technology. That's been around for a while, and I think almost all jurisdictions have adopted some form of it, not quite all, but almost all. And my observation is that the general approach to that has been inconsistent across the practice, that folks have either chosen to think about technology or not.
But in this moment with generative AI, I think we're witnessing a movement to really much more intentionally and actively pay attention to the impact of technology in a way that really hasn't happened before. I've been practicing law for closer to three decades than two at this point. And when I came into the practice, bar associations and legal regulators were just starting to issue opinions on email. Email was still not common across the practice, and now it's ubiquitous, right? It's just part of how we work and how we operate as lawyers. Clearly there's not a one-to-one comparison between generative AI and email, but I think we're seeing this rapid shift, this rapid desire to figure it out and figure out its impact, that is different. Which makes me wonder, though, because as you point out, the rules really do give us the tools to address any technology that comes down the pike: should we maybe have been a little more intentional and active about technology in the practice before generative AI? Do you think this moment is revealing something about lawyers and our approach to technology that's cultural, something a little bit deeper?
Andrew Perlman:
Yeah, there’s a lot to unpack there. For starters, you and I are about the same vintage and started practicing around the same time. And the analogy to email is an interesting one, and I talk about that in the article that you referenced and you point out very astutely that when email first came on the scene, bar associations were quite worried about it and they were worried about it for confidentiality reasons. And there were a number of ethics opinions that warned lawyers against the use of email because of confidentiality concerns. Now you fast forward 25 years and now a number of bars require lawyers to have an email address. So we went from a bar association saying, be careful about the use of this technology to now you can’t even be licensed unless you have an email address. And I actually think generative ai, although a very different technology, is very likely to go through a very similar transformation that is we’re right now worrying about the confidentiality concerns associated with generative ai.
But I think if we fast forward 25 years from now, anyone who is not using these tools, you would have to wonder about their competence. What are you doing? How are you just creating every document by opening up a word processor, or whatever it will be at that time, to a blank screen and starting to type? You just wouldn't think to do it that way. That's where I think this is all going. So the analogy to email is an interesting one. Now, coming back to your question about whether this is a revelation about the profession's use of technology: it's a hard one to answer, because Comment 8 to Rule 1.1 says that lawyers have to stay abreast of changes in technology, including the benefits and risks from its use, and it really is so technology specific and practice area specific and role specific. It is hard to make a sweeping claim about whether lawyers across the board have been too careful about the adoption of technology.
I think that’s certainly true in some cases. I think there are some areas of practice and some types of tasks where lawyers have been too slow to adapt to technology, but in other circumstances that’s not necessarily the case. So what I think generative ai, what makes generative AI potentially different is that I believe that this will be the most important technology ever invented for the legal profession. It’s going to have the most significant impact on the way in which lawyers go about their work than anything else. And that includes email, electronic research, digital research, e-discovery. All of those are important, but I think generative AI will be the most important of them all. So I think the big takeaway here is that when it comes to the duty of competence, lawyers need to pay attention to generative AI as it evolves in the coming years.
Cat Moon:
So that is an incredibly powerful statement that you make about generative AI. I tend to agree with you, and I view this time in which we find ourselves as really just so incredibly amazing, that we have this new technology that is giving us these opportunities. I feel like it's kind of a once-in-a-lifetime opportunity, and there's a little bit of pressure behind that, perhaps. And I think this sense of pressure, and I see it moving from a pressure of fear to more of a pressure of curiosity, is what is motivating a lot of lawyers to really try to get their heads around how this technology works, how it is relevant to their work, how it is relevant to the needs of clients. And with all that, right, comes incredible opportunity, but also a sense of pressure to get this right.
Are we going to get this right as we're going along? And so the ABA has issued this formal opinion. There are other jurisdictions offering guidance, which is all very important. I'm curious, is there something you would like to see regulators doing with respect to helping the profession, guiding the profession? And recognizing, before you answer, I want to also state this kind of caveat. As you probably know, I'm big on experimentation, right? Innovation is not about perfection. Innovation is about improvement and doing better. So I don't think that we have this one chance to issue the perfect guidance, but with that said, we do have the opportunity to do better. And so I'm curious what you would recommend: what is one thing you would like to see happen from a regulatory standpoint with respect to practice?
Andrew Perlman:
Yeah, just to come back to your point about pressure, I agree with you. There is a sense of pressure to get this right, to understand these tools, to know how they will be used. Going back to your observation about when we started to practice law nearly 30 years ago, I've never felt the kind of, you could call it pressure, eagerness, excitement about a new technology during my entire time in the legal profession as I have felt in this moment. As soon as I learned about this tool and tried it out, my colleague Gabe Teninbaum told me about it a few days after ChatGPT was released and I started playing around with it, I immediately realized the revolutionary potential. And ever since then, I have been using the tools regularly. I use them for something almost every day now, and I am just increasingly convinced of how important this will be, and we should try to get it right.
And it applies across the board: in the legal profession, in legal education, in society. There are so many dimensions to this issue. Another aspect of the pressure point that I think is worth observing is that as much as there is pressure to want to use these tools and try them out in practice, I am a big fan of the quote by Bill Gates, who once said that we tend to overestimate the amount of change in two years and underestimate it in 10. And I suspect that if we come back two years from now and we are talking about the impact of generative AI, we will start to see its impact, we already are starting to see its impact, but it won't be revolutionary. I tend to think that the truly transformative changes will be beyond two years, closer to 10, but they're coming.
So there is this pressure, but it shouldn't be an overwhelming pressure or sense of crisis. This is just something that is pretty clear: it's going to have a major impact, but I don't think it's tomorrow. Now, coming back to your question about the regulatory aspects of this, if I were giving advice to a regulator, it is: don't be too quick. Coming back to the email example, I think there was a little bit too much caution around the use of email, and I don't want to see us be too quick to say, you can't use it, you shouldn't use it. I think that would be a mistake, both for purposes of our clients and for the health of an organization, and especially its future health. I think that would be unwise. So from a regulation perspective, I tend to think that it's important to understand what's going on, to try to understand the potential downsides, and to be humble about what we try to do in this space, because you could do more harm than good when regulating a relatively new technology.
Now, of course, there are risks from doing nothing. I don't want to understate that, because there are some serious risks associated with AI in general and generative AI in particular. But I do think we need to be very careful when it comes to regulation. And so far, coming back to the ethics opinions we were talking about earlier, I think they have gotten it exactly right. They've said the existing rules do provide a regulatory structure. They do provide guidance. And I guess that's the final takeaway on this regulation piece: when it comes to regulation, understand what rules, laws, and regulations already exist, because they might already do much of the work that you want them to do when it comes to generative AI. So don't try to regulate in a one-off way; try to understand what's already out there before going in a new direction. So that would be my advice.
Cat Moon:
Excellent, excellent. I must share a brief story to your point about perhaps there being an overreaction in response to email. In 1998, in the jurisdiction where I practiced, Tennessee, our regulatory body issued an opinion advising lawyers of the very narrow circumstances under which it would be appropriate to use email. Before the end of that same year, they issued an opinion saying the opposite. So they were quick to come up with an opinion, I sense, about a technology that those who drafted the opinion didn't really fully understand, and then realized that nationally everybody else was going the other way and reversed themselves. So even in this age of so much happening, when hindsight might not be the guidance necessarily needed, we can still learn from the past when it comes to these kinds of things. So I appreciate that point about maybe not moving too quickly.
Well, on the note of moving to provide guidance, I do want to briefly talk about guidance being issued from courts, because that is another source of really mandatory guidance, right? If a court is issuing an order stating what is and is not going to be permitted with respect to filings in a given court, clearly we need to be paying attention to that. And you address this as well in your forthcoming article. I'm curious if you could also share with us some guidance you would offer to courts as they grapple with how they should or should not be issuing such orders.
Andrew Perlman:
Well, this is a great illustration of the point I was making earlier: be careful about overregulating when a new technology emerges. And I think some courts have overregulated in this area. There was one court that issued an order essentially banning lawyers from using all AI, not even just generative AI, all AI, except for limited examples like Westlaw and Lexis, the online legal research tools. And I think that was a dramatic overreaction, both because word processing is an example of AI, I mean, if you use Microsoft Word, it's a form of AI, and it seemed to have been covered by the order. So it was way overbroad, but also, I believe, unnecessary. And this is true of the judicial orders that were more narrow and said that lawyers can't use generative AI unless they notify the court. I don't think that's necessary. And the reason it's not necessary is that we have existing rules that would prevent lawyers from filing a brief with fictitious citations in it. That's longstanding as a basis for discipline or a violation of Rule 11.
You don’t need a new standing order from a court to say that if you file a brief that has phony information in it, that that’s impermissible. So I don’t think we really need standing orders to address that aspect of generative AI’s use or misuse as the case may be. Now, it may be useful to remind lawyers that these tools make mistakes and that if they file a brief or any document with the court that has fictitious information, citations that they can and will be disciplined for that. That’s all you really need to do. But the idea of having lawyers disclose to the court every time they’ve used a generative AI tool is wholly unnecessary and also potentially problematic as these tools evolve and the lines between generative AI and just traditional professional software, those lines are blurring. If anybody has used Microsoft Word 365 or the more modern version of Outlook or Gmail, it auto complete sentences, right?
You see a few words at the end of a sentence suggested to you about how to complete the thought. That's generative AI. So if you're using that and you hit enter, and it fills in the three words at the end of a sentence, do you now have to notify the court that you have done that? A number of these standing orders imply that that's the case. So I tend to think, coming back to the original point, it's important to tread carefully and make sure that the existing rules don't already accomplish what you want them to accomplish. In this case, I think very much that they do, and that the judicial standing orders are for the most part unnecessary and in many cases unwise.
Cat Moon:
So yes, your observation as to the existing rules once again adequately addressing the potential pitfalls, the ones we've seen reported. I do have a follow-up question, but I also want to make the observation, to your point, that with so many of the tools we use, we don't even think about or realize that AI, some form of AI, is happening in the background. So again, in this moment, there's something about this that is causing us to pay particular attention. And I think this goes back to your initial pronouncement that there's something different about generative AI and it's going to have a tremendous impact. And so I'm wondering, why do you think there is this reaction to this technology in this moment of time, by courts, for example, and by bar associations, to come forth and offer additional guidance with respect to this particular technology?
Andrew Perlman:
Well, I think the response is understandable in a lot of ways. It's understandable because I think many people recognize what I said earlier, which is that this is a transformative technology, that it is going to have a substantial impact. So I think there is attention being paid to it for that reason. And the second reason is that these tools can be easily misused. There's no question about it. And some of the famous or infamous examples that were in the press, of lawyers putting fictitious citations into briefs using generative AI, captured the public's attention and the profession's attention, and people are responding to that. And so I do understand that. It's not unexpected that courts might want to say something and that bar associations might want to issue ethics opinions; the instinct is very understandable. And the reality is, especially on the ethics side, lawyers do want, or many of them at least want, to try to use these tools, or to know what the constraints are on their ability to use these tools.
And I think it’s actually a good idea for bar associations to be issuing ethics opinions describing how the existing rules apply to this technology. I think by doing so, it will help lawyers adopt these tools more readily than if there were no ethics opinions at all. And lawyers are just having to guess at whether it’s okay to use them. So I actually think that theBar associations are doing a great service by issuing these ethics opinions and taking the approach that they largely have, which is, here are the guardrails, here is what you can do, here’s what you should be careful about. So providing that guidance is I think exactly what’s needed in this moment in time, in contrast to the judicial orders, which I think are unnecessarily restrictive. So I think the judicial orders have been problematic, but the ethics opinions that I’ve seen have been pretty close to the mark.
Cat Moon:
I absolutely agree with you that lawyers everywhere are really hungry for guidance, so it is a service to provide guidance connecting this technology to the ethics rules. But that doesn't necessarily tell us lawyers what we should be doing practically. How can these tools actually help us do our work better, help us better serve clients? And especially in the context of legal aid, when you have environments that aren't necessarily as resource rich as, perhaps, a very well-funded corporate law firm, I've witnessed that there is a lot of curiosity and interest in using these tools to really scale, in an environment where that's very, very important. I know you know the data as well as anyone, but the latest LSC survey revealed that 92% of the legal problems experienced by low-income Americans get no meaningful help. I'm calling it the 92% justice failure rate. So I'm seeing folks across the country in legal aid offices just really curious about how we can be leveraging this technology to serve more people. And I always like to offer some very practical takeaways to listeners. So I'm curious if you could share with us one, two, three tips you have for boots-on-the-ground lawyers who want to get their minds and hands around this technology. What can and should they be doing, in light of our ethical obligations, of course?
Andrew Perlman:
Yeah. My first piece of advice to lawyers is to use these tools for something. I think there's this instinct, oh, well, we have to use it for legal-related purposes. Not necessarily. This is not like Westlaw or Lexis, where really the only use case is doing legal research. Generative AI can be used for almost anything. And I mentioned earlier, I use it almost daily for some use, in many cases just personal reasons: searching for something when I want a piece of information. And I might use it professionally, but not for law-related reasons: a press release, rewording some language in it, or something along those lines. There are so many use cases. So my one practical piece of advice is: use it for something. Use it in your daily life so that you get a feel for what it can do and, just as importantly, what it can't do.
And a second piece of advice: don't give up quickly. When I talk to people, I hear, I tried it, it gave me garbage, or it didn't give me a good answer, or it gave me the wrong answer, or I could have done better. All of those things may be true, but usually people don't know much about the concept of prompt engineering and how different the output is depending on how you ask the question. And that's something that lawyers should understand intuitively from learning how to ask questions, whether it's a deposition or due diligence. Depending on how you ask the question and how you tee it up, you're going to get a different response. The same is true for generative AI. And so whether you're using it for a law-related use case or a personal use case, really try to understand how different the output is depending on how you ask.
And there are many different tools out there, and you might get a different response depending on which tool you use as well. So all of that actually has nothing to do with the law necessarily. It's about how you become familiar with what these tools are capable of doing. And so, as a practical matter, I can't recommend more strongly that every lawyer out there, in every setting, just start using the tools. Even the free versions now are getting really, really good, and you get a limited number of uses of the best models right now, and I think they're just exceptional. Now, when it starts to get to law-related use cases, it gets tricky in a hurry, for a variety of reasons, depending on the setting, depending on what constraints your organization has and what rules are already in place about the use of the tools.
I think there are ways to try out law-related use cases without disclosing or using confidential information. You can anonymize information and still get a sense of what it can do. Here's a great example. Let's imagine you're a litigator in a legal aid office or elsewhere. You get a brief from an opponent. It's public information; it's filed with the court. Take it, upload it to your favorite generative AI tool, and ask it to analyze it: tell me the weaknesses, what are some arguments against this position? See what it's capable of doing. You haven't violated any confidences. It's not even your document, and it's public, so try it out. And so there are various ways in which you can have it critique information and documents so that you can really become familiar with the capabilities of these tools. So those are just a few. There are obviously more advanced use cases in creating your own documents and your own information.
And increasingly over time, I have found that these tools are getting quite capable, and I've built a number of my own little projects and custom GPTs for law-related matters, and it's really starting to get pretty impressive. And that's not even talking about the companies out there that are developing lawyer-specific and law-specific use cases. So I do think that we're in the early stages of a pretty massive shift in how we generate information and legal documents. And by the way, just coming back to the access to justice point, we're all talking right now about how lawyers and legal professionals might use these tools, let alone how the public, in a self-help sort of way, might use these tools. And I think there's enormous potential there. Of course, enormous peril as well, but enormous potential. And when I hear a lot of handwringing about the possibility that the public might use these tools to generate legal documents, we always have to ask the question: relative to what? You mentioned that 92% figure. Relative to that, how is this worse? We need to give help to people in need. And if these tools, when appropriately structured and made available to the public, can enable folks to help themselves, we absolutely need to consider that possibility.
Cat Moon:
Amen. I would say, from your lips to God or goddess's ears, that that is the future opportunity we can see. I agree with you completely. When we see the power of the technology to help us analyze something, the technology can also be used by a person without legal training to help them do the same. And I think that's an assessment we're going to have to continue to use, weighing the potential benefit against what is currently, in my view, extreme harm in the current system, with so few people getting help. So I, for one, am a big fan of us exploring and experimenting in that direction. And that brings me to a final point that you make in your forthcoming paper, and you've hinted at this earlier in the conversation: this idea that ultimately lawyers are going to have an ethical obligation to use this technology.
And so I think it would behoove all of us to take the advice that you just offered about how you can start now to get familiar with how the technology works. And I agree with your suggestion that you actually jump in and just start using a tool for some purpose that isn't even legally related, so that you're learning and getting a sense for how the technology works and how it can help you. I suggest to folks all the time: plan a trip. We all need to take more vacations, so use it to plan a trip. And this leads me to think about a challenge that exists particularly in the legal aid space. As I referenced before, it's an environment that is just not as resource rich as a law firm, pretty much across the board. So I wonder, if this obligation to leverage the technology becomes more baked into some ethical obligation or expectation, how might that impact practice environments that aren't as resource rich?
And I’m curious because I think we should be thinking about this now while we have more of a chance to shape how our relationship with this technology and how we’re going to use it. And I do believe, I talked to a lot of executive sectors of organizations across the country, and this is really top of mind when they’re thinking about their limited budgets and how they’re going to deploy them to get technology in the hands of their lawyers and staff so that they can better serve clients. So I’m curious if you have any thoughts on that.
Andrew Perlman:
What you’re describing is not a new problem. There’s always been resource constraints on the ability to invest in new technology, in especially legal aid offices and the like. What I think might be a little bit different about this technology is the potential for scaling and how many more people legal aid office can help if these technologies are appropriately developed and deployed. So it’s one thing to invest in a computer or a server or a network in a legal aid office. It’s an expense, but you don’t see the dividends in terms of how many more people can I actually help? But I believe that if you put these tools in the hands of lawyers, again, properly trained with the right tools, they are going to be able to help a lot more people with the same resources. So I do think that the ROI of investing in generative AI and generative AI training may present an opportunity for legal aid offices that’s unlike any other technological investments that may have been available in the past.
So I think that although these resource-constraint problems have persisted for a while, the opportunities here are potentially more profound than anything that we have seen in the past. So I'm cautiously optimistic that we will eventually reach the point where it becomes so obvious that this is a worthwhile investment that we'll see it happen across the board, but I think that we may be some time away from reaching that moment. In the meantime, I hope that those who work in those kinds of settings will, at least on their own, discover the potential, so that when the time comes to roll it out in a law-related setting, they're ready.
Cat Moon:
Absolutely. Yeah, we can be laying the groundwork now, so when that opportunity presents itself, that's luck, right? Being prepared for the opportunity when it presents itself. And I have been lucky to have had the opportunity to have this conversation with you, Dean Perlman. Thank you so very much. I've learned a lot, and I do really want to encourage folks to go find the paper, I think it's on SSRN now. "The Legal Ethics of Generative AI" is the title, and it is just a really helpful guide for folks. So I encourage people to check it out. And I would ask, is there any other direction, any specific direction, you might want to point folks in if they're curious about learning more about generative AI?
Andrew Perlman:
There are so many resources out there now on generative AI. I wrote another piece earlier on the implications of generative AI for legal services and society. That might be a good primer on some of what we've talked about. So nothing else that I can think of right now, other than my thanks to you, Cat, for the invitation. It was a joy to have this conversation with you.
Cat Moon:
Well, it has been an absolute joy for me as well. I am grateful for your time and insight. And thanks to all of you for listening to this episode. Talk Justice is brought to you by the Leaders Council of the Legal Services Corporation and Legal Talk Network. If you like what you heard today, please be sure to rate and review the show and subscribe in your favorite podcast app.
Announcer:
Guest speakers' views, thoughts, and opinions are solely their own and do not necessarily represent the Legal Services Corporation's views, thoughts, or opinions. The information and guidance discussed in this podcast are provided for informational purposes only and should not be construed as legal advice. You should not make decisions based on this podcast content without seeking legal or other professional advice.