Scheree Gilchrist is Chief Innovation Officer at Legal Aid of North Carolina (LANC). In this strategic role, Gilchrist...
Conor Malloy, a Chicago-Kent College of Law graduate, is the Project Director for the ‘Rentervention’ initiative at the...
Quinten Steenhuis is a senior housing attorney, systems administrator, and developer at Greater Boston Legal Services, where...
As Professor of the Practice and Co-Director of the Program on Law & Innovation and the Vanderbilt...
Published: February 13, 2024
Podcast: Talk Justice, An LSC Podcast
Category: Access to Justice, Legal Technology
Legal tech experts discuss generative AI and its applications for legal services on the latest episode of Talk Justice. The episode was recorded live at LSC’s recent Innovations in Technology Conference (ITC) in Charlotte, NC. AI was a major topic at the conference.
Cat Moon:
I believe folks are going to take these tools. They’re going to take the electricity of AI and they’re going to figure out how to meet their needs and solve their problems. And that’s going to happen whether anyone in this room does a damn thing or not to help them do this.
Speaker 2:
Equal access to justice is a core American value. In each episode of Talk Justice, an LSC Podcast, we’ll explore ways to expand access to justice and illustrate why it is important to the legal community, business, government, and the general public. Talk Justice is sponsored by the Leaders Council of the Legal Services Corporation.
Cat Moon:
Hey everybody. It looks like everybody is enjoying their food and the fantastic company of their colleagues. Welcome to lunch on Friday and this live recording of Talk Justice, an LSC Podcast. I am the moderator and host for this conversation. My name is Cat Moon, I teach at Vanderbilt University, and I’m the co-founder and co-director of the Vanderbilt AI Law Lab. I guess that title has me designated as the AI person, so we’re going to talk about AI. I have three amazing guests who I think a lot of you have probably already heard from and seen over the course of the past couple of days, and they’re going to join me for a conversation to bring some things together and give us some thoughts, ideas, and inspiration for going forward after everyone leaves ITC. So I’m going to ask the folks who are up here with me to briefly introduce themselves, and then we’ll jump into our conversation. We’ll start at the far end. Conor, would you like to go first and introduce yourself?
Conor Malloy:
Hi everybody, my name is Conor Malloy. I am not an expert on microphones. I am the Rentervention project director at the Law Center for Better Housing.
Scheree Gilchrist:
Good afternoon everyone. My name is Scheree Gilchrist and I am the Chief Innovation Officer at Legal Aid of North Carolina.
Quinten Steenhuis:
Hi everyone, I’m Quinten Steenhuis, and I’m the co-director of the Legal Innovation and Technology Lab at Suffolk Law School in Boston. I also own a small consulting company called Lemma Legal Consulting.
Cat Moon:
Alright, thank you all for introducing yourselves, and without further ado, we’re going to jump into our conversation. I want to remind everyone: if you have a question as we’re talking, feel free to pop it into the app. Our goal is to take as many questions from you all as possible before we end today. So we’ll get started. I want to give everyone an opportunity to hear about the work that all three of you are doing. I know that all three of you have talked about that in your own sessions, and I also know that you didn’t begin to share everything you could share about your work. So now’s your opportunity. I would love for you to share with us your current experience working with AI and generative AI in your projects, and some takeaways and learnings that you might be able to share with this audience. I think this is a room full of people who are all grappling with: what is this thing, how can we use it, and what does it look like to build and launch a project? So share with us, Conor.
Conor Malloy:
So, my show-and-tell moment. With Rentervention, especially as we started to wade into AI and helping people access legal knowledge and remedies through a conversational experience, one of the big things, at least as I dipped my toes in the water: for people that code, even if you code on a calculator, you put in one plus one, it’s going to equal two. When you’re dealing with an interface that is operating from your language, words start to matter, context matters. And sometimes being patient with it is my big takeaway, to allow for error. And sometimes you might need to brute force it. I’ve learned this especially for ChatGPT: if you offer to tip it inside the chat, you get better outcomes. So tip your chatbot.
Cat Moon:
I love that.
Scheree Gilchrist:
As I said, I’m with Legal Aid of North Carolina. We’re a statewide LSC grantee; we cover all 100 counties in North Carolina. We have about 400,000 people that contact our helpline every year, and we serve about 55,000 people annually. It’s against that backdrop that we decided we have to take a second look at how we’re doing things with a different set of eyes and determine how we can improve the delivery of services and actually scale our services so that we can serve more people. We realized that, as large as we are, with 450 staff members and 250 attorneys, we were never going to be able to meet the need with the one-to-one attorney model that we’ve been using for the last 20 years. So we needed to scale, and through the innovation lab we were looking at solutions to allow us to move from one-to-one to one-to-10, one-to-a-hundred, one-to-a-thousand.
So we created an innovation lab to be an incubator for ideas, so that we can create solutions to meet that need. Once we started exploring ideas, it was around the same time that AI was exploding and everybody was taking a look at AI, and we are looking at it in terms of how we can improve our intake process. We have a centralized intake system that covers the entire state, and there’s a lot of frustration around how people interact with legal aid and how quickly they can get service. So that is sort of low-hanging fruit for us: to use AI to do some of that pre-screening, to filter out people who don’t necessarily need to be in our process, because they may not be eligible or it’s not the right sort of help for them, and redirect them to the best sort of help. So, helping people at the outset of the need, and then once they get in, helping to speed that process along by automating the way people do the eligibility screenings, maybe even language translation and transcription services, so that it’s easier for somebody to move through our intake process.
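The kind of pre-screening and routing Scheree describes could be sketched, at its very simplest, as a routing function that runs before anyone waits in the full intake queue. Everything in this example is a hypothetical illustration: the income limits, the case types, and the routing labels are invented for the sketch and are not actual LSC or Legal Aid of North Carolina eligibility rules.

```python
# Illustrative intake pre-screen: route people who clearly fall outside
# program guidelines to other help *before* the full intake queue.
# All thresholds and categories below are made up for this sketch.

ILLUSTRATIVE_INCOME_LIMITS = {  # annual income limit by household size (hypothetical)
    1: 18825,
    2: 25550,
    3: 32275,
    4: 39000,
}

SUPPORTED_CASE_TYPES = {"housing", "family", "benefits", "consumer"}  # hypothetical

def pre_screen(household_size: int, annual_income: float, case_type: str) -> str:
    """Return a routing decision: 'intake', 'referral', or 'self_help'."""
    limit = ILLUSTRATIVE_INCOME_LIMITS.get(household_size)
    if limit is None:
        # Unusual household size: let a human screener decide.
        return "intake"
    if annual_income > limit:
        # Likely over-income: redirect to other services rather than the queue.
        return "referral"
    if case_type.lower() not in SUPPORTED_CASE_TYPES:
        # Not a practice area: send self-help resources instead.
        return "self_help"
    # Plausibly eligible: continue to full eligibility screening.
    return "intake"
```

In a real system an AI layer would sit in front of this, turning a caller’s free-text description into the structured inputs; the deterministic routing itself stays simple and auditable.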
Quinten Steenhuis:
One of the things that’s kind of exciting about using generative AI right now is that you can just open up ChatGPT and try something out. It’s really low stakes to experiment and see what works and what doesn’t. Then, once you’ve tried it a few times, you can move it into the OpenAI Playground, their more backend developer interface, and it will generate some of the code for you, and you can start to test it and build around it and get something that actually does something in the real world. You don’t have to tell your clients to go to ChatGPT to use it. So it really lowers the stakes and the cost of iterating and testing out new things, which is what’s so exciting about any new technology that’s also this accessible. But what we’ve been really trying to focus on in the lab is: what are the safe uses, what are the uses that are the most helpful?
And we’ve focused quite a lot on augmentation: improving, making it simpler and easier to do some of the things that we’ve already been working on. Document automation is a core strategy for the LIT Lab, and we do a lot of that in the Docassemble platform, which requires writing a lot of code. So what can we do with generative AI? It’s good at writing code, so we’re using it to build first drafts of the code that we can then edit, improve, and keep refining. That’s very safe, because nothing that’s actually touched by the client goes out into the world without a human looking it over, reviewing it, and making sure it’s right. We’re also trying to see how we can improve the experience of the end user with some light touches of generative AI. We know we can do user testing and improve the question flow and think about the best way to ask great questions.
We don’t want the generative AI making up those questions every time. So we’re going to take our great questions and we’re going to use them, but maybe there’s a part of that process where our questions get in the way of the user telling their story. That might be at the very beginning of the document automation process: you come into a guided interview and you want to say what’s on your mind, what it is that brought you there. I had this feedback recently on an eviction defense tool that I worked on, where someone said, I got all the way through your form, it took a while, there were a lot of questions, and I didn’t get to say what I thought was the most important part of my eviction case. I don’t actually think they had a legal case, which was maybe why they felt that, but how much better would they have felt if they could have told their story at the beginning?
You can do that with generative AI. You can say, tell us your whole story, and we’re going to see what in there we can use and plug into the right place through the rest of the guided interview flow. We’re not going to ask you that twice or three times; we’ve already gotten your answer, we can skip it, and you can have a more natural follow-up conversation too. These are all uses where the user gets to review and approve what they’re saying. Your client isn’t going to be going out and sending into the world what is created by generative AI; you’re just helping guide them through the process in a more natural way, and we’re really excited about the opportunities there.
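The "tell us your whole story" flow Quinten describes can be sketched in a few lines: an extraction step pre-fills interview fields from the user's free text, and only the unanswered questions get asked. In this sketch the LLM extraction is simulated with a stub; the field names and the example story are hypothetical, and a real system would prompt a model and validate its structured output before trusting it.

```python
# Sketch: pre-fill a guided interview from a free-text story so the
# user is never asked the same thing twice. The "LLM" here is a stub.

QUESTIONS = ["tenant_name", "landlord_name", "monthly_rent", "notice_date"]

def extract_fields_stub(story: str) -> dict:
    """Stand-in for an LLM extraction call. A real implementation would
    send the story to a model and validate the returned JSON."""
    fields = {}
    if "my landlord Smith" in story:
        fields["landlord_name"] = "Smith"
    if "$950" in story:
        fields["monthly_rent"] = "950"
    return fields

def remaining_questions(story: str) -> tuple[dict, list]:
    """Return (pre-filled answers, questions that still need asking)."""
    prefilled = extract_fields_stub(story)
    to_ask = [q for q in QUESTIONS if q not in prefilled]
    return prefilled, to_ask
```

The key design point, matching what Quinten says, is that the extracted answers only pre-fill the interview; the user still reviews and approves everything before any document is produced.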
Cat Moon:
So I love that particular example, Quinten, because I’m really spending a lot of time thinking and focusing on how we use AI to give us superhuman powers and how we keep human users at the center of this work. It’s not about robots; AI is not about robots. It’s about how we are leveraging technology to help humans get legal help and do our work better. And I mention doing our work better for this reason; I want to share a quick example from my own work. So I teach law students about generative AI. I teach a course called AI and Law Practice, and law students are no different than practicing lawyers in terms of their curiosity, mixed with fear, mixed with a little bit of apprehension about this technology and the impact it’s going to have on their future careers. You can, I would say, flip through a newspaper or magazine, but we all know you just scroll online now.
You don’t actually get a paper document, but you see headlines that the robots are finally coming to take the jobs of lawyers, and our students see these things as well. And I offer this very simple technique, if you are in an organization that is grappling with what the impact of this technology is going to be on our work and how we work: a simple two-step process to start embracing the technology in safe ways. One is to create a way for folks in your organization to start experimenting internally on work that is experimental. So very low stakes, very low risk of anything bad happening if the experiment doesn’t go well, but an opportunity for people just to dive in. Quinten was giving great examples of things that ChatGPT can do literally right off the shelf. And so how you figure out how the technology can be helpful to you is actually to jump into it, start doing some of those things, and figure out some very low-stakes ways to do that.
And then, and I’m actually pulling this from another session that I sat in, you’re really able to start thinking about the tools from the perspective of your own organization and what the use cases are for the work that your organization is doing, the kind of work that you personally spend time during your day doing, and how you can leverage the technology. I think getting a comfort level, literally on a one-to-one basis, is really important. It’s kind of the gateway into figuring out more expansive uses for the technology. So you all, from your own experience, have these great ideas, and I know there are more and we’re going to get to them, but I want to circle back to all of the sessions that have been offered over the past couple of days. Every single one I sat in touched on AI, even if that wasn’t the primary topic. And so I’m curious about some gems you might have come across from others in the sessions that you participated in or attended, with respect to AI and generative AI, and some takeaways you might want to share.
Quinten Steenhuis:
I came away with some really great inspiration, and I see maybe four key emerging themes that have come out of the conference so far around generative AI. First, Elise Roy, what a great start to the conference, thinking about inclusive design and bringing everyone along. I think generative AI can help us there. But then I was reminded, in a session on document automation where Levon Yi from the Navajo Nation trial court system was speaking, that we have to remember the very basics. He talked about people during the pandemic who were trying to get help across an 800-mile range: before they could call in for their basic hearings, they had to go to the top of the highest peak they could find just to get cell reception.
We have to remember the folks that are at the very beginning of that as well. Another key point that came up again and again is data sets. We want to know if the things that we’re building work. How do we know if they’re good at answering the legal questions of the people in our community? We can probably gather some of those, with both good questions and good answers, and figure out: can these tools work? Will they be able to do a good job? Maybe we can fine-tune and prompt them so they’re able to do a better job. That goes right into evaluation, which was another key theme that came up. We can try these things and maybe see with a couple of tests that they do something, but we need a more rigorous way to know they’re going to be reliable.
Repeated again and again, as Conor said: it’s a probability engine, not a calculator; it’s not going to give the exact same result every time. So we need to do some tests to get there. And the final point, which I think was really powerful and which I run into again and again myself: the issues that we’re dealing with are so emotionally charged, so complex, that we’re running into the moderation endpoint of tools like OpenAI’s. The facts and situations that our clients are dealing with are too upsetting for OpenAI to be able to respond to. I think Kim Cramer made this point really well, that AI has to be able to deliver bad news and has to be able to respond to bad news too. We don’t want people to be restricted from using these tools because their situations raise some flag that concerns OpenAI. So I think we’re going to have to figure out how to overcome that. Maybe there are ways with some of the other engines, but GPT-4 is far and away the best right now. We don’t want our folks to be stuck with lower quality solutions.
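One possible way to soften the moderation problem Quinten raises is to treat a flag on user input as a routing signal rather than a dead end: escalate crisis content to a person, and let upsetting but legitimate legal facts continue with care. This is a sketch of our own, not any vendor's recommendation; the category names only loosely mirror the shape of moderation-API results, and the policy itself is illustrative.

```python
# Sketch: route flagged *user input* instead of refusing to respond to it.
# moderation_categories maps a category name to whether it was flagged,
# roughly the shape a moderation API returns.

def route_flagged_input(moderation_categories: dict) -> str:
    """Return a routing decision for a flagged message."""
    crisis = {"self-harm", "self-harm/intent"}  # illustrative category names
    if any(moderation_categories.get(c, False) for c in crisis):
        # Crisis content needs a human, not an automated refusal.
        return "escalate_to_human"
    if any(moderation_categories.values()):
        # Upsetting facts are routine in legal aid work; keep helping,
        # but with extra guardrails on the generated response.
        return "continue_with_care"
    return "continue"
```

The design choice this illustrates: moderation results become inputs to your own triage logic, so a client describing abuse or eviction is never simply turned away by an automated filter.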
Scheree Gilchrist:
For me, it’s the user-centric focus, design thinking, and centering our clients at the heart of our work, because for all of us at legal aid programs, and I know particularly at Legal Aid of North Carolina, that’s sort of our north star. We start with our clients, we develop solutions for our clients. We think about what the needs are that our clients have, what their experience is like contacting us, the service that they receive. So I like that many of our sessions talked about centering our clients, using the design thinking framework as a model. The other thing that was exciting to me was the idea of creating local, closed LLMs that legal services programs can use, so that some of the fears around hallucinations, bias, and putting confidential information into these large language models can be eliminated. So I like that piece. And then I think it was a session this morning, I don’t remember the name of the speaker, but one said it will not be the same, but it will be beautiful. And that’s how I think about AI and the legal industry, how we provide services and what we do. It may not be the same, but it’s still going to be great.
Conor Malloy:
And I’m even just learning stuff right here. Quinten’s not going to say this himself, but his session with Margaret and Mr. Westerman was super great as an introductory approach, very hands-on, where you can see different elements of it in action. And I thought you did a great job with that deliverable, because as we look at it as practitioners, or even adjacent to the legal field, there’s this subtext of distrust that you didn’t necessarily touch on explicitly, but you talk about the beautiful output. It’s this eloquent incompetence: these things are just masters of language, and that’s all they know. So if you want a pleading to be able to do something, but then make it into haiku form, it’ll do it. Right? I don’t know, is iambic pentameter your flavor? It’ll do that too, but it doesn’t know what it’s doing underneath.
And so that’s why I like hearing a little bit of the old and the new, where user-centric isn’t just going out the window. We’re not throwing out the baby with the bathwater in development on a lot of these pieces of programming that we see. It was even hinted at in a session I did with Keith out of Duke Law, the RAILS initiative: starting to take a more measured approach to this, because you probably have a lot of people on the vanguard blowing up your LinkedIn feeds and whatever else about all this stuff. But just taking a step back and looking at what we’re doing, and hopefully there’s room for programming on that next year as we all settle down a little bit, regroup, and look at what we’ve done over the last year with these tools.
Cat Moon:
So I hear the human-centeredness popping up again and again, and I’m reminded that ultimately humans are going to remain at the center of what we decide to do with this technology. Just realizing that and thinking about it helps ground me and keeps me from getting too overwhelmed by the complexity and all the unknowns. This is actually an opportunity to embrace every single one of the human-centered design mindsets. Embracing ambiguity is one of those mindsets. We are just awash in a sea of ambiguity right now. I think I’m wired a little bit differently; that really gets me excited instead of scared. But it’s an opportunity to exercise curiosity, to experiment, all these ultimately really fun things. And I’m also reminded that we’re talking so much about AI as the technology, the hot thing, but Conor, you made a great point in your presentation that AI is just a piece of your tech stack.
And so just kind of a little flag I want to put up there: all this other stuff doesn’t disappear. Part of the puzzle we’re putting together here is how does this fit together with stuff we’re already doing? Because I imagine there are a lot of you who’ve invested, well, clearly you’ve invested a lot of time, energy, and possibly TIG grants into building things in your organizations to serve your clients and to do your work better. So possibly one question on your mind is: what about all the stuff we’ve already done? Is the slate going to be wiped clean now that there’s this new technology we’re supposed to use? And I think, Conor, again, you gave a great example: you’re layering it in, and Quinten as well, and Scheree, what you all are building is going to do the same thing. It is part of a stack.
They’re pieces of the puzzle that you put together. So generative AI has come along to add an additional layer, and that layer actually might give a more human touch to so many of the things that we’re doing. There is this opportunity to chat with the bot and feel like someone is listening to you, a place to tell your story. And if we can get the technology right, then users are going to be able to tell their story, and the output is actually going to be usable for them going forward in some kind of context: a court proceeding, a letter to a landlord, that kind of thing. So it actually can be a conduit for these very human things that people need to accomplish when they’re trying to resolve their legal problems. I find that very exciting personally. Alright, now it’s time for the crystal ball question. Normally this question is: what do you see happening in 10 years? I threw that out there when we were prepping for this, and Quinten was like, try tomorrow. So we’ll start with crystal ball gazing into 2024. Scheree, what do you see as the year goes along? Do you have any predictions for us with respect to generative AI?
Scheree Gilchrist:
Before I peer into the crystal ball, I want to follow up on something you just said about how our clients are already interacting with AI, and that’s absolutely correct. Part of the thought process we have in how we think about using AI is to remember that our clients are already using it. So we now have an obligation, I think, to create some education around the proper way to use it, so they don’t get themselves into further problems because they don’t understand prompt writing or how generative AI actually works, and they’re treating it as a source of information from which they can create a pro se pleading or something that may not be as useful or helpful as they think. So part of it is that we have an obligation as legal services providers to ensure that our client communities are educated, understand these tools, and can use them effectively.
Now, looking into the crystal ball: everything’s moving at such a fast pace, who knows what it’s going to look like in a few years. On the other side of that, you still have a lot of fear around AI, and you hear the horror stories of lawyers getting in trouble over fake cases. My worry is that might cause a chilling effect; with some of the regulatory work around AI and how it can be used, there might be a chilling effect on some of the opportunities. So that could be an immediate thing that we see. And because it’s moving so fast, and it’s also expensive, a lot of legal aid programs perhaps won’t be able to purchase large language models or take advantage of some of the opportunities, and that will create a wider gap. If we don’t figure out a way to adopt it, or create partnerships that will allow us to integrate it into our work without expending resources we don’t have, our clients and our practice areas might get left behind. So that’s some of what I think about. And then I think AI has this ability to force us to redefine what legal services actually means, and perhaps to move away from a lawyer-only solution and look at other solutions that might be available. So those are some of the things that I think about that might be coming down the pipe.
Cat Moon:
Thank you. Conor?

Conor Malloy:
Yeah, I have two sides to my crystal ball. One of ’em is the more pessimistic side, and then there’s the optimistic side, and I see overregulation potentially being a big, big problem. Who’s sitting at the table having these conversations? Where are they coming from on who they’re trying to serve or what interests they’re serving, and what do they have to lose, is always a big question. But aside from that, one of the big things I see opportunities for, and we’re probably going to see a lot of winners and losers in this space doing different things. I have a bias obviously towards the more conversational side of it, but you see where the market’s going on this: with ChatGPT, they have to keep shutting down the girlfriends and companions that people keep developing on these things, but it tells you something about what people are yearning for. It’s that companionship. People are willing to have this intimate conversation with somebody about things that they’re very vulnerable over, and if we can open up that space to allow for this legal coach, I think that’s where you might see things go upstream and be preventative, where it learns from you as it talks to you. There’s this app called Replika. I don’t use these things, right, Scheree?
Scheree Gilchrist:
Careful what you share, Conor.
Conor Malloy:
I’m going to delete all my history. But no, as you interact with it, it learns how you talk to it, and there’s reciprocity in how it talks to you, right? One of my failings as an attorney is expressing empathy with my clients a lot of the time. And that empathy might come and go: maybe I’m better at the beginning of the day and worse off at the end of the day; my blood sugar’s low, I might be short with you.
These things can express appropriate levels of empathy for certain types of situations that you’re experiencing. If I’m dealing with somebody in Rentervention who might be dealing with a security deposit issue, unless you’re really attached to it or you’re going to spend it on something important, there’s a neutral sort of approach to that. But for somebody experiencing an issue where maybe they got an order of protection against an abuser who got evicted, and now they need to assert their rights to their landlord and say, I can still stay here, you might want to take a different approach and have a different conversation with that person in a certain tone. And so I can see these things continuing to augment our approaches, just as Cat was saying a moment ago: it will find gaps in where we are as professionals and fill in those gaps, so long as we’re willing to let it.
Scheree Gilchrist:
I just want to follow up on what Conor just said about how empathetic AI models can be. So we have a project with LawDroid right now where we’re developing a virtual assistant for our website. We have a lot of dense information, and it’s hard for clients to access the information they need. It’s all there, but it’s just hard for them to sort through it to get to what they need. We were testing it internally, and one of the things we said is, well, let’s just see, if we ask it a random question, what will it say? And we typed in, “We don’t like you, you are not helpful.” And it responded with, “Oh, I’m so sorry that that was your experience, but Legal Aid of North Carolina does this,” and it went on to describe all the wonderful things about Legal Aid of North Carolina and how you can access help, and provided links and all these things. And I stood back and was like, oh my gosh, this is exactly what it should be saying and how it should respond to our clients.
Cat Moon:
Yeah, it’s not as defensive...
Scheree Gilchrist:
...as we...
Quinten Steenhuis:
...humans might be. We could all probably use that filter.
Scheree Gilchrist:
Which, Quin?
Quinten Steenhuis:
Well, there’s still this really big gap, right, between playing around with ChatGPT and production. I did say how easy it was to move to production, but it can still be hard. It takes an engineering team. Sam Harden during his session was talking about how there are already 1,600 custom GPTs for law if you go to the OpenAI GPT Store, but you need to be a paying subscriber to access those, $20 a month. And I was talking with Drew Jackson earlier about this project they’re about to launch at People’s Law School, where they’re building this great chatbot powered by GPT-4, which sounds like it’ll be really helpful for accessing all the great knowledge that’s locked up inside their self-help website. That was a big engineering project. I think we need something in between, something that takes us away from the risks of the open, well-moderated, maybe overly moderated ChatGPT to easy, productized things we can pick up and use to build some of these solutions.
We need to feed the chatbot with our own content that’s safe and that’s moderated, without it being a big engineering lift. I think we’re going to get there. If I’m looking ahead a year, we’re going to see some great off-the-shelf products that do that. There’s one I had a chance to learn a little bit more about at ICAIL, the International Conference on AI and Law: Josef Q, which is trying to do that. Some of the things happening at the GPT Store feel like they could be there too; they’re behind that paywall right now, but maybe they won’t be forever. It needs to be easier for us to build safe, vetted solutions with these tools, something a little bit closer to the experience of using ChatGPT, and I think we’ll get there soon.
Cat Moon:
So I want to offer something that plays on that, Quinten. My prediction, short, medium, and long-term, is sort of the same, in that I believe folks are going to take these tools, they’re going to take the electricity of AI as it’s funneled through the various tools that are on the market and coming on the market, and they’re going to figure out how to meet their needs and solve their problems. And that’s going to happen with respect to legal problems whether anyone in this room does a damn thing or not to help them do it. People are just going to go to the tools, to the point that 1,600 legal chatbots have already been built. I guarantee you that not every single one has been constructed by a lawyer. There are very likely people who had a problem, who used ChatGPT to solve it, maybe well, maybe not well, and then they created a GPT, and now they’re offering that to other people.
So this is already happening, and it’s going to continue to happen. I think our opportunity is to figure out what role we’re going to play in that process and how we’re going to proceed intentionally and with haste, because we also don’t control the timeline in any way, shape, or form, absent, like, everything just being shut down, and that’s not going to happen. So I think it’s an incredible opportunity, and the opportunity is just going to continue to grow. It’s up to us to figure out how and when we jump on the train that’s moving very quickly, where we want to sit on that train, and do we want to conduct the train? Because right now we are not in control, even if we like to think that we are. So how do we do this? How do we jump on the train?
So I have just another simple suggestion, because empathy was mentioned, and this is something I do with my students. My earlier suggestion was, if you haven’t already played around with these tools, then I urge you to spend 20 bucks for one month and jump into GPT-4, because it is a completely different experience than 3.5. It’s really night and day, and to understand the power and how the tool works, you really need to have the premium experience; you can commit to one month. And then pretend that you are a person who you serve, and sit down with the problem of one of your clients and try to use the tool to solve that problem. Get completely into the mind of the person who you are problem-solving for and try to have a beginner’s mind. Try to forget all that you know about the law and procedures and how things work, and force yourself into the context.
I’m a person who doesn’t know how any of these things work. I have this problem, and this tool is in front of me; how would I use it to solve my problem? And two things will happen. One, I think you’re going to learn some things about your client’s problem and their experience in the world. And two, you’re going to learn about the tool. So it kind of has a dual learning opportunity there for you. Just a simple step to take. But I think that the tools can be leveraged to help make the process more empathetic in the way that we train, design, and use them. I think the tools can also be used by us to help us regain some of our empathy as well. It’s hard, as a human lawyer, as a human person in these processes, to maintain that empathy and also maintain your sanity and not want to run away screaming. And I think that these tools can give us some ways to do better ourselves, and that’s an important opportunity. So before I jump to questions from the audience, would any of you like to add anything?
Scheree Gilchrist:
I think I'll just reiterate partnerships, to Cat's point: creating those partnerships that will allow us to leverage the technology and gain access to the technology is important. So for us, as we've developed our innovation lab and grown it, we're always looking for partners and collaborators to work with us on developing the tools and developing them responsibly. So I would encourage anybody who wants to work with Legal Aid of North Carolina to certainly give me a call or email and join us in this work together.
Cat Moon:
Absolutely. We are collaborating with you; yes we are. My lab is. Yay. Alright, so I'm going to just go in order of number of votes, the popularity of the questions. So the first question is: can you speak on privacy and data security issues in using ChatGPT with self-represented litigants?
Quinten Steenhuis:
I can talk about that. So
Cat Moon:
Policies,
Quinten Steenhuis:
The policies of OpenAI have been updated. I think it's pretty clear now that they're not using your data for training purposes if you have a subscription and you turn off history. So those are things you can do with ChatGPT. If you've built a system that uses the backend API, that's built in: they're not using your data for training purposes. It's just as safe as using OneDrive or any of the other cloud services that we use every day. It feels a little bit different, typing intimate things into a thing that talks back to you, but it's not storing your data in a way that's unsafe, if you're using one of those two things. The free ChatGPT is different. It is going to use your data for training; that's why it's free. And that's something that maybe is a reason, again, why we need to build these purpose-built tools that are safer.
Scheree Gilchrist:
And I would say, as you're thinking about whether you should get an AI policy: I don't necessarily think we need a specific AI policy within the legal space. We have so many regulations that prevent us from releasing information, like client confidentiality; we already have data privacy policies, we have cybersecurity policies. I think it's just a matter of tweaking your policies, or redefining them, to include AI and how it should be used. But I don't think we need to jump to creating AI policies that restrict the opportunities that come with using the tools.
Cat Moon:
I will just pile on to what Scheree just said. Most states, I think 42, I might be overstating, but most states now have an affirmative duty of technology competence for lawyers as an ethical obligation. And so just to build on your approach, Scheree, I agree that AI is a technology that applies with respect to that ethical obligation, and it doesn't deserve to be singled out any more than any other technology, but it applies. And so we have an ethical obligation, and we just need to take that into account and not create a separate policy around it, from my perspective. But that assumes that folks understand the technology and are proceeding ethically to figure out how to use it. Ultimately, I think it's Comment 8 to Rule 1.1, competence, technology competence. I believe we all have an ethical obligation to do that. I don't think it's optional. I think a lot of people treat it as optional, but I don't think it is. Alright, I'm going to ask a question. I'm going to shift gears a little bit. I think this is a very important area to address. How do we balance UPL concerns against the enormous benefits of leveraging AI to enhance access to legal aid? How should we even think about that? Anybody care to bite into that meaty one?
Quinten Steenhuis:
I think we need UPL reform. I think this is answering, right? The cat is, whatever it is, the cow's out of the barn, probably.
Cat Moon:
Cat’s out of the bag. Cat’s out
Quinten Steenhuis:
Of the bag. There you go, mixing 'em up. We know our clients are using these. So are we going to overregulate what we can do in a safe way with vetted knowledge bases, when the alternative is our clients using the free and less safe version? I hope not. I don't think that current unauthorized practice of law policy is coherent. What's the difference between legal advice and legal information? It's this thing we get to make whatever we want to protect that day, and it varies from state to state. That's the real solution. But I do also think we have to keep in mind that a lot of the things we can do don't involve giving advice, things that really smooth and improve the client experience. We can focus on those tasks first, and we can try to lobby to change things. And maybe there are some things where we can just say, yeah, that might be kind of at the line; let's talk about it first with the court. There's probably some safe harbor that comes for our free access to justice solutions that maybe isn't there for commercial ones that are open to the public without a lawyer looking over the results. There's lots of room for us to play safely in this space, I think.
Conor Malloy:
It really comes down to exactly what Quinten was getting into. I saw the UPL definition out of Alaska, which was pretty inspiring, where there might be some space for tools like this. But that's precisely it, though: what is it that makes us special as a profession, and what do we do well? And I think it's going to require just a lot of inward looking at who we are and what we do, and how this can be our companion in doing that kind of work. When it stands on its own, that's always going to be problematic, especially if you can't control the outputs. I saw another question on here about hallucinating versus lying and things like that. But if we can have more buy-in institutionally to be able to look at this and the responsible use of it, and educate our colleagues and educate the public on the use of it, you've got a winning combination there.
Cat Moon:
Okay, well, as much as I'd love to chime in on that particular topic, I'm going to move on. I think we're going to need to wrap up in just a second, but I want to end with a final query. This gives us each a chance to throw something actionable out there. Alright, so what are some low-risk examples for legal aids to jump on the train? We'll just start with Conor and move down: low-risk ways to jump on the train.
Conor Malloy:
This is something I do every day when I'm writing an email to somebody: running my response through it and asking, was I passive-aggressive in this email? Was I snarky? And just getting some legitimate feedback to reform the message that I'm about to send out. That is the number one reason I use it, day in, day out.
Scheree Gilchrist:
And I would say it's using it to simplify how we write. You can take information that you've written, without disclosing your client information, and you can ask ChatGPT or any of these models to just simplify it, to write it at a fourth-grade level for you. And so I think that's a low-hanging, low-risk way to use it.
Quinten Steenhuis:
Yeah, I was going to say the exact same thing as Scheree, but I'll add on another role that it can play, which is really similar, right? Take the stuff that we've written and make it better. I use it as an editor all the time. I give it my first draft and I say: what's missing? How could I improve it? What questions would people ask? And it is usually really helpful for that. It doesn't get bored of me asking it five times in a row like my wife does, so it's great.
Scheree Gilchrist:
And translation, I forgot to add translation. That’s another way you can use it.
Cat Moon:
Yeah, I mean, those are all just great examples. Pulling it up to a 10,000-foot overview: just look at your own daily work and figure out places where you can plug a tool like this in to do exactly the kind of things that they are describing. Doing that effectively, matching your workflow to where the tool can really help, first requires you to get a little bit of experience with the tool. So I say you get on the train by giving everyone access to GPT-4 and telling them to go be curious and experiment and figure things out with this tool that's going to make your work better. And you're going to have a much better idea of how this technology can serve you and your organization.
Quinten Steenhuis:
I don't think we mentioned that Microsoft Copilot is also here, so use it to make your slides for your next presentation. If you have the subscription, it can do this. You can give it a document and say, make a presentation from this document. I haven't tried it yet, but I've seen some pretty cool demos.
Cat Moon:
I question the aesthetics of the output from that based on what I’ve seen so far. But it is a good first draft. Yes. Yes. Yeah. Alright, well I think they’re going to kick us off the stage any moment. Thank you all so much. You have been a fantastic audience. Thank you for all the questions and have a great afternoon guys. Thank you.
Speaker 2:
Guest speakers' views, thoughts, and opinions are solely their own and do not necessarily represent the Legal Services Corporation's views, thoughts, or opinions. The information and guidance discussed in this podcast are provided for informational purposes only and should not be construed as legal advice. You should not make decisions based on this podcast content without seeking legal or other professional advice.