Published: December 12, 2023
Podcast: Talk Justice, An LSC Podcast
Category: Access to Justice, Legal Technology
Experts discuss building tools, running experiments and sharing information to help legal professionals better understand uses for generative artificial intelligence (AI) in the legal field on this episode of Talk Justice. Host Cat Moon is joined by Sam Harden, program manager at Pro Bono Net, and Shellie Reid, manager of Legal Services National Technology Assistance Project.
[Music]
Female: Trying to help people understand what things are capable of so that they can use the tools, because this is a great opportunity coming up for the world to help the access to justice and access to legal services space, if we can harness these tools for good.
Male: Equal access to justice is a core American value. In each episode of Talk Justice, An LSC Podcast, we will explore ways to expand access to justice and illustrate why it is important to the legal community, business, government, and the general public. Talk Justice is sponsored by the Leaders Council of the Legal Services Corporation.
Cat Moon: Hello and welcome to Talk Justice. This is Cat Moon, your host for this episode. I’m a lawyer and legal educator, and it’s part of my mission as a co-host of this podcast to engage in conversations with people who are thinking in new and different ways about how we can leverage technology to close the enormous civil justice gap we currently face in the US. In this conversation, I’m joined by two dynamic and very creative legal technologists, Shellie Reid and Sam Harden. Shellie is the program manager for Legal Services National Technology Assistance Project, better known as LSNTAP, and Sam is a program manager for Pro Bono Net. Both Shellie and Sam are very busy these days working on ways to help lawyers who are interested in exploring the intersection of generative AI, lawyering and access to justice. They’re creating some very useful resources for anyone who’s interested in this intersection, and I invited them on to talk about these things. So let’s get to this conversation.
[Music]
Shellie and Sam, welcome to Talk Justice. I'm so excited to have you both here for this conversation today, and we are just going to jump right into it. I want to kick off with you, Shellie. You have taken on a project which I'm very excited about: trying to marshal and organize all the various pieces of guidance and court opinions surrounding the use of AI in law practice. I think that's a good loose description for it. We've got the ABA Task Force on Law and AI, the California Bar and the Florida Bar issuing guidance, and a lot of other things happening. I would love for you to talk with us about your idea for this database and the why and the what of what you're putting together.
Shellie Reid: Sure. Many of our resources, what I call crowdsourced resources, have come about because the community has let us know that there's a need for them. This one really started as a need of my own: I was doing a presentation on artificial intelligence, and I wanted to track down all of the cases that have been decided involving artificial intelligence, and then other conversations led the database to expand. So currently we're adding court cases, attorney guidance, judicial opinions, state bar ruminations, any guidance that we find that's related to AI, current cases, anything we can include that would be of use to attorneys in the field who need to know what's going on. That's our whole purpose in creating this database. And we use Airtable so that we can embed it in our website, and then it's sortable and filterable by whatever fields are practical for the person using it.
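For readers curious about the mechanics behind an embed like this, here is a minimal sketch of pulling records from an Airtable base over its REST API in Python. The base ID, table name, and field names below are hypothetical; the episode does not describe LSNTAP's actual schema.

```python
import os
import requests

# Hypothetical identifiers: the real base ID, table name, and field
# names for LSNTAP's database are not given in the episode.
BASE_ID = "appXXXXXXXXXXXXXX"
TABLE_NAME = "Guidance"
API_URL = f"https://api.airtable.com/v0/{BASE_ID}/{TABLE_NAME}"

def fetch_records(filter_formula=None):
    """Fetch all records, following Airtable's offset-based pagination."""
    headers = {"Authorization": f"Bearer {os.environ['AIRTABLE_TOKEN']}"}
    params = {"pageSize": 100}  # Airtable's maximum page size
    if filter_formula:
        params["filterByFormula"] = filter_formula
    records = []
    while True:
        resp = requests.get(API_URL, headers=headers, params=params)
        resp.raise_for_status()
        page = resp.json()
        records.extend(page["records"])
        if "offset" not in page:  # last page reached
            return records
        params["offset"] = page["offset"]

# Example: pull only court cases, assuming a single-select field "Type".
for rec in fetch_records(filter_formula="{Type} = 'Court Case'"):
    print(rec["fields"].get("Title"), "-", rec["fields"].get("URL"))
```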
Cat Moon: So that’s brilliant. And you mentioned crowdsourcing. So is there an opportunity for folks to contribute new sources if they come across something that’s not yet in the database?
Shellie Reid: Absolutely. We have a form linked right there on our website, and people can submit, I believe, up to three questions right now. They can just submit the URL, and they can add comments or questions right there, but none of the fields are required, and we always verify the information that is submitted. So you can submit as little or as much information as you would like.
Cat Moon: And I’m curious, have you gotten any responses from folks who’ve come across this database yet? Have you gotten any submissions?
Shellie Reid: We have. So we have a listserv, really a Google group, that goes out to the nonprofit legal aid community. And anytime that I post about our crowdsourced resources, I get a flurry of responses. So we certainly have received responses to this AI database. And as I said, we do go through and approve the items that are submitted so that we can verify the information before it goes up on our website.
(00:05:04)
So, yes. As of this morning, I think we're up to around 60 items listed in the database.
Cat Moon: That's amazing. I love and appreciate so much that this is a resource you created primarily for your targeted user group, but you've made it available to all, and I have already shared it far and wide. I think it's an incredible resource, and I will say it's actually one we were thinking about creating at the Vanderbilt AI Law Lab, but now we don't have to. So, thank you, Shellie. We can move on to another project. And clearly, from all the responses you received on the listserv, this is a top-of-mind topic for your particular audience.
Shellie Reid: Absolutely. Our audience is, in many respects, technology-minded; our mission is to provide training and resources for those who are assisting legal aid organizations with their technology, so we have that corner of the market, so to speak. We do see people who are looking for what's new and wanting to stay ahead of technology. That's really fun for me and for our office, because it lets us experiment and try new things, and we know that the community will be supportive of those things.
Cat Moon: I love those words, experiment and try new things. How awesome that you get to do that in your work and model it for other people. I have a question, Shellie, because you are crowdsourcing resources and helping folks in your work across a range of technologies, right? You're not just focused on AI. But I'm curious whether you note any particular sense of urgency, or any difference, in how people are reacting to, expressing curiosity about, or perhaps expressing fear around AI and generative AI specifically.
Shellie Reid: Absolutely. There are knee-jerk reactions to any new technology, it seems, in the legal field, and that carries over into the legal aid and nonprofit field as well. Many attorneys are afraid to try new things for fear of making mistakes. And then we also see state bar associations, and even the national bar associations, putting out guidance that reinforces those fears in some respects. So certainly, we try to provide resources that can help dispel those fears, that can educate and give attorneys what they need to make decisions, and that allow them to move forward with a technology, if that's their decision, or give them a basis for not using it.
Cat Moon: That is such incredibly critical work, I think Shellie, now more than ever. And so, kudos to you and your team. And the point you just made is a fantastic pivot to Sam. Welcome, Sam.
Sam Harden: Hi. Thank you.
Cat Moon: The spotlight is on you now. Included in this database that Shellie and her team have crowdsourced and put together are two pieces of state bar association guidance, one from the California Bar and one from the Florida Bar. And you recently published a piece on your Substack about these two pieces of guidance. I would love to dig into your thoughts a little bit on those two pieces and how they're different, because these two jurisdictions are kind of leading the charge with respect to official guidance, and we need some. Is this the guidance we need?
Sam Harden: Yeah. So when I saw that California and Florida both published this, I was really curious to see what angles they were coming from. Full disclaimer, I'm a licensed attorney in Florida, so I'm always trying to see what Florida is going to do regulation-wise. California's proposal, when I read through it, struck me as very smart because it comes at it from a practical guidance point of view. They take what the MIT task force has done; several months ago, MIT created a task force on issuing practical guidance for lawyers using generative AI.
(00:09:59)
And up until that point, there wasn't really any practical guidance. There was a lot of speculation online, a lot of YouTube videos where people said things like "the top five prompts for lawyers using ChatGPT," which is not super useful in actually evaluating a tool. So MIT came up with this practical guidance, and California kind of took that and gave it a little bit more weight in what they did, which I thought was really nice.
Another thing that California did that I liked is they reiterated that even though this is a new technology, they didn't think it required issuing new rules of professional responsibility. And that's a point I've seen people make with the Avianca case in New York and other cases where lawyers have gotten burned by ChatGPT hallucinations. That kind of thing is novel in the sense that it's a novel technology, but the behavior, the negligence, falls under the existing rules as we already have them. And I really like that California saw that and said that in their proposal.
The biggest thing to me that California did, and what I find very interesting, is they had a line in there encouraging the legislature in California to work with the bar and the board of trustees to see if they need a better definition of what is and is not the practice of law. And I'll come back to that in just a second. Florida, you know, also issued their recommendations, and what I found interesting about Florida's is they were not as practical as California's. They touched on the practice of law, but they didn't explicitly call for reevaluating it. They spent a long time talking about whether or not a lawyer can use a chatbot on a website, whether or not that is feasible, things like that.
But they mentioned, without really getting to the heart of the issue, that a chatbot could, in their estimation, be practicing law, and that a lawyer has to supervise that chatbot as if it were, you know, a paralegal or a legal assistant. And I found that a little concerning, honestly, because the implication is there that Florida kind of thinks that an AI could practice law, could give legal advice without a license. And I think it would be very bad if we had a scenario where, across all 50 states, you have different regulations for this kind of technology that the public is using, saying, well, in Massachusetts it's practicing law, in Florida it's practicing law, and in Texas and California it's not practicing law no matter what it says because it has a disclaimer. That kind of uneven regulatory structure would, frankly, be a big problem for these technology companies and for widespread adoption of this technology, because it's hard to change behavior state to state with this kind of technology, and it's hard to predict what states are going to do.
So I really like what California did in proactively saying, "Hey, we're going to need some sort of new definition of the practice of law." The practice of law is very amorphous. There are very clear examples of what is and is not the practice of law, but when you get down to the edge cases where a court actually looks at it, the courts have said over and over that it's a we-know-it-when-we-see-it kind of test: we can't give you a good bright-line definition, we're just going to have to know it when we see it. And generative AI is generative. You can't really predict with 100% accuracy what it's going to say every time you use it, because it's creating that text. So that's why I thought California's approach was really thoughtful.
Cat Moon: So serving much more as kind of a model for other jurisdictions to consider, perhaps in your opinion, than what Florida has issued so far. Would that be fair?
Sam Harden: Yeah.
Cat Moon: Look to California, not Florida. Okay. Well...
Sam Harden: Just to, sorry, just to be clear, I'm not saying that Florida's guidance is terrible or anything like that. I just think what California has done has been really smart.
(00:15:00)
Cat Moon: It's more practical, and that was my takeaway when I read your piece as well: it was just more practical and helpful guidance. Well, thank you for that overview and for pointing out the distinctions between the two approaches in those jurisdictions. You go right to the heart of an issue that has been relevant to our systems of justice, and especially their intersection with technology, for a while now. But I think generative AI specifically is really turning up the heat on the fact that we are dealing with systems and regulations designed for the second industrial revolution, which ended more than 100 years ago. We are now in the fourth industrial revolution, and perhaps even moving into the fifth, based on what's happening with technology. This tension, between technology that needs to work cohesively and consistently across multiple jurisdictions and jurisdictions that take different approaches like this, has been growing. And I think this is really putting heat and pressure on how this can actually work in a way that makes things better instead of worse.
Shellie, I'm curious what your thoughts are on how we go about providing helpful, practical guidance to attorneys, and specifically to those who are serving populations that are probably more likely to avail themselves of these tools, and maybe only these tools and a Google search, because they don't have access to legal representation, as we know most people don't, especially people with limited means. So we have these powerful tools evolving. What are your thoughts on some practical guidance and what that looks like, especially since that's your area of expertise with respect to the technology?
Shellie Reid: So with practical guidance, we know that people, self-represented litigants, for example, are going to use technology whether the courts are ready for it or not. So practical guidance for legal aid organizations means preparing and knowing about this: understanding what self-represented litigants who may have come to a legal aid organization and been turned down are going to do, and whether legal aid organizations can recommend the use of these products. We've been asked to create policies on the use of AI, and right now we're holding back. We're trying to gather a good group of organizations to develop guidance rather than a policy, because with the speed at which technology is changing in this AI area, policies would be outdated before we could even issue them. So guidance is what we're looking at, and it's the same guidance that we use for any other technology: understand how it works, understand the capabilities, understand the problems. We know that the training materials going into these systems are biased, so the results are going to have bias.

So for legal aid organizations wanting to get a handle on this and how they can use these things, I think we just need to be practical and help them understand the problems that are inherent in these systems: they have the hallucination problem, and there are problems with putting personally identifiable information into them. Really, it's just understanding the technology and using the resources that we and other organizations are putting out. I thought the California guidance was really helpful. I found it interesting that Florida just said you don't want an overly welcoming gen AI chatbot, so have it be somewhat friendly, not overly welcoming. It's just trying to help people understand what things are capable of so that they can use the tools, because this is a great opportunity coming up, I think, for the world to help the access to justice and access to legal services space, if we can harness these tools for good.
Cat Moon: Yeah, I think the opportunity to be proactive, in a way that frankly our profession has not been known for, is going to be really critical right now. And as someone who is teaching law students how to approach generative AI in their work in legal practice, I agree: absolutely help people understand how the technology works and what the risks associated with it are.
(00:20:05)
And to that point, we should not be shut down by fear or concern with respect to the risks, right? I think we've got to recalibrate our relationship with risk somewhat, obviously in an ethical way. Ethical issues are the thread that's pulled through all of these considerations, including what you're describing: clients, people, users, people who have legal problems availing themselves of these tools.
Sam, let's switch back to you, because you've been running some interesting experiments in this regard: using these tools to answer some questions and then getting people with legal knowledge to rate the responses, and digging into what kind of information these tools are providing to people who might be asking them for help. Can you talk a little bit about these experiments you've been running and what you are learning from this activity?
Sam Harden: So like Shellie said, there's a big need for understanding and education about what these tools actually do. I work full time for Pro Bono Net, so I work with technology systems that unrepresented people use, or that legal aid organizations use, to help people solve their legal needs. And so I'm always thinking about how people out there without a lawyer are actually looking for legal information and advice. What are they doing? What are they asking? Are the things they're encountering helpful? At Pro Bono Net, we see usage across a number of systems across many states, and legal aid organizations communicate with each other. But as far as people using technology without help, unless it's a system that Pro Bono Net makes or that a legal aid organization runs, we're not really seeing what people are doing. We're making educated guesses, but still guessing, about whether they're getting help, whether they're getting the information that they need.
So as background, I really think that with this new technology, considering how popular it has become and how quickly, people are going to be using it for solving legal problems. We already know that when people have a legal problem, almost by default their first step is to go on the Internet and Google something to get information, get help, and see what they need to do. Google is kind of the default triage system for people who need legal help. So with generative AI, and with Google and Microsoft's Bing both pivoting toward employing generative AI in their search capabilities, I think we need to really think about how people are going to be using this new kind of technology to get legal help and legal advice or legal information.
So thinking about that, and that was a very long intro, I know, I essentially ran two experiments. My first experiment was taking a public, anonymized data set, the questions that users of Reddit's legal advice subreddit posted asking for legal help or legal information, and feeding those into the popular chat models: Bard, which is Google's; Claude, which is made by Anthropic; two GPT models from OpenAI, GPT-3.5 and GPT-4; and then the Llama 2 model, which is Meta's conversational model, and recording the responses to those questions. I used those anonymized questions because they have a lot of good signaling information, for lack of a better term, that they're about a legal problem. It's somebody asking, what are my rights after I've been arrested? Things like that. And they are very wordy, because the person is explaining what's going on, so it's a lot for the model to work with, instead of "I'm being evicted, what do I do?" So I recorded the responses in Airtable and then took those responses and created a survey to ask people: is this response that the model gave legal advice, or is it legal information? And I didn't really give them much more information than that, other than that it was by an AI model.
(00:25:00)
It's the response to a question; just tell me, what do you think it is? And it was really interesting to see what people chose as legal advice versus legal information. The majority of responses people rated as legal information, but there was a sizable percentage that people said was legal advice. After that first experiment, I took the more complete information, the question itself and then the full answer, and created an online survey where I asked legal professionals to go in and rate each response on a number of factors: how helpful the response is on a scale of one to five; whether it provided actionable next steps for the person to take; whether it hallucinated, where people could answer yes, no, or "I don't know," because hallucination is a big deal in evaluating these kinds of models; whether it gave legal advice or legal information; and whether or not it provided a disclaimer.
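As a rough illustration of the collection step Sam describes, a minimal sketch in Python follows. Only the OpenAI calls use a real client library; Bard, Claude, and Llama 2 each have their own SDKs and are left as stubs here, and a CSV file stands in for the Airtable base he actually used.

```python
import csv
from openai import OpenAI  # assumes the openai package (v1+) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_openai(model: str, question: str) -> str:
    """Send one anonymized question to an OpenAI chat model."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

# Each provider needs its own client code; only OpenAI is wired up here.
MODELS = {
    "gpt-3.5-turbo": lambda q: ask_openai("gpt-3.5-turbo", q),
    "gpt-4": lambda q: ask_openai("gpt-4", q),
    # "bard": ..., "claude": ..., "llama-2": ...  (stubs, not implemented)
}

def record_responses(questions, out_path="responses.csv"):
    """Save every model's answer to every question for later rating."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["model", "question", "response"])
        for question in questions:
            for name, ask in MODELS.items():
                writer.writerow([name, question, ask(question)])
```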
And interestingly, the models all performed fairly similarly. There were some standouts. GPT-3.5, strangely, was rated higher than GPT-4 on this. I don't know if that was a function of when I ran the questions through those models; updates happen continuously, so this stuff is hard to fix at a point in time. People really gave indications, based on their survey responses, of what they thought a good answer was. People rated answers highly if they gave next steps and if they had an adequate disclaimer. And people found that responses with an adequate disclaimer were legal information, not legal advice, most of the time. If a response didn't have a disclaimer, no matter what the answer said, they generally said it was legal advice. And I found that very interesting.
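The pattern Sam noticed, where a disclaimer pushed raters toward "legal information" and its absence toward "legal advice," is the sort of thing a simple cross-tabulation of the survey export would surface. A sketch with pandas, with all file and column names assumed:

```python
import pandas as pd

# Hypothetical export of the rating survey; column names are assumed:
# model, helpfulness (1-5), next_steps, hallucinated, label, disclaimer.
ratings = pd.read_csv("survey_ratings.csv")

# Share of responses labeled "advice," split by disclaimer presence.
advice_rate = (
    ratings.assign(is_advice=ratings["label"].eq("advice"))
    .groupby("disclaimer")["is_advice"]
    .mean()
)
print(advice_rate)

# Mean helpfulness by model, e.g. to compare GPT-3.5 against GPT-4.
print(ratings.groupby("model")["helpfulness"].mean().sort_values())
```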
Cat Moon: There's so much that's very interesting about this, Sam. I'm really fascinated by the experiment you've run. One thing stuck with me when I was reading through your latest Substack post about the second piece of the experiment, and I'm going to quote you now: "I think these free or almost free commercial models are going to become something of a legal triage system for people who don't have a lawyer, if they haven't already. And I think if you care about access to justice, you should at least be thinking about what people are searching for and doing out there in the real world." Personally, I agree very strongly, vehemently, with this statement. And I'm curious, if we accept this as a fact, and I think it is, to your point, Sam, folks are already using Google and Internet search engines to get information because that's all they have access to. And these tools, for many reasons, impart a sense of even greater access; the chat interface is one of a whole lot of things about them that make them appealing to folks who are looking for help. Knowing this, what do we do about it? I think this is really an opportunity, to go back to some points Shellie was making, for folks in the legal aid community, for folks who are serving otherwise unserved or underserved people, to think proactively about how to approach these tools and how they're being deployed and used by people looking for help. What do you think some first steps are to really be proactive?
Shellie Reid: Well, I personally think that legal aid organizations need to harness this technology. We're working with an organization that is, at least, paying lip service to the hope of providing a benevolent arm of their business: a closed-system chatbot for legal aid self-help websites. So we're experimenting with that; let's see if what they can produce will provide that kind of benefit to the community at large. I think we have to get the legal aid field to embrace the technology and harness these things to allow for advances in how we help people.
(00:30:04)
We all know that people are going to Google for assistance, and many of them may not even realize that they have a legal problem until they see the results of the searches. On that note, I had a conversation with an attorney the other day, and I said that I had gone to Google to look for a case, and she looked at me kind of funny, like, is that okay? And I just laughed. Well, yes, of course it's okay. I'm obviously not going to use that as the final source of information, but doing a Google search to find a case is absolutely fine; it's a quick way to get a summary of a case if you're not looking for a legal brief or anything. Sure, why not? So it's funny that we have to get legal aid organizations to understand that people are doing this, to look at what people are finding when they do it, and to use what's learned in creating self-help websites so that they're usable for the people they're designed for. I don't know if I answered your question, but those are definitely some thoughts that run through my mind on an almost daily basis.
Cat Moon: No, you absolutely did. There's a whole lot to dig into there, but I'm going to shift over to Sam and ask you to chime in. Sam, you've run these experiments. What do you want to see happen next? What do we do?
Sam Harden: Yeah, so I think we need to think about this as a legal technology and legal aid community as a whole. The issue, like Shellie said, is that people are using Google as a triage system; we know that they're doing that. We also know that people are using ChatGPT as a quasi-legal assistant. There was a news story in New York about a person using ChatGPT to write a letter to their landlord when the landlord had done something bad in the building and had also raised the rent, and this person wanted to fight back. When ChatGPT created a letter for this person, she went back to ChatGPT and said, no, you need to insert more legalese into this letter, make it sound more like it's from a lawyer. And it did. She sent the letter to her landlord, and lo and behold, the landlord fixed the problems and didn't raise the rent. The landlord didn't come out and say, "well, I got this letter that sounded really legal, and that made me decide not to do these things," but we can kind of assume that.
So we know that people are using it to solve their legal problems, not just to get legal information, but to actually take affirmative steps. One of the things that I worry about, with ChatGPT and these really powerful models, is how many people are going to default to them. Say Google puts very powerful generative AI in search: how many people are going to stay in that generative AI search experience to get help, versus using it to get to another website and then talking to a separate chatbot? In user experience, when we look at technology problems, there's always a huge drop-off rate from the first to the third or fourth page, where 100 people are on the first page but only six or seven of them actually make it to the fourth page. And I think that's true of any technology product you can think of.
So I think we need to think about how people are going to be using these public tools. And yes, creating an ethically sourced chatbot with the right information is very important, because people will be using that too. But to me, the bigger target is to give guidance to companies like Microsoft and Google on what they should and should not be considering when people are asking their tools for legal help. In the experiment that I ran, one of the interesting things I found is that while a lot of the answers were very similar, some models, if the question included a mention of self-harm (not someone asking how to take affirmative steps, but describing something that happened in the past), would outright say, "I can't help you with that," and not say anything else, just because, and I'm assuming here, of the mention of self-harm. And so the question is how models need to be governed,
(00:35:02)
not just on a state-by-state, state bar level with lawyers looking at it, but in terms of what guidance we as a community can give to the technology companies, to say: this is what's helpful, this is what people need, this is the information that they can take action on, these are the resources that your model should be returning, and these are the kinds of things it should be saying in general. Give them that guidance so that the people using those products can actually get the assistance that they need.
Cat Moon: And you hit on a very important reason why lawyers need to understand how this technology works and how it can best be deployed to help people help themselves. Because, Shellie, to your point, the practical first step is understanding how the technology works and how it's relevant to our work as lawyers. But Sam, to your point, we really have this greater opportunity, and I would go so far as to say obligation, as leaders in law to do this very important work. I'm curious, Sam, do you think we just send an email to Microsoft and Anthropic and volunteer to help them do this important work? How do you think something like that takes shape?
Sam Harden: I don't have a good answer to that. I know that at Stanford, Margaret Hagan has done a lot of work in reaching out to the technology companies and trying to get them to sit at the same table. Because this stuff is being developed so fast, I think there's a lot of, not bad secrecy, but secrecy, with these companies not wanting to reveal how the secret sauce is made. They have competition, it's a big business, and there's a lot of money at stake. But I think reaching out as a community matters. LSNTAP is an important voice in the community and can really help organize people, the same with LSC, the same with the ABA. We should think not just about how we talk to lawyers about using this technology, but about what message we send to the technology companies: how is this affecting people in the real world? What are the potential downsides? What are the upsides? Because at the end of the day, my goal is to get information into people's hands and let them be able to use that information. People aren't hiring lawyers. As you said at the beginning, people need information, they need help. And in my book, if they're able to use an AI tool to get good help, that's a win.
Cat Moon: Absolutely. Yeah. My sense is that's actually our greater opportunity and obligation, even above and beyond figuring out how it makes us more competent in delivering our own work. All right, we definitely could keep going, but we are approaching our time limit for this conversation. I do want to give both of you the opportunity to share where the resources we've been talking about can be found, and how best to find you if folks have follow-up questions or would like to engage further on either of these things. Shellie, where's the best place to find you, and how can people access the AI guidance and opinions database that you all are crowdsourcing?
Shellie Reid: Certainly. I can be reached through my email at [email protected], and the database is at lsntap.org; on our resources tab, click on AI and you'll find it.
Cat Moon: Awesome. Thank you. Thank you. I have it bookmarked. And Sam, how can people find you and all of the great content you are creating?
Sam Harden: So I have a Substack where I tend to just dump a lot of my thoughts. It's samharden.substack.com, and it's called Team 'Do Something'. I named it that because I have always considered myself on the team that wants to do something, taking action and actually moving the ball forward. You can also email me: [email protected] is my official work email, and my personal email is [email protected]. I'm also on LinkedIn, but kind of infrequently.
Cat Moon: And Shellie, did you mention LinkedIn? Thank you, Sam, but I do want to promote Shellie on LinkedIn as well. You can find her there.
(00:40:04)
Shellie Reid: I am on LinkedIn.
Cat Moon: Yeah, you share great content there, I find. Well, I learned so much from both of you. Thank you both for joining me and sharing with Talk Justice listeners about the work you are doing. I feel like we could convene again in a few months and have a whole new set of things to talk about, so maybe we'll have the opportunity to do that. But for now, thank you both very much. I'm grateful to you for talking with me.
Shellie Reid: Thanks, Cat.
Sam Harden: Yeah, thank you so much.
Cat Moon: Many thanks to Shellie and Sam for joining in this conversation. I hope that folks will check out the AI guidance database that Shellie has created, and contribute something if you know of a resource she can add. I highly recommend you follow Sam's Substack, Team 'Do Something', for his insightful commentary on how the profession is reacting to generative AI, as well as insight into how we might approach using the technology to help more people help themselves with their legal problems. And many thanks to you for listening to this episode. Talk Justice is brought to you by Legal Services Corporation and Legal Talk Network.
[Music]
Outro: Podcast guest speakers’ views, thoughts and opinions are solely their own and do not necessarily represent the Legal Services Corporation’s views, thoughts or opinions. The information and guidance discussed in this podcast are provided for informational purposes only and should not be construed as legal advice. You should not make decisions based on this podcast content without seeking legal or other professional advice.
[Music]