Sateesh Nori is a lawyer, law professor, and author. For twenty years, he represented tenants across New...
Lisa Colpoys is an experienced public service leader and innovator who has spent her career working to...
As Professor of the Practice and Co-Director of the Program on Law & Innovation and the Vanderbilt...
Published: January 28, 2025
Podcast: Talk Justice, An LSC Podcast
Category: Access to Justice, Legal Technology
Recorded live from the 25th annual Innovations in Technology Conference (ITC), legal tech experts discuss their artificial intelligence (AI) projects and AI’s future potential for expanding legal services on Talk Justice. The live podcast recording at ITC took place in Phoenix, Arizona on January 14. At the conference, more than 700 professionals from legal, tech and other backgrounds gathered to attend panels and join conversations about access to justice, legal aid, courts and the evolving role of technology. AI was a major theme at the conference, where more than 300 people also attended a pre-conference AI Summit.
Sateesh Nori:
And for the first time in my career, I’m seeing a path forward out of the kind of mess of intake and triage that we’re all stuck in, a way forward that allows us to do the type of work that only we can do.
Announcer:
Equal access to justice is a core American value. In each episode of Talk Justice, An LSC Podcast, we’ll explore ways to expand access to justice and illustrate why it is important to the legal community, business, government, and the general public. Talk Justice is sponsored by the Leaders Council of the Legal Services Corporation.
Cat Moon:
We’re here to explore a theme that I think is running throughout this year’s conference, and it’s not even a theme, it’s an energy, right? I feel like this is an energy that is just flowing through the legal profession and this community right now. And today we have two people who are supercharged when it comes to this energy and the work they are doing. I can’t wait for you to hear about the things that they are doing, because I think this is going to inspire and inform. So let’s get to that, why don’t we? A few months ago I talked to the researchers who conducted a study in legal aid organizations. They put access to the paid versions of ChatGPT in the hands of about a hundred people and just let them go, to see what they could do, what they could build, and the results were pretty extraordinary. They found that people, especially those who had some help, some concierge service, learning how to use the tools and deploy them effectively in their work, did some really cool stuff. So this experiment, I think, inspired others, especially in the two jurisdictions where these folks live and work, to take that a lot further. And I feel like that is exactly what Lisa and Sateesh are doing. So why don’t we tell folks what you’re doing? Okay, Lisa, tell us what you are doing.
Lisa Colpoys:
I’ll start with what led me to this moment. I feel like an OG at this conference. I’ve been coming to this conference since the second one, and I’ve been working in innovative public service organizations for almost 30 years, since the mid-nineties when I worked at CARPLS Legal Aid in Chicago, where at that time giving legal advice over the phone was an innovation. At the turn of the century, I helped launch Illinois Legal Aid Online and was lucky to lead that organization for almost 16 years. I’ve also worked for the Illinois courts standing up Illinois Court Help, which is a customer support platform for court users to get answers to their questions about going to court. And recently I worked with the Stanford Law School Filing Fairness Project to help courts modernize their forms and filing systems. So all of that has led me here, where I am serving as a senior consultant to the Lawyers Trust Fund of Illinois, which is the IOLTA program in Illinois, on their AI initiative.
I’ve been doing this since September. The goals of the AI initiative are to support legal aid organizations and legal aid lawyers in Illinois as they begin to learn about and explore AI technology and tools, and as they begin to determine how they’re going to deploy them in their organizations and in their legal practices. Since September, we’ve done a lot of discovery work. The Lawyers Trust Fund has 55 grantees, and we had discussions with the leaders of most of those organizations. We also ran a survey of the AI perspectives and use of legal aid lawyers and staff members in Illinois. We really gathered a lot of information, both to determine what this program will look like and to set a baseline, collecting some data so that we know where we’re starting.
Some things we did initially to set up support channels include developing weekly virtual AI office hours. I had to cancel them today because I’m here, but every Tuesday at 12 o’clock, any legal aid staff member in Illinois can log on to virtual AI office hours. Sometimes they will just be drop-ins where people can ask questions and share what they’re doing. Sometimes there’ll be show-and-tells where people describe the types of AI projects they’re working on, and sometimes we have experts or vendors show up and give presentations about their work. We also have a statewide AI discussion group, with about 150 people signed up already, where they can share their work and learn about AI. And then recently we launched a five-webinar series, which is going on for the next five weeks. The topics that we’re covering start with an introduction to AI.
What is AI? How do you effectively communicate with, or prompt, AI tools? What is an AI use policy, why do you need one, and how do you get one? And the ethics of AI in a legal aid practice. We hope to continue to provide important programming for legal aid staff members in Illinois as they begin their AI journeys. Some things we’d like to do in the future include not only training, but a resource guide and some peer learning, where people can work together across organizations to develop solutions. The last thing I’ll mention is that I’m working with pro bono counsel to develop guidance for organizations and organizational leaders as they begin to engage with AI vendors. What I’ve heard is that they’re not sure what’s lurking in the shadows with AI, even if they do have experience negotiating tech contracts. So we’re working with pro bono counsel to create guidance and a checklist, so that when legal aid leaders begin to purchase AI technology, they’ll have a roadmap they can use. That’s what we’re doing and have done over the last few months.

Cat Moon:
You have been very busy.

Lisa Colpoys:
Yeah, a little bit.

Cat Moon:
I want to highlight something before I pass the mic to Sateesh, and that is the fact that you started by doing discovery, right? You did some human-centered research first to figure out what the needs were in the community you’re serving. And I feel like that is so important. We’re going to talk a little bit later about steps anyone in the room can take to be doing some of these things, and I identify that as really a foundational step, right? Understanding what people need and want and where they are, as a foundational matter. So I just wanted to highlight that. Well done. Alright, Sateesh, the mic is yours. Tell us what’s going on.
Sateesh Nori:
Well, it’s a real privilege and honor to be here. I feel like I have to pinch myself. I can’t believe I’m sitting here talking to all of you. So many of my mentors and heroes, I’ve been a legal services lawyer in New York City for 22 years. I’ve been on the front lines like many of you. It was my first job out of law school to be a staff attorney in the tenant unit at Brooklyn Legal Aid. I became a supervisor. I was a project director for 10 years in Queens. And during that time, I probably represented a thousand clients. And I can honestly say that I’ve never lost a case.
Well, I’ve come in second place many times. But while I’m proud of that fact, what really disheartens me, and what probably brought me to this very seat today, is that I probably turned away more than 10,000 people during that time. And that instilled in me a deep frustration with the way that we do our work. I thought about what we do, and how much of what we do is about the how of our work. How do we do intake? How do we translate documents and speak to clients in their native languages? How do we file papers in complex court filing systems? How do we put know-your-rights materials out there in a form that’s accessible to people? And I realized that how questions are taking up more than half of our time and energy. They’re burning us out. People are feeling demoralized and quitting because they’re not able to help people. Who here signed up for this work to do paperwork, to turn people away, and to refer people out?
Nobody. We all came here, we’re all in this room, because we want to help people. And so that led me on this journey of exploration. How can we do better? How can we help more people? How can we go back to the root of why we all exist, which is to honor the principle of equal access to justice? Well, technology. I’m really proud of the work that people in this room have done over the years to introduce technology, and to think of technology as a way to level the playing field between us and our adversaries, between us and corporate law, the government, the DAs, the prosecutors, and immigration. And now, for the last two years, we have this new tool. It’s called AI. I’m not a software engineer, but I trained myself and I played with the tools, and I realized that there are so many things we can incorporate into our work that help us answer those how questions and let us focus on the what and the why. Why are people being evicted?
Why are people being denied benefits? Why are people facing immigration consequences? Those are the interesting questions. Those are the questions that require our intelligence and creativity and skill. And for the first time in my career, I’m seeing a path forward out of the kind of mess of intake and triage that we’re all stuck in, a way forward that allows us to do that type of work that only we can do. No one else can do that work. And so I’m really happy to be here to talk a little bit more about that kind of work.
Cat Moon:
So you are really busy as well. Lisa’s very busy. You are very busy. You guys are taking no prisoners here. You’re getting stuff done. So tell us what you’re rolling out, Sateesh.
Sateesh Nori:
Sure. So I have this perspective on what’s happening in our space. First, as a legal aid lawyer, I still have clients. I represent tenants in New York City who are facing eviction, even right now. Second, I work at a law school and I teach a tenant defense clinic, so I’ve really been able to see what law schools think about this AI revolution. And here’s a spoiler: they don’t think much of it. They’re not doing that much, other than people like Cat Moon and others, people at Suffolk Law School and Stanford. But very few people in law schools, in the legal academic world, are really thinking ahead about what’s coming. And finally, I recently was privileged enough to join a tech company called Just Tech. My job title there is senior legal innovation strategist. That’s a title that ChatGPT came up with, I’ll be honest. And so what am I doing there?
Well, I have the opportunity to really inject these types of tools and my experience, share that with all of you, and introduce techniques and methods and products that can help us turbocharge the work that we do. So for example, last week on January 7th, after 13 months of work, we launched a tool called Roxanne. Now, everyone over 40 is thinking of the song by The Police right now, and everyone under 40 has no idea who Sting is. They probably think he’s a bumblebee or something. So Roxanne AI is a tenant-facing chatbot. It is a consumer-facing tool, which is a big deal. We have a lot of issues to navigate around unauthorized practice of law and accuracy and trust. When we release something that goes directly to the consumer, we have no idea what’s going to happen. It’s only been a week. And what the tool does is allow tenants to get information about how to address housing conditions in their apartments.
That seems so simple. Why is it so hard in New York City where we spend hundreds of millions of dollars on legal services every year for tenants to get help with repair issues? Well, it is really hard. One reason is the lawyers are focused on evictions. They don’t have time to address things like lack of heat or mold or lead paint. All of the nonprofits are kind of feeding cases to the lawyers and struggling with rent assistance and other similar social services issues. So for tenants to get help on basic questions like this means they have to get in line and they have to get in line behind people who have emergency cases, more emergent cases, so they’re not getting assistance. And what’s happening is kids are getting asthma, they’re getting lead poisoning. Elderly people are dying in the middle of winter because they don’t have heat.
And this is in New York City; this should never happen. So a tool like Roxanne allows people to search within context. They can type, in natural language, “I don’t have heat. What do I do?” and they can get what I call actionable legal information. The UPL question is a whole other can of worms that you’re all going to hear about or have already heard about. But the reason that’s so important is that it’s a new category between legal information and legal advice. Remember, UPL restricts non-lawyers from giving legal advice. It’s a huge barrier to the type of technology that AI is and offers. So actionable legal information, I believe, threads that needle between legal advice and legal information, and gives people things that they can do to avoid bad consequences down the road. It’s an upstream solution to the legal problems that people may have.
And if you think about it more broadly, these types of upstream solutions are critically important to us. Why are there so many people lined up outside legal services offices seeking help? Because there are no upstream solutions. Many of the problems they’re facing could have been resolved if they had information about how to address those problems earlier. But they don’t, and there isn’t any information, or they can’t access it, because we lawyers love to put text on screens. Nobody else likes that, right? People don’t access information that way. So Roxanne is, I think, one of the first steps in putting this type of information in chat form. But chat isn’t the end of the game. One of the things about AI is that you can use it in any modality. You can make videos, you can make audio content, you can send text messages to people. You can reach them wherever they are, in whatever language they speak, 24 hours a day, seven days a week, 365 days a year, and the cost is relatively low. This isn’t something that requires you to overhaul your tech stack or hire a bunch of new people or spend a lot of money. These are tools that are really affordable. In fact, if you have a phone with a screen, you probably already have these tools in your hand right now. So I’m really excited about what is yet to come. And I think it’s on all of us in this room to try these things out and to push this space forward, with the goal of helping more people. That’s why we’re here.
Cat Moon:
Absolutely. And Sateesh, what you were describing with Roxanne, love the name by the way, being over 40 myself, is proactive steps. In the law, in the legal profession, we often occupy a reactive space. People line up outside the door to get help reacting to a situation that has happened. And you’ve identified this very clear use case: how can we use and develop this technology to empower people to be proactive, right? To avoid the worst of the legal problems in the first place. I think that is an amazing opportunity, but it’s also an incredible challenge, because traditionally, for over a hundred years, our entire profession has really been built on this mode of reactivity. And so it requires a mind shift. I identify that as both an opportunity and an incredible challenge. And on the note of challenge, I just want to identify, metaphorically speaking, the elephants in the room. Based on the journeys you’re on, bringing this technology into the hands of the people in this room, what challenges are you experiencing or do you identify? I think it’s important for us to talk about those so we can be proactive in dealing with them. And Lisa, I’m going to pass the mic back to you for that one.
Lisa Colpoys:
Well, like with any new innovation, change is hard. This room is full of enthusiasts, but I can guarantee you that back in their offices, wherever they work, there are rooms full of people who are not nearly as excited about using AI. So we need to manage this journey, and we need to deploy change management strategies in order to move their AI journeys forward. Additionally, what we found in the survey of legal aid lawyers and staff members that we ran in October was that their number one concern about using AI is accuracy. Seventy percent of all respondents said that they were concerned about the accuracy of AI tools. I think that is a big challenge to adoption. There are things we can do, of course, to help with that, but those are some of the challenges that I see.
Sateesh Nori:
To piggyback on that: lawyers and people in the legal world have a hard time using things that they don’t understand. I think we pride ourselves on being very rational and logical, and on being able to understand the things that we’re using, the laws and regulations that we’re dealing with. And unfortunately, AI is very hard to understand. It’s a black box. Some people fear that it’s going to take over the world, and we can’t really explain how it works. And here’s my take on that: it doesn’t matter. It doesn’t matter if we understand it. I don’t understand how the internet works, I’ll be honest, and I’m a legal technologist. When I send an email to Cat Moon, I can’t really explain how it gets to her. For that matter, I don’t understand how indoor plumbing works. Does anyone in the room? We just don’t think about it.
We accept that it works. It’s worked for our entire lives. It is reliable and it gets the job done. That’s how we should think about AI. We shouldn’t be hung up on: well, it’s a black box, I don’t know what’s inside, it’s scary, if I open it, what’s going to come out? Is it going to hurt me? Is it going to hurt my client? Let the professionals, and I’m not one of those people, let the engineers and the software developers worry about that problem. What we have to do is trust it, test it, verify it, make sure that it’s accurate for the purpose that we’re using it for. But let’s not get hung up on how it works, what’s inside it, the nuts and bolts of it. And that’s a real challenge, not just for lawyers and professionals in the legal services world, but for people in law schools and people in corporate America.
One thing that I think is going to happen is that corporate law is going to be decimated by AI. Things like the billable hour are history. If you think about the way an associate in a big law firm works, they spend 20 hours, 30 hours, a hundred hours on a legal question. Guess how long it takes AI to answer a similar legal question? Forty-five seconds. How do you bill 45 seconds, right? The entire house of cards of corporate law is about to come tumbling down. But here’s the good news: for the same reasons that corporate law is in trouble, legal services has a huge opportunity. The same tools that make the billable hour irrelevant free us from the type of work that takes up all our time, and we can do so much more. So the real challenge is to trust the stuff. Try it out.
Take small steps in your personal life to use AI. One easy tip that I often give to my older relatives is: open up your refrigerator, download ChatGPT, take a picture of the food in your refrigerator, and ask it for recipes based on what’s there. It’s an amazing tool that way, and it’ll give you a list of a dozen recipes based on what’s in your pantry. That’s just an example of the little steps you can take to build trust. Then think broadly and creatively about the ways you can incorporate a tool like that into your work. And the sole purpose of that would be to do more, to help more people, to be able to focus on the hard questions that you’re unable to deal with because you’re working on the easy questions, the repetitive questions, over and over again.
Lisa Colpoys:
I think that we need to find the people who are the early adopters. We need to create a coalition of the willing: people who are willing to try things out, get started, pilot solutions, just get going. These early adopters will start doing their work, and others will eventually become interested. They’ll see the successes and they’ll piggyback on that, almost like a snowball rolling down a hill. I think that is one way we address this change management issue.
Cat Moon:
Absolutely. And I believe at the AI Summit on Sunday, many of these early adopters shared the work they’re doing, for information and for inspiration. An undercurrent I sense in everything that you both are describing is fear, right? Fear of how this technology is going to impact the way I’ve always done my work, on a very personal level. And I think that is incredibly fair. We are professionals. We’ve invested heavily in our education. We’ve invested heavily in learning our craft and being really, really good at what we do. So there’s fear of what this technology might mean for how we do our work, right? Fear, for many, about the economic model upon which the systems are built. And I think there’s an element of that too with funded organizations. If funding is based on the number of hours people spend doing certain kinds of work, and this technology undercuts that, then it’s not just rethinking the billable hour, but rethinking how we acknowledge and fund value and service that is created for clients, right?
There is fear, for some, probably of their very job. I think that’s a valid concern. What you both are doing, and what I strive to do with my students, is replace that fear with curiosity. So it’s the experimentation. The early adopters you’re describing, Lisa, are the people who embrace curiosity over fear. And that’s so incredibly empowering, right? Because here’s the thing: this is a freight train coming. It’s not optional. We have no control over how this technology continues to be unleashed in the world. What we can control is how we decide to orient ourselves toward it as professionals, and that’s an amazing superpower. We have to replace the fear with curiosity, and that helps us flip the script. How do we flip the script?
Sateesh Nori:
There’s a great saying: don’t be an enemy of the future. There’s nothing you can do about that. I think a lot of us, whether we like it or not, can be enemies of the future. And there’s a big reason for that. We’re all lawyers. We are risk averse. That’s why we came into these jobs. We want to keep bad things from happening to people, our clients especially. And so when we see something coming, our first instinct is to think, what are the risks? When I talk about AI or AI tools, I often hear: well, it’s not perfect. It messed up. It hallucinated. That one lawyer in New York made up a case. That could happen to me; that could happen to someone I’m supervising. It’s too risky. Instead, don’t compare these tools to the perfect. Compare them to what’s available right now to people.
Think of the people you’ve turned away and ask yourself: is there some reason or some way I could have helped that person, if I had more time, if I worked a little bit differently, if I adopted some of these tools? So really, the baseline for me is zero. A tool that’s 90% good looks pretty great compared to zero; it doesn’t look that great compared to 100. And that’s what we tend to do as lawyers, right? We look at the perfect and we make it the enemy of the good. And so we have to overcome that. That’s part of change management. That’s part of taking baby steps. Ivy Ashton yesterday said we have to hit singles with this stuff. We shouldn’t be looking to hit home runs or upend the way that we do things immediately. We have to think in terms of baby steps and progress that is measurable and verifiable, so we can bring everyone along with us. But the first step is to change our mindsets about how we think about the risks here, to be open, and to consider that the baseline is zero, right? People are getting no help. Ninety-two percent of Americans’ civil legal problems are going unaddressed. Ninety-two percent. So we can navel-gaze and ponder and wonder about how AI works and compare it to the perfect, but people are not getting help. And that’s really what should drive us forward.
Cat Moon:
So I will add very quickly, and Lisa, I would love to hear from you, one point, actually a two-part point. One: we’re in this moment right now where this is new. It’s uncharted territory. We have this opportunity to shape and decide. We have choices right now, and the longer we take to navel-gaze and to ponder, the fewer choices we’re going to have. And so time really is of the essence, in a way that again makes us uncomfortable. We like to ponder, we like to navel-gaze, we like to go very carefully, but time is of the essence. And we have the opportunity to make these amazing choices because it is all so new. We may never have this opportunity again to really shape how this technology can help us achieve the mission of helping people with their legal problems who otherwise would get no help, helping with the 92% of problems.
One final point, something just to be aware of: lawyers don’t control who gets to use this technology. Any person can log into ChatGPT and seek legal help and legal advice. And I don’t believe there’s going to be a time in which our profession shuts that down just because ChatGPT is not licensed as a lawyer. And so I think it’s incumbent upon us to be really thoughtful right now about how we can make choices that ensure that people who log into these tools are going to have a good experience, right? Not perfect, perfect is the enemy of the good, but no help versus really good help. That’s what we’re faced with. Alright, opportunities. Lisa, what opportunities do you see out there?
Lisa Colpoys:
We’re really focusing on how AI can help legal aid organizations and legal aid lawyers do their work better, be more efficient, and help more people. But I really think this is an opportunity for us to improve the mental health of legal aid lawyers. Instead of being bogged down by the burdensome drudgery of filling out forms, the things you didn’t go to law school to do, we can free up time by allowing AI to help with those things, and do the things that only humans can do: the counseling, the collaboration, the trial work, all of the work that the AI isn’t going to be good at. We can let the AI do the things that it is good at so we have the time to do those things. And perhaps, when we do that, we’ll gain time back: more time with our families, less time at the office, or your remote office, whatever that is. I really think that this is an opportunity for us to improve the mental health of our community here.
Cat Moon:
An amazing opportunity to consider all users in the system. We absolutely must be client-focused, people-focused. But while we are making the choices about how we’re going to use this technology and design solutions, let’s do it in ways that help us right here, too. I absolutely, fundamentally agree that should be a priority, Lisa. Alright, let’s do a little future casting, shall we? So the year is 2030, which by the way is only five years away, which sounds frigging crazy, right? Just mind-boggling. Lisa earlier referred to the turn of the century, and my mind went, oh, the 1900s. No, no, no. Yes, yes, it’s terrifying, but it’s 2030. What vision do the two of you have for how we are helping people, how we are serving, how we are doing our work? What is your vision for 2030? What does it look like? Paint us a picture, Sateesh.
Sateesh Nori:
Yeah. First I just want to piggyback on the mental health issue. I think we have a mental health crisis in legal services right now. I’ve never seen more unhappy people working in legal services. I’ve never seen people quitting without having other jobs lined up before. So that’s an excellent point that Lisa made. Why is this happening? We need to think about it. And one of the reasons is that the work is so tedious and burdensome and demoralizing, because we’re turning people away and the work we’re doing for the people we are representing is so tedious. But to answer your question, what I would say about 2030, which does seem incredible is only five years away, is that we’re going to see more specialized practice areas. We’re going to see a lot more smaller, more nimble legal services groups pop up. We’re going to see a lot more solo practitioners who can now afford to run a practice.
They won’t have as much overhead with billing and paperwork and office support; all of those things can be automated by AI. And that’s going to be good for the general person, the general consumer of legal services. They’re going to have more options for seeking help, and the cost of those options is going to be much cheaper. And for us, I think we have to think more about neighborhood offices, smaller practices, more specialized practices. We have to move away from the general model of the housing attorney and maybe focus on the rent overcharge attorney, or the repair attorney, or the holdover no-lease attorney. We can really do that going forward, and that’s going to help each of those people, who will develop their skills more deeply and become experts, not generalists. I think that’s the model we have to embrace: a field in which there are more options for work for people in our field, and more opportunities for people who seek legal services to get that help, because it’ll be more widely available, more specialized, and cheaper.
Cat Moon:
Sounds like a win-win. Lisa, what is your vision for 2030?

Lisa Colpoys:
AI is changing so quickly that it’s really hard for me to think about what it’s going to look like in five years. So I did what I usually do when I want to brainstorm, and I asked ChatGPT this question. It gave me 10 things; I’ll share the first three. The first thing it said was that we will have ubiquitous digital legal assistance. So think AI-powered chatbots, or integration with all the everyday platforms we’re using, where people can access help. The second thing was enhanced self-help resources, so interactive tools and even personal knowledge bases. I like that. And the third thing was seamless access to lawyers: on-demand consultations and even subscription models. So these are some of the things that ChatGPT thought would come around by 2030. I think they sound pretty good.

Cat Moon:
Well, the cool thing is that really anything is possible right now. It’s really up to us: what can we imagine and what can we create? And I believe the folks in this room are the best situated and equipped to come up with these things, and I think we would love to hear from you all. Does anyone have any questions? Anything you would like to share?
Audience Member:
Hi there. Thank you so much. I want to be very optimistic about the mental health revolution, that this is going to free up attorneys from the tedious work that AI can assist with. But I’m wondering if there’s a possibility of major changes in the attitudes of the people who are providing funding, whether it’s federal or state or private. Funders might say, well, why do you still need the same amount of money that you were getting five years ago? Why do you need to hire 10 attorneys? Why can’t you hire two attorneys and have ChatGPT do the rest of your work? Do you foresee the possibility of that particular negative consequence?
Sateesh Nori:
Yeah, that is a very good question. Why would our funders continue to fund us in the same way if they too see this technology as changing the way that we work? Perhaps they’ll demand that we take less money and do more with less. That’s always the challenge with some funders: they keep asking us to do more with less every year. And the answer, I think, is that we have to be ahead of that question and say, here is how we are going to spend the money that you’re giving us. We may still need 10 attorneys, but remember, there’s a huge gap in access to justice, and 10 attorneys can now do twice as much work. We don’t need fewer attorneys; we need to equip those attorneys to do more. We can also think about the ways we use funding, and seek additional types of funding.
There will be new jobs that will be required. For example, if you have a RAG bot, a bot built on retrieval-augmented generation over a particular knowledge base, you’re going to need people to maintain that knowledge base, update the information, and go out into the community to find out what the needs are, so that the knowledge base is accurate and up to date. These are new roles that we’re going to need. It’s a very good question, and I don’t have all the answers, but I think we have to be ahead of it. We have to think about the ways funding is going to be used and the new sources of funding that can be demanded to meet that gap. I don’t think there is a strong argument that we need less money, given that 92% of civil legal problems are going unaddressed. The argument should be that not only do we need more money, we’re going to need new types of grants that will support this new world of work that we’re going to be doing. And we’re going to be using that money better. So keep it coming.
Lisa Colpoys:
And speaking of funding, I think we need to give a shout-out to our IOLTA programs. The Lawyers Trust Fund in Illinois and the IOLA program in New York are both funding the work that Sateesh and I are doing. They’re really looking forward at how they can support legal aid organizations on this journey.
Cat Moon:
I will add, in response to the question posed, that this goes back to a point I was trying to make earlier: we are still in a period of time where our choices now can affect outcomes like that. I think that’s another reason why time really is of the essence, for us to get ahead of things. Then we are well positioned to shape new forms of funding, how funding is deployed, and how grants are managed, so that it fits with what this technology empowers us to do to help more people. It’s an incredible opportunity right now. More questions?

Audience Member:
You said something about how we have a lot of opportunities right now, that we’ll never have this many opportunities to shape things. Can you talk a little bit more about what you mean by that?