AFUA BRUCE is a leading public interest technologist who works at the intersection of technology, policy, and...
Kevin De Liban is the Founder of TechTonic Justice, which fights alongside low-income people left behind by...
Keith Porcaro is the Reuben Everett Senior Lecturing Fellow at Duke Law School, where he teaches classes...
Jason is a consultant on access to justice and technology issues at the World Bank; the editor...
Published: April 8, 2025
Podcast: Talk Justice, An LSC Podcast
Category: Access to Justice, Legal Technology
Legal and tech experts come together on Talk Justice to discuss what generative AI is, how the growth of algorithmic decision-making can harm low-income people, and the need for people and institutions implementing AI to develop robust methodologies.
Afua Bruce:
I think that what is important to remember as we talk about a greater use and greater adoption of AI tools is that AI is ultimately still designed by people. Even though AI learns patterns from data, the data is provided by people.
Jason Tashea:
Equal access to justice is a core American value. In each episode of Talk Justice, an LSC Podcast, we’ll explore ways to expand access to justice and illustrate why it is important to the legal community, business, government, and the general public. Talk Justice is sponsored by the Leaders Council of the Legal Services Corporation. Hello and welcome to Talk Justice. I’m your host, Jason Tashea. We hear a lot of promises from AI developers. It’s going to pioneer medical breakthroughs. It’s going to make government more efficient, and it’s going to cure the loneliness epidemic. Inherent to these promises, at least in part, is hype: a lot of excitement that has yet to be fully checked by reality. In some ways, our current AI moment feels a lot like what we heard about blockchain six or seven years ago. Today I’m told AI is going to be different, but it often feels like a technology wanting to be implemented today that promises to be better tomorrow.
With that tension in mind, I’ve brought three experts together to help me cut through the hype and get to the core of AI: what it is, what its limitations are, and, if AI is indeed our fated future, what we can be doing to make sure it works to the benefit of all. To accomplish all of this in the next 30 minutes, I’m joined by Afua Bruce, a computer engineer and founder of ANB Advisory Group. Her recent book is The Tech That Comes Next. Kevin De Liban is the founder of TechTonic Justice and formerly a legal aid attorney at Legal Aid of Arkansas. And Keith Porcaro is a senior lecturing fellow at Duke Law. Thank you all for being with us. Afua, I wanted to start with you, and I wanted to start very broadly, to get a sense of just what’s the difference between a fast, smart computer and artificial intelligence.
Afua Bruce:
This is a great question, and thank you both for asking it and for having me on the podcast today. I’m so happy to join you. The question of what’s the difference between AI and smart computers and fast computers is really important for understanding how we interact with some of the tools that are being rolled out. Smart computers, fast computers and AI systems have a lot of things in common. They both have a lot of computational power. They both have a lot of training data, or a lot of data that’s there. They both have sophisticated algorithms that enable learning from the data and making decisions based on the data. What makes AI, especially generative AI, different is that rather than reacting to and following a very defined set of instructions, essentially a set of if-then statements, artificial intelligence systems can go beyond executing instructions quickly.
AI can learn patterns from data without explicitly being told what those patterns are. So when you feed in a lot of what’s called training data, which is the internet in the case of the large tools that are out there such as ChatGPT or Claude or Copilot, whether it is the documents themselves that you give it, whether it is your own text that you give it, AI can learn from all of that. And based on patterns that it itself identifies in that data, these generative AI systems can predict the next word, can predict the response, based on what it itself has learned from all of that information. So AI systems are recognizing patterns that humans haven’t really explicitly told it are there in that data.
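To make that next-word prediction idea concrete, here is a deliberately tiny sketch of a program that learns which word tends to follow which from a small training text, rather than following hand-written rules. The toy corpus and the counting approach are illustrative assumptions; systems like ChatGPT use neural networks trained on vastly larger datasets, but the underlying idea of predicting what comes next from learned patterns is the same.

```python
# A toy "next-word predictor": learn word-follows-word patterns from a tiny corpus,
# then predict the most likely next word. Illustrative only; real generative AI
# models learn far richer patterns with neural networks and enormous training sets.
from collections import Counter, defaultdict

training_text = (
    "equal access to justice is a core value "
    "access to justice requires access to counsel "
    "access to counsel expands access to justice"
)

# Learn the pattern: for each word, count which words follow it in the training data.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the training data."""
    candidates = follows.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("access"))  # -> "to": learned from the data, never hard-coded
print(predict_next("to"))      # -> "justice": the most frequent follower of "to"
```

Nothing in the code spells out the rules; the "rules" are whatever patterns the counts happen to capture, which is the distinction being drawn between generative AI and a fixed set of if-then instructions.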
Jason Tashea:
Is this difference between AI and the super-fast computers of yore a genuine break from what we were doing previously? I’m trying to understand, with the change that we’re seeing, how much of it is buzz and hype and how much of it is a genuine historic step forward.
Afua Bruce:
It is a genuine evolution, can I say that, in that supercomputers and fast computing are a necessary building block for generative AI systems. Without the way those systems run, without the programming developments that we have and take advantage of on fast computers, on advanced decision-making processes and systems, we couldn’t have generative AI. Generative AI is different in that it does a lot more prediction and generation than your traditional computing allows.
Jason Tashea:
I like that answer a lot. It definitely is an evolution, but the way the media covered it with the release of ChatGPT 3.5 in 2022, it just felt like this came out of whole cloth, right? And that’s the way that things were covered.
Afua Bruce:
Right? And even AI itself, AI as a discipline, has been around for 50 years. Generative AI, again, is an evolution, but AI itself has also been around for a while.
Jason Tashea:
And so Kevin, I want to build on what Afua was telling us with this evolution that she’s talking about. I mentioned ChatGPT, which I think is when generative AI really came into the public consciousness, and then all of a sudden people’s imaginations, venture capital money and everybody else went into overdrive when it came to this technology. And if this is properly an evolution, as Afua was saying, then I’m curious from your perspective, how are you seeing this evolution put new pressure on the policy people and organizations that you work with?
Kevin De Liban:
So what’s interesting is, for all the evolution that Afua referenced, the changes for our client communities, low-income Americans, aren’t as significant as you might think, given an evolution in technology, in that states, landlords, and employers have for a long time been employing algorithmic decision-making tools to make decisions about our clients’ lives. And these problems have manifested now for a couple decades, and even longer depending on how far back you want to go. But the evolution in technology has also brought about an evolution in the organization of societal resources, and a legitimacy to this sort of automated decision-making, that is intensifying the use of these technologies for basically unjust means. If you’re low income, any introduction of AI or algorithmic decision-making is bad news for you. And that’s on every kind of aspect that we know of, whether it’s public benefit systems and government services, whether it’s your ability to get and keep a job under fair conditions, whether it’s your ability to get and keep housing, or even having your kids have a fair opportunity at a good K-12 education. All of that has been shifted through both the technological evolution and, even more, the societal evolution that’s come about because of it.
Jason Tashea:
And so Keith, to bring you into the conversation: this idea of evolution, of these programs getting better over time, is one version of the continuum. The other version of the continuum is that there are different types of AI. There’s some that we might not even notice anymore because they’ve become so embedded in the background of the things we use. What Kevin’s talking about is automated decision-making, but of course we also see AI in spam filters and autocorrect on our text message apps. How do you differentiate these different types of AI? When you’re thinking about the subject, how do you break these pieces out?
Keith Porcaro:
That’s a great question. At a really crude level, you might describe these machine learning and AI systems as trying to figure out what input factors contribute to some outcome; that’s one way, at a very high level, to think about machine learning. And so how we might distinguish a spam filter from something like ChatGPT, you might do it on three axes. One is the strategy or goal: from a technical perspective, what is this tool trying to accomplish, what’s the outcome it’s trying to produce? The second is the training data. So for a spam filter, the training data might be lots and lots and lots of email examples; for ChatGPT, as Afua pointed out, it’s the whole internet. And then the last one is the application or the use. And you might think about this in two different ways.
One is that these AI systems, and ChatGPT is sort of the exception that proves the rule here, are rarely as front and center as they are in a chatbot. Typically they’re invisible, or they’re deployed in the context of some bigger product. So your spam filter is built into your email provider. And then the second one, and I think this cuts to what Kevin was saying especially, is that it’s the social context that these tools are being deployed in, and that makes a big difference even if the underlying AI system is the same. One example that was in the news recently was UnitedHealthcare took an algorithm that was used to estimate the severity of mental illness and moved it from a triage tool to a tool for denying coverage. So it’s the same algorithm, but it’s being deployed in a different context. And as Kevin is pointing out, that tends to have a very different set of effects for the people who it’s being pointed at.
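The first two axes, the goal and the training data, show up directly in even the simplest spam filter. Below is a minimal, hypothetical sketch using scikit-learn and a made-up four-email corpus; it is not anyone’s production filter, just an illustration of a narrow goal learned from labeled examples.

```python
# A minimal spam-filter sketch: the goal is a narrow spam / not-spam decision,
# and the training data is a handful of labeled example emails (hypothetical).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now",         # spam
    "claim your free vacation",     # spam
    "meeting agenda for tomorrow",  # not spam
    "court hearing rescheduled",    # not spam
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)  # turn each email into word counts

classifier = MultinomialNB()
classifier.fit(features, labels)  # learn which words are associated with spam

new_email = ["free prize for your hearing"]
print(classifier.predict(vectorizer.transform(new_email)))  # likely [1], i.e. spam

# The third axis, the application, never appears in the code: the same kind of
# scoring model carries very different stakes filtering a mailbox than it does
# triaging, or denying, someone's health coverage.
```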
Jason Tashea:
As I was putting this show together, the goal in my head was that I really wanted to push past the advertising fluff around artificial intelligence and really get at the shape and the limitations of the technology as it is now, as well as the challenges that both the people deploying it and the people being impacted by it are going to face. And this morning a newsletter from Arizona State came into my inbox about the future of AI, as I think every newsletter is required to do these days. The interesting thing it said in its header was that to get AI systems into more places, we as people need to trust AI systems. This idea of trustworthy AI has been going around for a while, but what struck me about the newsletter I was reading this morning was that it puts the onus on people to become more trusting of software, and not on software to become better. So before we jump into an in-the-weeds discussion around the limitations and challenges of this technology, I want to ask each of you, and Afua, I’ll start with you, to flip the yardstick on this: are these systems deserving of human trust at this time?
Afua Bruce:
I think all technology is interesting in that it can do so much, both good and bad, right? All of the different technology systems that we have. I mean, think of your phone. Depending on the manufacturer, you are probably getting updates weekly, monthly, sometimes it feels like daily or hourly. And that is because the software behind it is not perfect, and people are constantly fixing bugs around that. We have decided to accept that level of uncertainty and that level of imperfection in that type of software, in how we engage those tools and how we use phones. I think AI tools are similar in that they are not perfect. They make mistakes; they hallucinate, as the technical term goes. AI systems are heavily dependent on the training data, which, depending on the source, can be pristine and perfect, which is rarely if ever the case, can be incredibly biased, or can be somewhere in between.
I think that what is important to remember as we talk about a greater use and greater adoption of AI tools is that AI is ultimately still designed by people. Even though AI learns patterns from data, the data is provided by people, and the parameters by which the AI system can learn are also provided by people. The ways AI tools are governed, and the ways people use them in decision making, as some of my co-guests here today have already articulated, are also decided by people. And so I put the onus more on the designers and the people making decisions about how and when AI tools are being used, rather than on individual end users.
Jason Tashea:
Keith, what about you? Are these systems deserving of our human trust?
Keith Porcaro:
I would echo what Afua has said, in that in some ways this is the wrong question. A danger of these AI systems is that they’re promising capability, but the harm that actually is visited upon people comes when those services or those products are deployed in ways that impact ordinary people, whether it’s to make decisions about what care somebody is getting or what benefits somebody is getting. And so to me, the question at the heart of this then becomes: what do people who are using AI, who are in these positions of power and these positions of influence and these positions of care, owe the people who they have power over or the people who they’re providing care to? And I don’t think that AI fundamentally changes those obligations. What it does do, perhaps, is change who feels like they’re able to provide those services, who’s able to hold themselves out as having legal expertise or health expertise or to make adjudications about benefits. And people may be holding themselves out as being able to provide those critical services without adopting the obligations of a lawyer or a doctor or a care provider or a government.
Jason Tashea:
I like that. Kevin, what about you? In my attempt at a litmus test question to kick off this conversation, where do you fall?
Kevin De Liban:
Well, it hearkens back to something you said earlier, which is cutting through the hype. Even the notion that you’re framing this in terms of trust is one aspect of the hype. What this is, is risk. You’re not asking people to be more trusting; you’re asking them to accept more risk of all the harms that are attendant with AI, whether that is AI that works properly to cut benefits or restrict opportunities, or AI that is broken in some way and ends up producing outcomes that it wasn’t designed to produce. And you’re shifting the burden of risk to the people who oftentimes are the least able to bear it and who face the most immediate consequences when it is harmful. So I think one aspect of the framing that doesn’t get interrogated enough is what’s happening when somebody asks you to do these things, to put trust in the system or to allow things to work out the kinks or anything else: meanwhile, there are real human consequences that are intolerable. And maybe we’ll talk about this later, but because there are broken accountability mechanisms around this technology, especially when it comes to using the technology to make decisions, there aren’t sufficient legal, market or political accountability mechanisms to really dissuade bad stuff from happening.
And if we know bad stuff is going to happen, then trust is definitely not warranted. If anything, increased skepticism and the burden of proving that your product is worthy of being used should be shifted back to whoever’s trying to sell it or use it.
Jason Tashea:
Go ahead, Keith, please.
Keith Porcaro:
One of the things that is interesting about studies of trust in tech is that the trust curve that people have with tech is very different from the curve that people have when it comes to trusting another person. When it comes to trusting another person, your trust with a stranger might start out low and slowly build up over time, and it’s going to be modulated depending on the experiences that you have with this person and the faith that you put in them. With something like tech, and a lot of the research on this is still relatively early, one way to think about it is that if the technology looks modern, if it looks slick, the trust often starts out very high, but it’s much more brittle. So as soon as something looks off or looks wrong, people will grow frustrated and they’ll abandon it. And so to Kevin’s point, I think there’s a real risk in this hype cycle: we’re going to get a lot of hype, you get a lot of people who are impressed by this slick new technology, and then there’s maybe a cliff coming, where people realize that it doesn’t do everything that they want, or they’re relying on it in a way that lets them down. That might be difficult to recover from, both for the tech and for the institutions that are deploying that tech.
Jason Tashea:
That’s interesting, and I think it kind of leads into one of the things I was really excited about talking about today. Afua, the other three of us on this call all have JDs; our brains got broken in a particular way. And so it’s always exciting to get somebody from the not-that-broken world to come in and tell us what’s actually going on. So I was excited about this angle of talking about the limitations on the technical side. From my personal experience, for every 10 articles I see about the major promises that come out of AI, there’s one article that says, wait a second, let’s pump the brakes, there’s a lot of technical stuff that has to prove true for this technology to really hit all of these high benchmarks that people say it can meet. And specifically I’m talking about large language models, the models that underpin things like ChatGPT, really where the consumer cutting edge of artificial intelligence is right now.
And from what I understand, from a technical perspective, there are three major factors that all have to prove out for these tools to really sing, and I wanted to get your insights on whether or not these are limitations and where we’re at in the process of these things being proven out. So from what I understand, and please feel free to correct my lawyer understanding of large language models: one is data. You need a ton of data; it’s the fuel that makes these engines run, and that’s both an issue of quantity and quality, as you pointed out in your first answer. Second is compute. Just the ability to run these systems not only takes a lot of chips, but it takes a lot of energy to do so. And then last is simply scaling these models up, the idea that you can put more and more requirements on a large language model and it will just get bigger and better as it goes. And so I’m curious, if we get more energy, more data, bigger models, do we just get better AI, or are there limitations that we should be considering when it comes to the technical development of these systems?
Afua Bruce:
Yes, both things are true. All things are true. So there are limitations when it comes to large language models and generative AI generally, in how it is built and how it works. Large language models, especially generative AI, do require a lot of energy to run. They require a lot of computing power to run, and to be effective they require access to the right computing environments and the right computer systems. Similar things could also be said for traditional computing systems, but it’s certainly accelerated when we talk about artificial intelligence. I think that some of the things that are interesting to think about, though, are the idea of more specialized large language models, which you see in some instances. So if you want to build a large language model about a particular piece of the law, for example, or a particular use case around helping smallholder farmers in several East African countries, for example, make more informed decisions about when to plant, what disease management looks like, and more,
you might not need something trained on the entire internet and everything available, but a smaller subset of that. And so building some smaller, more targeted large language models is an option. I think also we have seen different cases around the world of people saying, what might it look like to build differently with some of these different constraints? And so even China’s DeepSeek, which released earlier this year, used less computing power and slower chips. So I think it is possible, if we change some of our design constraints, to develop different ways of creating artificial intelligence systems. That said, the overwhelming majority of the ways that people are creating them are very power intensive and very reliant on large amounts of data and large amounts of power, which does make it a bit less accessible for a wide variety of people to play and develop tools in this space.
Jason Tashea:
That’s interesting. And one of the things you talked about was this idea of specialized LLMs, ones that are targeted at a specific topic. Keith, I wanted to dig a little deeper then on the data issue. If we were talking to legal aid folks, who are the people that listen to this podcast, they’re probably interested in legal LLMs and how they could be applied to either their clients or their work. And I’m curious what you’re seeing, because in the legal space, from my work, I see legal data locked up in two different ways. One is that it’s in government systems that are messy, they’re disorganized, you can’t pull the data out very easily for a number of different reasons. Or it’s legal data that’s held primarily by a duopoly of private vendors, your LexisNexis and your Westlaw. And I’m curious if this is a drag on building specialized LLMs in the space, and if so, what the implications of that are.
Keith Porcaro:
I think right now the bigger drag on building these specialized LLMs is frankly money and access to these chips. You’re seeing a few examples of legal-industry-targeted AI products where they’re targeting two things. One is fine-tuning, where you’re able to collect a smaller amount of data to guide a foundation model toward a more specific application. So it still requires a lot of data, but it doesn’t require an enormous amount of data the way the initial training run does. And the second is you’re starting to see custom embeddings, sort of another layer on top of a large language model to help it interpret text. And that’s the approach that something like, say, Harvey has taken, in addition to getting premier or first-in-line access to OpenAI’s foundation models. So I think those are the big two obstacles.
Certainly there are all kinds of useful applications for unlocking data about the law, but the scale of data that we’re talking about for large language models still well outstrips the scale of data that’s locked behind these paywalls. Just to give one example, the entire US Code is about 22 million words. The training corpus for GPT-3 was about a trillion words. So even if we’re saying, okay, we need all this legal data, we need some access here, we’re still going to be relying a lot on the open internet. The last thing that I’ll say on this is that when it comes to building AI products that have legal utility, what you’re seeing in law, but also in other fields, is that it’s not enough just to have the large language model. You usually have to have some other data models that the large language model can interact with. Shepardization would be the lawyer example of this: having some data model that looks at how cases have an effect on each other, where it might not be obvious to pull that out of an LLM, but having those two things work together makes for something that’s really powerful. And that’s something where, with the duopoly you pointed to, that type of information is still locked up, or it’s still proprietary, or it’s still in books somewhere, and that’s a little bit harder to get.
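To give a flavor of the custom-embeddings layer described here, below is a rough retrieval sketch: embed a small set of trusted documents, then find the closest match to a user’s question so that a language model (not shown) can answer from that text rather than from the open internet. The library, model name, and sample documents (sentence-transformers, all-MiniLM-L6-v2, three made-up legal-aid topics) are assumptions for illustration, not a description of Harvey’s or any vendor’s actual stack.

```python
# Retrieval over custom embeddings: a sketch, not a production system.
from sentence_transformers import SentenceTransformer, util

documents = [
    "How to respond to an eviction notice in North Carolina.",
    "Appealing a denial of Medicaid home and community-based services.",
    "Sealing or expunging an old criminal record.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model
doc_vectors = model.encode(documents, convert_to_tensor=True)

question = "My landlord just handed me papers saying I have to move out."
question_vector = model.encode(question, convert_to_tensor=True)

# Cosine similarity between the question and each document; keep the best match.
scores = util.cos_sim(question_vector, doc_vectors)[0]
best_match = documents[int(scores.argmax())]
print(best_match)  # expected: the eviction-notice document

# In a fuller pipeline, best_match (plus, say, a citator-style data model of how
# cases relate to each other) would be handed to an LLM as context for an answer.
```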
Jason Tashea:
Okay, so a handful of different limitations, as you see it, in regards to more specialized LLMs in the legal space. Regardless of where those systems are going, though, Kevin, your work has run headlong into these systems already being deployed in the world. Your work now is based on your experience as a legal aid attorney in Arkansas coming into contact with the zeitgeist. So I was curious if you could tell us a little bit about what your case was about and where that failure was, as you saw it.
Kevin De Liban:
Sure. We’ll go back to 2016, when even the word algorithm wasn’t well known popularly and had to be explained regularly by journalists covering this, for example. At that point, I was about four and a half years into my legal aid career, and I worked significantly on public benefits issues. One of those programs, of course, is Medicaid. And in January of 2016, myself and my colleagues got around two dozen, thirty-plus calls from clients with an issue that, in the previous four or five years, we had gotten a total of maybe a dozen or two dozen calls on. And what it was is clients on a home and community-based services program. This is where Medicaid pays for a caregiver to come into your home to help you do care tasks that otherwise you would have to have done for you or with you in a nursing facility or other institution.
People were calling because their care was getting drastically cut. The same nurse from the state agency that came out last year would come out, they’d ask you the same questions, your condition probably hadn’t gotten any better, particularly if you had cerebral palsy or quadriplegia or other similar conditions. And yet at the end of that interview, your hours were getting cut drastically, from the maximum in Arkansas of eight hours a day, it’s higher in many other states, but that’s Arkansas’s maximum, to something like four or five hours a day of care. And what this meant was devastating human suffering: people lying in their own waste, people getting bedsores from not being turned, people being totally shut out of community and missing physical therapy appointments that kept them able to move the limbs that they still had control over, just horrific human suffering. And there was no explanation for why; the state hadn’t announced anything.
And of course my clients were all concerned, like, wait, nothing’s gotten better, what’s happening to me? So they would ask the nurse, why are you cutting my care? And the state nurse would say, and again, we had dozens of iterations of these conversations, it’s not me, it’s the computer. And we heard enough of this that we thought something was funny, and we started investigating. And sure enough, the state had implemented an algorithmic decision-making tool to decide how much care to actually give folks. And the best-case scenario was these drastic cuts. And so that led us to a three-year campaign of federal and state court litigation, and a ton of community education and outreach. And ultimately we succeeded. We succeeded in court, and the legislature, thanks to the advocacy of our client communities, their families and other folks involved, was convinced that this was too cruel and abandoned the system for another system that had a separate set of problems we also had to deal with. But it was somewhat different than this first system.
Jason Tashea:
You took the litigation path on that particular issue. Now, today, you run this group called TechTonic Justice. And so moving into the third act of today’s show and talking about, well, what can we do? We’ve talked about the limitations and some of the harms. You focus now on training legal aid attorneys in the space. What do you focus on in these trainings? Where do you find the biggest knowledge gaps lie?
Kevin De Liban:
Yeah, I mean, one of the ways that AI operates, by its nature, is in secretive ways. It’s not obvious that it’s being used, and even when you know it’s being used, it’s not at all obvious how it’s working. And so training folks to identify where it’s showing up is a huge first step in pushing the fight forward. TechTonic Justice exists to fight the ground-level harms that AI is causing low-income communities. So all of our legal aid communities are going to be facing this in all sorts of issue areas. My particular expertise is in public benefits, but I’ve seen this also in employment, in housing, in child welfare, in education settings. And so we’re training people to identify it, training people how to fight it, and then also providing technical assistance and strategic assistance, so that we can help with legal theories, we can talk about what public education looks like, we can talk about community activation, certainly within the constraints of any permissible activities under federal regulations. We do all this to help people get ready to advocate, to fight against this, because it is bringing harms to the communities we serve at scales that simply were impossible with analog, pre-AI methods.
Jason Tashea:
And so to complement that, Keith, one of the things that you do at Duke is you work with students to do audits of these types of systems, including, I think, a current one with Legal Aid of North Carolina. I don’t think that audit is complete, but I’m hoping you can tell us about the work that you’re doing here, the questions that you’re trying to ask, the way that you’re poking and prodding at these systems, and what you hope to accomplish.
Keith Porcaro:
Yeah, so in some ways a lot of my research and work focuses on, not quite the mirror image of what Kevin is doing, but a little bit of a different angle, where there are organizations who are trying to use AI systems to deliver more customized legal information, or to deliver care, or to deliver things that look or seem like advice. And at a basic level, my feeling is I have no idea if any of it is going to work, but I think it’s really important that we, both as a field and as professionals with responsibility to our clients, develop really robust methodologies to figure out if any of this is actually working. And so we are doing right now an audit of a tool that Legal Aid of North Carolina has deployed to deliver legal information to users. And what we’re trying to do is really stand on a lot of the really great work that’s been done thinking about algorithmic auditing beyond the legal field, where we’re not just looking at does this tool produce correct answers or does it hallucinate.
Those are important, and we’re looking at that, but we’re also looking at the entire decision to deploy a tool like this in the context of the other resources and the other work that Legal Aid of North Carolina is doing or providing. Because I think there’s a temptation with tools like this and with fancy AI systems to say, okay, well, we should jump straight to that, and we’ll skip all of these other steps. But whether it’s a court or a government or a legal aid organization, they often have a lot of work that’s still necessary to do in terms of improving their information architecture, or making their resources more accessible, or thinking about what are the actual values that you need an algorithmic tool to embody, and how can you make sure that the tool or the decision-making system, whether it’s on paper or in an AI system, is actually embodying those values. So we’re trying to help Legal Aid of North Carolina do both and figure out both. We’re probably a month or two away from bringing out our first audit, but our hope is to do that and also help the legal field think about a methodology for doing this type of work in the future.
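As one narrow, hypothetical illustration of the does-it-produce-correct-answers slice of such an audit, the sketch below checks a tool’s answers against attorney-written criteria. The ask_tool function, the test question, and the pass/fail keywords are all invented placeholders, and, as noted above, a real audit also has to examine the decision to deploy the tool at all, not just its answers.

```python
# A bare-bones answer-quality check for an AI legal-information tool (hypothetical).
from dataclasses import dataclass

@dataclass
class TestCase:
    question: str
    must_mention: list       # facts attorneys say a correct answer must include
    must_not_mention: list   # known wrong statements to flag as hallucinations

def ask_tool(question: str) -> str:
    """Placeholder for whatever chatbot or system is being audited."""
    raise NotImplementedError("wire this up to the tool under review")

def run_audit(cases):
    results = []
    for case in cases:
        answer = ask_tool(case.question).lower()
        results.append({
            "question": case.question,
            "covers_required_facts": all(k.lower() in answer for k in case.must_mention),
            "flags_wrong_content": any(k.lower() in answer for k in case.must_not_mention),
        })
    return results

cases = [
    TestCase(
        question="How long do I have to answer an eviction complaint?",
        must_mention=["days", "court"],
        must_not_mention=["there is no deadline"],  # invented example of a wrong answer
    ),
]
# run_audit(cases) would be called once ask_tool points at the real system.
```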
Jason Tashea:
So it sounds like a publication is forthcoming because I’m sure people listening to this would be interested in both your methodology and what you find.
Keith Porcaro:
Yes. Our hope is to have some public version of this come out over the summer. I don’t think that we’re going to find anything that’s so scandalous that nobody will want it to be published, but maybe if I put it into the universe like that, then it will definitely be out in the summer.
Jason Tashea:
Okay. Well, now we can hold you to it. I think implicit in something Keith was just saying is that AI has an obfuscating power. I think technology often has an obfuscating power over less sexy, less glossy, proven solutions that we already know work, because we ran the randomized controlled trials and we know the thing does the thing, and we don’t need to automate it. And I’ll let anyone jump in on this one, because I think you all probably have a perspective on it, but how do you think about helping organizations not get distracted by the shiny thing in favor of a better, less shiny solution?
Afua Bruce:
When I talk to organizations, and I work with a number of social impact organizations, I always ask people to do a couple of things. One is to remember their mission. It sounds very elementary when we’re having a conversation here, but it’s not surprising to me anymore at this point to be in a room with people who are saying, we have big strategic goals, we have talked to our donors, we have heard from our board, we have heard from the vendors about where we need to go, and this is so great. I like to just say, what is the goal? Why do you exist and what are you trying to do? So really getting back to what is the goal. The second thing I tell people to keep in mind is that you have to design with the end in mind. A cool, flashy AI tool, or frankly any tech tool, can look really pretty on the shelf.
They can look really great in that demo. But do you have the money to acquire it? Do you have the money and time to train people on how to actually use it? And do you have the money and time to update it every time it is updated, and to make sure people can continue to get trained on it? So thinking about those two things. And then the third piece is really, what are your stakeholder conversations going to look like? Certainly your staff, if you’re at a legal clinic, but also the people who are coming into your office. If they’re aware that you’re using AI tools, even if they aren’t, what does that communication look like? What does it mean to be transparent, based on your organization’s values? What does it mean to absorb risk? Because many of these tools are great, and even though they’re sold in a way that says, this is a wonderful tool that works, if it doesn’t work, it is often you, the user, the organization, who is then liable for making and implementing decisions based on a hallucination. So what does that look like? How are you having those conversations with your stakeholders, with your communities, both internal and external to your organization? So: why are you here? Can you actually fund it, and can your organization absorb some of the organizational training around this? And then how are you engaging with your internal and external community? That is what I talk to people about in making decisions about when and if to use AI.
Jason Tashea:
I think that’s a great answer, and it hits on a bunch of things, so I want to drill down maybe a little bit with you. One of the things that you help people with is you work with funders; you try to help them improve their processes to be able to improve the space they’re operating in. The host of this very podcast, the Legal Services Corporation, is the largest funder of legal aid technology projects in the United States. So I’m curious: how can application processes, evaluation rubrics, and even the communication framing around the funding of these technologies be improved, so that we get further away from some of the worst implications that we talked about in the last half hour?
Afua Bruce:
So I’d say the first piece is education. I think that many times, and again, I’m an engineer by training, we try to be nice people, we sometimes are. And I think what that can translate to when it comes to technology is a perception that technology decisions should be left to technologists, and that is not actually practical for living in the world. You can’t just go down to a river of technology and scoop up the perfect algorithm and then sprinkle it over your legal field, to extend this metaphor that I’m recognizing is quickly falling apart.

Jason Tashea:

That’s great. That’s great. Thank you.

Afua Bruce:

And so I think it is really important for funders to create spaces and to allow funding that allows for technologists and non-technologists to have the substantive conversations about why are we here, what is this actually going to look like, and how are decisions going to be made based on that. I think that’s really important. The second thing that I think funders can do, when they fund a number of organizations that work in similar spaces, is fund independent projects for some of that evaluation, and those best cases for some of those use cases as well, because it is a lot to ask every legal clinic to do its own independent evaluation. I’m not a lawyer; I imagine you all are quite busy, and that is also not your skill set. And so asking every legal clinic to do its own evaluation seems rather futile. So if funders who tend to fund in the same space can start to fund some of these evaluation criteria, can fund some of the auditing that I know Keith and others are doing, and can fund some of the creation of use cases, both what worked well and what didn’t work, I think those two things are also really great ways for funders to help improve the overall space.
Jason Tashea:
And I know LSC is doing some of those things, but, as someone who works in court technology, I do really love this idea of pooling resources to do that back-of-house work, to do the evaluations that benefit everybody. If something’s going to work in Arkansas, then it’s probably going to work in Oregon, even if implementation may look somewhat different. All three of you, this was a great conversation, and I appreciate you sharing just a fraction of all of the work that you’re doing in this space. I’m sure that our listeners are going to be curious about how they can learn more and maybe keep this conversation going with you. So before I end today’s show, I want to give you all an opportunity to share the best way to keep in contact with you, to learn more about your work, or even to find your book. Afua, let’s start with you.
Afua Bruce:
Sure. My name is Afua Bruce, and you can find me on LinkedIn; please reach out or contact me directly. The website is anbadvisory.com, and the book, The Tech That Comes Next, is sold at whatever local bookstore you would like.
Jason Tashea:
And I should also note that Afua gave the keynote at the LSC ITC conference this year, and that video is on YouTube if you want to hear more of her thoughts about change management as well as AI governance. Kevin, where can people get in contact with you?
Kevin De Liban:
Sure. You can reach me on LinkedIn. I’m also at TechTonic Justice’s website, which is techtonicjustice.org, that’s T-E-C-H-T-O-N-I-C. There’s a lot of great resources there, and there’s contact information for me. Please do reach out. I’m happy to talk with anybody who’s facing this stuff with their clients, or, if you want to take the lessons that we’ve learned from how it’s deployed against clients when you’re considering it for your own internal uses, I’m happy to talk about that as well.
Jason Tashea:
And Keith, for people hovering over the refresh button to find that audit later this summer, where would they be doing that?
Keith Porcaro:
You can find me on LinkedIn or Bluesky, if you want to see really infrequent posts about my work. And if you want to reach out directly, you can look me up on Duke’s website; I won’t try to spell my last name here. But if you are interested, especially if you’re a legal aid organization that is thinking about an algorithmic or an AI system and you’re interested in audits, we’re hoping to start doing this at least once a year, so please do reach out.
Jason Tashea:
Wonderful. Well, with that, I’d like to thank Afua, Keith and Kevin for being with us today on Talk Justice. For links to what we discussed today, check out our show notes. If you liked what you heard, please subscribe and leave us a review wherever you listen to podcasts. I’m Jason Tashea, and for everyone here at Talk Justice, thank you for listening. Podcast guest speakers’ views, thoughts and opinions are solely their own and do not necessarily represent the Legal Services Corporation’s views, thoughts, or opinions. The information and guidance discussed in this podcast are provided for informational purposes only and should not be construed as legal advice. You should not make decisions based on this podcast’s content without seeking legal or other professional advice.
Talk Justice, An LSC Podcast
Join us as we explore innovative ways to expand access to justice, bringing together legal experts, technologists, business leaders, community organizers and government officials for thoughtful conversations about ending the access-to-justice crisis.