Professor Todd A. Berger joined the College of Law faculty at Syracuse University College of Law in...
Leah is a 3L at Columbia Law School where she’s focused on death penalty abolition, holding the...
Chay Rodriguez is the IT Communications and Engagement Manager at a prominent entertainment company by day and...
Published: December 11, 2023
Podcast: ABA Law Student Podcast
Category: Early Career and Law School, Legal Technology
There’s no way to take a pass on tech competence. In your future as a lawyer, you have an ethical responsibility to understand and use technology in your practice, and today’s fastest growing tech is AI. Whether you’re an AI fan or perhaps a little scared of a robot takeover, this podcast will help you understand many of the latest AI trends and their impacts in the legal world.
Leah Haberman interviews Professor Orly Lobel, author of “The Equality Machine: Harnessing Tomorrow’s Technologies for a Brighter, More Inclusive Future,” to discuss AI, algorithms, current tools, and how to make sense of them all. There are, and always will be, positive and negative implications of AI use, and our goal should be to use it for good.
Orly Lobel is the Warren Distinguished Professor of Law at the University of San Diego, the founding director of the Center for Employment and Labor Policy (CELP), and the award-winning author of several books and numerous articles.
Todd Berger (00:01):
AI. We’ve heard so much about it in recent days. Maybe you’ve played with ChatGPT or an image generation tool like DALL-E or Midjourney, or maybe some apps you use are touting new features powered by AI. Honestly, in a lot of ways it’s pretty fun and exciting, but for practicing attorneys, it’s clear that this isn’t just some novelty tool. This is something that will very likely have major ramifications for our industry at all levels. Today, let’s explore the role of AI in society and our profession and how you as a new attorney can be best prepared to engage with it as you start your career. This is the ABA Law Student Podcast. Great to be back for another episode of the ABA Law Student Podcast. Today we have Chay Rodriguez and Leah Haberman both here on the podcast. Hi Chay. Hi Leah. How are you?
Leah Haberman (01:07):
Doing well.
Chay Rodriguez (01:07):
Doing good. How are you?
Todd Berger (01:09):
Great. So Leah, really important, interesting topic that’s been in the news, a lot of different areas and fields that it may impact. Really excited to hear about this topic today in the context of the law and legal education. Can you tell us a little bit about it?
Leah Haberman (01:26):
Yeah, so today we’re going to be talking about artificial intelligence and algorithms and all these things that personally I don’t really know anything about. And so I was like, this is probably a great topic, because apparently hiding from technology doesn’t make it go away. And one of the things I learned in my professional ethics course is that as lawyers you have to stay up to date with technologies. As a lawyer, you’re not allowed to say, actually, I will not use a computer. Part of what we owe to our clients is that we know what’s going on in the world. And so my thinking was, well, if artificial intelligence and all these things aren’t going anywhere, we should probably learn how to use them, because if we don’t learn it, we’ll just keep doubling down on how we’re already doing things. And to me that feels really scary, because a lot of my initial gut reactions to AI are negative. And so I wanted to have us talk about it: if it’s going to be here, how do we use it?
Todd Berger (02:18):
Chay, are you also potentially afraid of it? Like Leah?
Chay Rodriguez (02:21):
I don’t think I’m afraid of it, just because, on a very, very low level, I feel like AI is the evolution of MapQuest to the actual map on my phone getting me places. So I embrace it there. But I will say, funny enough, and before anyone listening judges me, you miss a hundred percent of the shots you don’t take. I threw out to one of my elective professors the possibility of a take-home exam, and he’s known for take-home exams, so I just knew I had it. And he was like, no, I already turned it in. It’s kind of hard with this now because of ChatGPT. And I’m like, ChatGPT? It literally never crossed my mind that it could be used for evil in that way. So I was highly upset. I was highly upset. But yeah, so AI so far: ChatGPT one, Chay zero.
Todd Berger (03:17):
Before we get into the interview, I’ll tell you I am very afraid of ChatGPT, in part because I don’t understand it, and I think lots of times we’re afraid of technology we don’t understand. But I had a student ask me a question in evidence class a couple of weeks ago, and of course it was about the admissibility of a particular piece of evidence and how we would determine that. And I answered the student and gave the best answer I could. I said, I think that the judge would determine that under Federal Rule of Evidence 104(a), which gives the judge the ability to make preliminary determinations about the admissibility of evidence when it concerns sort of legal questions. I was very proud of myself. And then the student said to me, well, that’s what ChatGPT said too. So I felt like now I have to compete with a supercomputer.
(04:05):
And so I remained somewhat afraid of it, but I think for me that raised a lot of questions about the role of ChatGPT in making legal decisions, not just in the trial advocacy space, but maybe in answering different questions or legal conundrums that are posed. Students are going to it. I imagine lawyers are going to go to it. So I remain afraid. I want to listen to your interview with the professor and see if I’m going to be less afraid of it as we move on. And I’m sure there’s a lot of great things to get into.
Leah Haberman (04:33):
Definitely. And I think a lot of our listeners who are skeptical or afraid, or maybe as optimistic as Chay is when it comes to AI, will still get something out of the interview with Professor Lobel, just because she is such an expert in this field. She’s currently a professor at the University of San Diego, where she focuses on employment law and discrimination in that space. But she has talked all over the world and all over the US about artificial intelligence, discrimination, economics, technology. Honestly, it’s hard to think about what she doesn’t talk about. But one of the reasons I was most excited to talk to her is because of the books that she has authored. She wrote Talent Wants to Be Free. She also wrote You Don’t Own Me, which for anyone who’s interested in Barbie right now is a great read. But then she also wrote The Equality Machine. That book is what led me to her: it’s about all the ways that AI can be used as an anti-discrimination tool. So I was really excited to talk to Professor Lobel, and I hope our listeners are excited to hear from her too.
Orly Lobel (05:39):
I actually think that we’re at a moment where there’s this democratization of the tools that are being presented to us, and precisely a tool like ChatGPT is one that really doesn’t require you to be a programmer. So the perspective that I bring in my research and my teaching and my recent book is that AI is here. Algorithms are changing and shaping our lives in really every aspect: the way that we work, the way that we play, the way that we take care of ourselves and others, the way that we date and find love, whatever, wherever you’re looking. And we really shouldn’t have this conversation of whether it’s good or bad, because the answer’s always going to be complex and it’s always going to be both. The conversation that we should be having is how do we harness it for good and to better sustain our own goals.
Leah Haberman (06:38):
I first heard about your book The Equality Machine in one of my classes in law school, Anti-Discrimination Law. The way this conversation was situated for us as students is that we have these two extremes: algorithms and technology just reflecting human bias because they are made by humans, so how could they be drastically that different? And on the other side of the equation, AI fixes all our problems because it takes out human bias. And so I’m hoping our conversation today shows what you’re saying, that we kind of have to play somewhere in the middle, and how we harness it as an anti-discrimination tool. And so maybe you can use an example in that context to illustrate to listeners: what are we talking about when we say an algorithm can either prevent discrimination or cause discrimination?
Orly Lobel (07:23):
Yeah, so absolutely, I think that’s the right positioning. As humans, we are so prone to cognitive biases, to identity biases. Even those of us who believe that we are bias-free, or who very much believe in equality and diversity and inclusion, it turns out that we have so many unconscious biases. And so for me that was really one of the motivations to study what’s happening in introducing algorithms and hiring apps that will sort through resumes and produce recommendations of who gets hired, or looking at pay gaps, racial and gender wage and salary gaps like you presented. There are better ways and there are worse ways to introduce an algorithm. So first of all, a very common flaw or failure of algorithms, one that I think was even more pervasive in the past and that computer scientists are now very aware of because we’ve been talking a lot about algorithmic bias, has been to feed the algorithm partial or skewed data.
(08:39):
Another way that it fails, in the hiring context again, and this was very much reported as one of the ways that a hiring app fails, is if you direct the algorithm to just replicate what we’ve done in the past. So if Amazon, for example, tells an AI, just hire people similar to those who’ve succeeded in the past, who got through the door, who got promoted, then yeah, it will reflect the past biases that we’ve had in our society. At the same time, there’ve been huge leaps in machine learning and in directing an algorithm to constantly also check for diversity and inclusion. And in hiring apps, the cutting-edge advancement has very much been that, first of all, as a programmer, you don’t direct the algorithm to just look at the past, but you actually tell it, look at undervalued talent, look at expanding the pool of applicants.
(09:46):
Try to figure out what creates more likelihood that applicants will even consider applying to a job, and use machine learning to sort that out. So for example, it turns out that even the way that you present a job ad, regardless of its content, just whether you draft it with bullet points versus in full prose, in full sentences, will make a difference in whether women are more or less likely to apply. Women are not aware of this on our own, but it turns out that they find it off-putting when a job description is just in bullet points. So these are things that machine learning really can help us see, things that the naked eye doesn’t see. And then the kind of next step is to constantly have a second algorithm that looks for disparities in what the outputs are in the sorting. So again, what I have been advocating for a while now, and I think this is happening with Fortune 500 companies, is constantly asking about the comparative advantage of these sorting algorithms rather than demanding that they be perfect. So sure, there are going to be some flaws, some failures, some disparities in the outputs of who gets hired or who gets promoted when you’re using an algorithm. But that of course is very true in the status quo, in human decision making. So what we need to constantly ask is whether the algorithm is outperforming our human minds.
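As an editorial aside, here is a minimal sketch of the auditing idea Professor Lobel describes: a second process that measures disparities in a sorting algorithm’s outputs and compares them against the human-decision baseline. The group labels, sample decisions, and ratio measure are all invented for illustration; this is not any vendor’s actual auditing system.

```python
# Hypothetical sketch: audit a sorting algorithm's outputs for disparity,
# then compare against the human-decision baseline (the comparative question
# raised above). All groups, data, and numbers here are illustrative only.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, was_selected) pairs -> selection rate per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, picked in decisions:
        totals[group] += 1
        selected[group] += int(picked)
    return {g: selected[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Smallest group rate divided by largest; 1.0 means perfectly even outputs."""
    return min(rates.values()) / max(rates.values())

# Illustrative outputs from the algorithm and from the prior human process.
algo_decisions = [("group_a", True), ("group_a", False), ("group_b", True), ("group_b", True)]
human_decisions = [("group_a", True), ("group_a", False), ("group_b", False), ("group_b", True)]

algo_ratio = disparity_ratio(selection_rates(algo_decisions))
human_ratio = disparity_ratio(selection_rates(human_decisions))

# The question is not "is the algorithm perfect?" but "does it outperform the status quo?"
print(f"algorithm disparity ratio: {algo_ratio:.2f}, human baseline: {human_ratio:.2f}")
```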
Leah Haberman (11:30):
And I guess something else that you were talking about is the doubling down on the status quo concern that people have about algorithms. Like you were saying, if it’s only reflecting on the past, it can kind of miss a lot of things; it can’t really pick up on systemic injustice. Maybe we can flesh out the alternative to that a bit more, because I’m worried about the proxies that they could use: oh, let’s look at credit history, but who tends to be discriminated against in that sense? How do we rectify that? Or housing: let’s not just use zip codes as a predictor for educational success and job success. So how do we get around the proxy issue?
Orly Lobel (12:09):
Right. So that’s absolutely correct, that it’s very difficult to blind an algorithm these days. It mines through so much more data than we can process as humans, and there will always be those proxies through which it can figure out identity, through all kinds of connections that we can’t really trace. And that’s why I said before that really the focus shouldn’t be on the inputs, trying to blind it to certain data, to minimize data. I’ve actually been quite critical of some of the EU GDPR regulations that really focus on data minimization rather than focusing on the outputs and what we are actually doing. So I think that we can all agree that knowing more has been the story of human progress. If you think about health, in my book The Equality Machine, I have a whole chapter about health and how our clinical trials have so much been skewed toward looking at just a very narrow category of demographics.
(13:16):
So mostly, again, about white men, which is what a lot of our clinical trials in the past have been. And so we actually should use AI and all this power to collect more data and to know more about our biology, our psychology, our inequalities, the root causes of financial and social inequities and disparities in investment in urban planning, whatever it is. And once we know more, we really then need to focus on what we do with it. Where do we invest? So again, thinking about AI as more of a tool, Leah, you used that term a couple of times, and that’s I think the most important message: AI is just a tool. It won’t give you for free something that you don’t ask for, for example fairness and equity. It’ll just reflect back to you whatever it is that it’s seeing. And so I think about it as kind of a mirror, where it can show us where the problems are and it can help us redirect resources. It’s not going to solve all the problems, and we shouldn’t expect it to solve all the problems, but it’s really there to make us stronger and more committed to our ongoing normative values.
Leah Haberman (14:33):
No, I think that makes a lot of sense.
Speaker 5 (14:41):
We will be right back after this.
Todd Berger (15:56):
Welcome back. Leah, Chay, what parts of that conversation really stood out to you? Were there aspects of it that you heard where you said, I didn’t know about that in AI, or, I think about AI differently after what we discussed?
Leah Haberman (16:11):
I think for me it felt nice to be reminded that algorithms and AI, they’re like tools. The way we use them is so essential, and so are our goals for them. And she often talked about paying more attention to the outcomes and the outputs than the inputs, rather than giving it as little information as possible and hoping that it gives you good outcomes. And I had just never thought about that as much: okay, how do we focus on using it as this effective tool, and a tool is not inherently good or bad. I think that reframing felt really helpful for me.
Chay Rodriguez (16:44):
Yeah, I agree. I think a really big plus in that conversation is the tool piece and using it to standardize certain things. So Leah, you talked about just the bond process and saying, okay, I’m going to grant this person bond or I’m not going to grant this person bond, and I know right now in our country, and you see it, I think lawyers and law students alike all over the country are looking at different cases and they’re seeing how some people are charged with the same things and because of their power in this country or race or just the sheer madness that it would be if they were kind of held in some pretrial detention center versus other people who maybe don’t have that much power maybe of a different race, maybe not a part of the government per se. They have bond trial after bond trial after bond trial, tried for the same things and they’re unable to get out.
(17:43):
So it made me wonder if we had an AI program that could help standardize and say, this person can be released versus this person cannot be released, and articulate why to the greater public, who’s not as well versed in the reasons why certain people get bail and certain people do not. It also kind of takes away, I don’t know, that wall that’s between those who know what happens in the court system and those who don’t. We could say, well, we rely on this system and it’s A, B and C, and that’s just how it goes. So that was something that piqued my interest a little bit, just because I know it’s a thing that so many people are talking about and thinking about, and it’s a question that some people are using to kind of color their law school experience as well.
Leah Haberman (18:30):
Absolutely, and I think when it comes to the bail hearing stuff and who gets bonded and who doesn’t, there are a lot of criticisms. I think the program’s called COMPAS, and this is what I brought up with the professor: it just finds workarounds for discrimination in the algorithm. They’re not going to say, Black people don’t get released on bail. They’ll say, people who don’t meet a certain income threshold, or people who don’t have four people in their homes making a salary over X, Y, and Z amount. And the outcomes often can be the same and just as discriminatory in some ways, but they’re hiding it. So I think your point stands: if there’s a way for us to take the way the algorithms are used and be transparent about it, and force the people creating them to hear, no, you can’t just find these workarounds to discriminate the same way, because we’re going to be looking at how you do it.
(19:15):
And something that the professor brought up is having those two different algorithms getting to the same result. So it’s not just one algorithm determining who gets bail and who doesn’t. It’s two different kinds of mindsets, in the same way a lot of people view juries as better than a judge: you just have more heads in it. So I think that same kind of instinct, to your point about transparency and uniformity, is great, and if there’s a way to do it that doesn’t just find the workarounds, I think that’s a best of both worlds situation.
Chay Rodriguez (19:45):
And it takes a lot off of the judge, I would think right now they no longer have to answer these questions in the back of their minds, am I doing the right thing? I know that this isn’t right or whatever the case, it takes that away from them. I think that can only be helpful and only help them to go into their position from a different standpoint.
Leah Haberman (20:06):
I love the idea that judges are asking whether they’re doing the right thing.
Chay Rodriguez (20:11):
I would think that they have a conversation with themselves while they’re in their chambers and toggle back and forth between their decisions of the day. I mean, we all have to wind down sometimes. I hope so.
Todd Berger (20:24):
So Leah, when we get back into our conversation with Professor Lobel, what are we about to hear?
Leah Haberman (20:29):
So in this second part of the episode, it’ll be more law school specific and law student specific. Okay, we’ve talked about this framework of what AI is and how these algorithms work, and that’s interesting for everyone, but this is the Law Student Podcast: what’s in it for us? And so I wanted to make sure that I talked with Professor Lobel about, if it’s going to be a tool, how do you as a future lawyer, and everyone listening, use it?
Speaker 5 (20:56):
Sounds great.
Leah Haberman (21:07):
There’s a reluctance in the legal world to learn new tricks. We love stare decisis. We love the past. That is how we’ve built our legal system. And so I’m hoping that our conversation can start equipping our listeners, as future lawyers, on how to use these tools. As I started the conversation saying, I’m intimidated when someone says the word data. I say, oh, I don’t know that. That’s not my wheelhouse. I don’t even know how to approach the conversation. So I think for me, the words that stick out when I hear the conversations about AI and algorithms are whether people are branding it as systemically racist, or saying it’s doubling down on bias, it’s perpetuating the status quo. And so I wonder if you could articulate: if someone were to approach you in a case with an algorithm and said, we just showed our housing listings to people who met this search in area median income, how, as a lawyer, a housing discrimination lawyer, could you even begin to understand data and algorithms as a first step?
Orly Lobel (22:20):
Yeah, I would think about it as very similar to how all successful attorneys engage with science and technology all the time, in all sorts of ways. So you are always bringing in expert witnesses who are statisticians or physicians, or if you’re doing IP work, you’re bringing in chemists and biologists, and you’re interacting with science. But you’re really making sure that you have a level of understanding, asking the right questions, demanding the information, and looking to these regulatory agencies that really are now starting to make more sense of it themselves. But to the bigger point, this is here. And I think at some point it will really become, I would say, even malpractice to not deploy some of these tools in your internal workings. So eDiscovery is such an important tool now when you have big cases, and it just makes the life of attorneys so much easier to use these tools to look through thousands of documents at a speed that no human can match, really flagging the words that we were looking for and prioritizing the documents. Now it would be like an attorney saying, oh, I don’t read email, I just write everything by hand and I don’t own a computer. This would be unthinkable now. But that happened in the past. There were these questions of, should attorneys shift to computers, and what will happen with privacy and cybersecurity if everything’s digitized, and we’ve solved those problems. So I think that’s where we’re heading, that these tools are being built for attorneys who are not trained in computer science, and you just kind of need to jump in and learn about them.
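As an editorial aside, here is a minimal sketch of the flagging-and-prioritizing idea mentioned above. The document names, texts, and search terms are invented for illustration, and real eDiscovery platforms do far more (deduplication, concept clustering, predictive coding), but the core ranking step can be as simple as counting keyword hits per document.

```python
# Minimal sketch of keyword flagging and document prioritization in eDiscovery.
# The documents and search terms below are made-up examples.
documents = {
    "email_001.txt": "Please review the indemnification clause before Friday.",
    "memo_draft.txt": "Notes on the merger timeline and indemnification risks.",
    "newsletter.txt": "Company picnic is scheduled for next month.",
}
search_terms = ["indemnification", "merger"]

def flag_and_rank(docs, terms):
    """Count hits of each term per document and rank documents by total hits."""
    scored = []
    for name, text in docs.items():
        lowered = text.lower()
        hits = {t: lowered.count(t.lower()) for t in terms}
        scored.append((sum(hits.values()), name, hits))
    # Highest-priority documents (most hits) come first.
    return sorted(scored, reverse=True)

for total, name, hits in flag_and_rank(documents, search_terms):
    print(f"{name}: {total} hits {hits}")
```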
Leah Haberman (24:25):
And to that point of jumping in and learning about it, I guess one of the questions I have is, we use this word algorithm as a blanket term, but are all algorithms created equal? In the sense that if I am working on a case and a bail hearing, and the judge is using COMPAS to determine the likelihood of someone breaking bail, how do I start analyzing COMPAS versus the software being used in the housing hypothetical I gave a few minutes ago? As an attorney wanting to be like, okay, I’m okay with algorithms, but am I okay with this specific algorithm?
Orly Lobel (25:02):
Right. Yeah. So you’re absolutely right that algorithms can mean a lot of different things. In its most simple, straightforward meaning, it just means a formula, and COMPAS actually is a pretty simple formula. It has several factors that are weighed for sentencing: the level of the crime, the past record, and different mitigating factors, but really pretty limited. And that’s different from machine learning and kind of a learning algorithm that is not really told how to process the information. It’s just given a lot of information and finds patterns and kind of spits out whatever you ask for, for example, just saying, find the best candidate or find the best applicant for housing, and you’re not giving it the factors; it just kind of figures them out. And that becomes certainly more complicated for us to challenge and to understand. And maybe this is comforting for you and me as attorneys to know: even if you did have a CS degree, you probably wouldn’t be able to figure this out.
(26:22):
So it’s not that the programmers actually know; it’s that with so many different layers and so much data mining, the very strength and power of its computational abilities is what sets it apart as different from and better than the human mind. So you can’t really figure out what it’s doing during the processing, but what you can do is really look, again, at the outcomes and find the disparities. And really, right now the gold standard is to have two algorithms in those cases, where one is kind of the suggesting algorithm that’s doing the sorting and creating these outputs, and then the second one is constantly auditing for equity across the demographics that you care about. So again, it’s kind of by design, and asking for the right tools to help you sort through that.
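To illustrate the distinction Professor Lobel draws, the sketch below shows an “algorithm” in its simplest sense: a handful of weighted factors combined into a score. The factor names, weights, and sample profile are invented for illustration; this is not the actual (proprietary) COMPAS model, just the general shape of a fixed-formula score as opposed to a learned model.

```python
# Illustrative sketch of an "algorithm as a simple formula": a few weighted
# factors combined into a score. Factors and weights are invented; this is
# NOT the real COMPAS model, only the general idea of a fixed formula.
RISK_WEIGHTS = {
    "offense_level": 2.0,       # severity of the current charge
    "prior_offenses": 1.5,      # length of the past record
    "mitigating_factors": -1.0, # factors that reduce the score
}

def risk_score(factors):
    """Weighted sum of the factors; a higher score means higher assessed risk."""
    return sum(RISK_WEIGHTS[name] * value for name, value in factors.items())

# Hypothetical profile: 2.0*3 + 1.5*2 - 1.0*1 = 8.0
print(risk_score({"offense_level": 3, "prior_offenses": 2, "mitigating_factors": 1}))
```

A machine-learning system, by contrast, is not handed these factors or weights at all; it infers its own patterns from data, which is exactly why it is harder to inspect and why the output-auditing approach described above matters.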
Leah Haberman (27:16):
As you’re talking about focusing on the outputs rather than the inputs, I think that seems to get at one of the main criticisms I’ve heard of AI, which is, well, the inputs are going to be reflective of our society, which is deeply racist, deeply homophobic, all these very entrenched systems. And I guess focusing on the outputs almost brings up the conversations around affirmative action that have been going on: if we’re looking for the outputs to be different than the status quo, then all of a sudden it almost seems like we are asking the algorithms to focus on giving more opportunity, and giving maybe special factors within the algorithm, to marginalized communities. Is this, I guess, a worry in your research, that people will all of a sudden say, actually, you’re not allowed to give any positive, I’ll say points, I don’t really understand how it would work in an algorithm, to an applicant who comes from a marginalized background?
Orly Lobel (28:11):
That’s a hard and very good question, very incisive, because I have been critical of the way that we’ve been thinking about these issues and calling them affirmative action as an unlawful practice, whether it’s in employment or education or housing. I think that we are at a moment where, because we should be focusing on outputs and because we have much more of these capabilities of looking at minute differences and how they matter, we have this omnipotent tool to actually correct for a lot of our historical wrongs, and we should allow more of those kinds of preferences, or just a stance toward more inclusion and diversification. I would argue this goes into these legal questions in different contexts, but in employment law, which is my core research and teaching area, I’ve argued that we shouldn’t understand this kind of sorting, of preferring women in hiring, as unlawful affirmative action; it’s actually correcting for disparities that have been unlawful. So again, it goes into these questions of where we are with our Supreme Court, which has prohibited some of these practices. But I would very much encourage everybody to think about this, to think about your question, and to challenge some of the recent cases that have made it more difficult to present these kinds of positive, more substantive, fuller, richer ideas of what equality can mean and look like, collapsing it instead into an idea of the illegality of affirmative action.
Leah Haberman (30:08):
Absolutely. And I guess on that note, let’s turn to what our listeners should be thinking about and how they can grapple with these questions, because this podcast is half an hour and they can spend hours doing their own thinking about these things. You work with law students every day. So to close out our interview today, what would you say to law students who are like, I feel really daunted; I hear a few podcasts say that AI is good, I hear some say that it’s bad. How do I, as a future lawyer who will be entrusted with people’s lives, figure out this whole algorithmic justice thing, and where do I start learning?
Orly Lobel (30:47):
Yeah, I think just playing with a lot of the different tools that are out there. A lot of them are free, as ChatGPT is free, and there are all these competitors. I mean, competition in this market is another big point. We didn’t really have a chance to talk about that too much, but it’s been an area of my research also, with my previous books: just make sure that the market is not concentrated and we have consumer choice. And I think that also increases algorithmic trust, and just trust in the market and the tools. But use these tools for your own purposes, understand the language, what is hallucination, especially when you’re in law school. All these competitors like Lexis, Westlaw, and others, they offer you all their digital services free. It’s a great time to play with them, to learn about your professors’ policies about using ChatGPT and others, and to do research, do legal research. But I think more and more faculty are understanding that we actually want our students to know about these tools and to learn about them, so that they go into law firms feeling more empowered and, most important, having an informed conversation, not just accepting in a binary way of like, oh yeah, it’s all biased, or it’s all great.
Leah Haberman (32:18):
Definitely. I think hopefully, if law school teaches anything, it’s how to work around a binary; you can argue anything from any side. So
(32:27):
As someone who is going into public interest work, I never imagined having a conversation about tech. It just felt so removed from my areas of public interest work. But after reading your book and having this conversation, I just feel very grateful to you, Professor, for recontextualizing these things as tools for justice. And rather than keeping things behind the gates of the legal profession, which I think everyone has experienced as being very ostracizing to most people and most identities, if there’s a way to use these scary tools to actually make the law and justice more accessible, then we can’t just plug our fingers in our ears and say, actually, I don’t want to learn about these tools. So I really appreciate our conversation today.
Orly Lobel (33:12):
I love that. Yeah, that’s really the right way to think about it and I love that we got to chat about it. Thank you.
Leah Haberman (33:20):
Thank you so much.
Todd Berger (33:25):
We’ll be right back.
(34:27):
So Leah, a terrific interview with Professor Lobel. I certainly learned a lot, and I’m sure our listeners did also. When you think about the future of AI, what do you think it looks like for the legal profession?
Leah Haberman (34:41):
I mean, that’s a big question, and I don’t think I left the conversation with a specific vision of, okay, here’s where I want to plug and chug AI and where I think it works and doesn’t work. But where I kind of left the conversation was not with the full-stop, red-light, this-is-bad mentality that I went into it with. I mean, actually getting to know and play with ChatGPT is an essential skill for lawyering. Something I never thought I would need it for, but that can be super helpful, is translating legalese into more of a human language. And I was like, well, how can a robot help me translate? But with ChatGPT, you can say it’s for a specific audience. So if I want to be a lawyer working with children, and I want to communicate these complex things that barely make sense to me to a nine-year-old, ChatGPT is a tool I can use for that. And I just never would’ve given that credence, or I never would have had that idea, before the conversation with Professor Lobel: you can use it for whatever you want. So I think, in terms of the future of the legal profession, lawyers will find it as a tool that works for them, for writing their contracts, for talking to their clients. Since there’s a million ways to be a lawyer, there are going to be a million ways to have ChatGPT and AI and all of that be helpful in lawyering.
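As an editorial aside, here is a minimal sketch of that “translate legalese for a specific audience” idea. It assumes the openai Python package (the v1+ client interface) and an OPENAI_API_KEY set in the environment; the model name and the sample clause are placeholders rather than recommendations, and any output would of course still need an attorney’s verification.

```python
# Hypothetical sketch: ask a chat model to restate a legal clause in plain
# language for a specific audience. Assumes the openai package (v1+ client)
# and an OPENAI_API_KEY in the environment; model name and clause are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

clause = (
    "The party of the first part shall indemnify and hold harmless the party "
    "of the second part against all claims arising hereunder."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Explain legal language in plain words a nine-year-old could follow."},
        {"role": "user", "content": f"Explain this clause: {clause}"},
    ],
)

print(response.choices[0].message.content)
```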
Todd Berger (35:54):
Chay, what do you think? What’s the future of AI and our profession?
Chay Rodriguez (35:58):
I don’t know. I hope it doesn’t make us lazy. I’m thinking about a previous point that Leah made, and it’s like, if we’re able to just feed these facts into a nice user interface and get back some sort of final decision, does that take away from the art of watching a trial lawyer really speak to a jury? Does that take away from me currently, as a student, being able to have that full, well-rounded education, and then from future law students, who I’m sure 10, 20 years from now, when that program actually exists, won’t even need to see that? But of course it could be used in different ways. Leah, I think that’s an amazing point, being able to use it to talk to children, to talk to people who may not have the same education that you have, or who may not have been exposed to the same concepts in life that you’ve been exposed to that enable you to understand different things. If you can use AI in some way, shape, or form to be that connector between you and them, I think that’s amazing as well.
Todd Berger (37:10):
Do you think AI could eliminate some of the human components of the lawyering process in a way that poses barriers to those kinds of connections as well? Just thinking about if I’m going to communicate with a client, and I’ve done it a million different times: I interviewed some clients, sometimes I said some things or had a different approach, and it was better or it was worse, and then I had to be self-reflective. And then over time, client interviewing is an art; we teach it in law school, and lawyers kind of master it. It’s really the first key stage in how you might build a relationship with the client, which ultimately can help determine the outcome of the case. If I’m sort of filtering things through a computer each time, am I dulling, in a lot of ways, the refinement of my human senses that is an essential part of being a lawyer?
Leah Haberman (38:00):
Typing into ChatGPT, should I take this plea? I don’t know if that feels good. But then I remember something that the professor said: it’s not about it getting to a perfect state, it’s about whether it’s better than where we’re currently at. Right? And I think in New York, it’s something like 90-plus percent of cases get pled out; they don’t go to trial. And so maybe the way people are phrasing these plea deals is, trial is so scary, sentencing and prison are so scary, all of which is true. But maybe there’s a way we could be phrasing things where people do take advantage of trial opportunities, and we don’t see that 90 percent number, and maybe prison populations go down because juries are actually being presented with things and it’s not all prosecutor-controlled. And so it’s hard. I think my gut reaction is like, yeah, everything you’re talking about with the human connection, absolutely.
(38:53):
Is it about maintaining the status quo, of, are we talking to our clients in the way we’ve always talked to them, and is there a better way we could do that that utilizes some sort of algorithm instead of replacing it? At least that was my takeaway. I think I was in the same boat as you, Todd, of, I want to be able to charm a jury. I want to be able to find the juror who loves baseball, and then we talk about how much the defendant played baseball as a kid, and you can’t make an emotional, empathetic connection with an algorithm. But then I’m like, but there’s also going to be that person on the jury who hates everyone who cares about baseball, and my client doesn’t stand a chance because they like baseball. And so when you have both of those extremes playing out in real time right now in juries, I don’t know where the algorithm comes in between those huge spectrums.
Chay Rodriguez (39:39):
And I think another thing that we could also look at is the fact that the lawyers aren’t going to be the only ones with access to chat GPT. So if your client also has access to chat GPT, what’s to say that they’re not going to use it in the same way that your student did, Todd, to challenge you as their advocate to say, okay, but I also read about this tactic as well. If a prosecutor throws life on the table to pressure someone into a deal and you are like, do you want to take it? And then they come back at you with something that they’ve been able to pull from some AI program that could help with their legal strategy, even if they don’t know it, they might not even know how to say it, but they bring it to you and you’re like, okay, well if you want to try it, if it’s another way to help them to get information, is that also a plus to our industry? Because it challenges us as advocates.
Leah Haberman (40:40):
I think something you’re saying, Chay, makes me think of the efficiency argument too: it’s not all on the lawyer to think of every case that’s ever existed. Obviously there are examples where ChatGPT makes up cases, so you still have to do your due diligence, but you could probably work a lot faster when you’re utilizing these tools. And one of the big problems for a lot of public defender organizations is that they have huge caseloads and not enough attorneys. So if you can maintain the human connection, but be more efficient doing it, so you have more time for the human connection part, I think that goes to all the points we’re making: it’s a tool, and it’s about using that tool effectively. And maybe, I feel like I’ve really changed my tune on this. I was like, it’s so scary, let’s not use it, and now I’m like, well, we have to use it for good. But I think it takes learning it to get to that point. And you can’t learn something, like you were saying earlier, Todd, that you’re afraid of.
Todd Berger (41:34):
Yeah. Well, to be honest with you, it was a terrific interview and I did learn a lot, and I am less afraid than I was before, but I’m still afraid. I refuse to accept that this computer will replace me. I do think that there have to be some limits to it. But I will also say this: I’ve made many arguments in front of judges that I knew were right, and the judge was like, no, you’re wrong. And I’m like, no, you’re wrong. And I think there’s an objective answer to it, right? No, that is hearsay, right? Or whatever it is. And so I do wonder how much judges are going to say, well, wait a minute, what’s the law on this, and what rule applies here? Maybe there’s a way to have more accurate legal decisions as well, because the computer might be able to figure it out in a way that isn’t susceptible to the kinds of human errors we make. I’m still on the fence. I certainly learned a lot, but I’m still on the fence. So Leah, thanks for an amazing interview with Professor Lobel. It was great to have her on the podcast, and it was a really interesting listen, so thank you again. Terrific job.
Leah Haberman (42:38):
Thank you. And thank you, Todd and Chay, for the great conversations about the episode. I feel like that just made it 10 times better, to be able to reengage with this material. And I also want to give a big, big thank you to Professor Lobel, who gave us her time and was able to explain all these complicated things in a way that felt very accessible and fun, sparking future conversations.
Todd Berger (43:02):
Before we go, if you haven’t yet, be sure to subscribe to the show on Apple Podcasts, Spotify, or your favorite podcast player. Be sure to share our podcast with your friends and fellow students. We want to hear from you, so send us your thoughts about the show or issues you’d like to hear about in a review. We at the ABA Law Student Podcast would like to express our thanks to our production team at the Legal Talk Network and the professionals at the ABA Law Student Division.
ABA Law Student Podcast
Presented by the American Bar Association's Law Student Division, the ABA Law Student Podcast covers issues that affect law students and recent grads.