Oliver Roberts is the editor-in-chief of AI and the Law at the National Law Review. He is also the...
Victor Li is the legal affairs writer for the ABA Journal. Previously he was a reporter for...
Published: January 15, 2025
Podcast: ABA Journal: Legal Rebels
Category: Legal Technology
If last year was when generative artificial intelligence went mainstream, could this year be the one when it gets even bigger? Will we see more people, law firms, companies and government agencies adopt, use or integrate it into their day-to-day activities? Will we see more rules and regulations from states or the federal government regarding its use? What about law schools and generative AI? Will we see more start to teach it? And will lawyers finally learn more about what to do, or not to do, when it comes to using it?
Special thanks to our sponsor ABA Journal.
Announcer:
Welcome to the ABA Journal Legal Rebels podcast where we talk to men and women who are remaking the legal profession, changing the way the law is practiced, and setting standards that will guide us into the future.
Victor Li:
Welcome to 2025. We at the ABA Journal Legal Rebels Podcast are not usually in the business of making prognostications. Let's face it, if we were, we'd be playing the lottery or putting bets on sports games or the stock market and not doing this. But it seems like a safe prediction to say that generative AI will continue to dominate 2025 just like it did last year. And if last year was when gen AI went mainstream, could this year be the one when it gets even bigger? Will we see more individuals, firms, companies and government agencies adopt, use or integrate it into their day-to-day activities? Will we see more rules and regulations from states or the federal government regarding its use? And how will gen AI be integrated more into law schools, if it will be? Those are just a few of the things we'll be talking about on today's forward-looking episode of the Legal Rebels Podcast. My name is Victor Li, and I'm assistant managing editor of the ABA Journal. My guest today is Oliver Roberts. Oliver is editor-in-chief of AI and the Law at the National Law Review, the co-chair of the AI Practice Group at Holtzman Vogel, and the CEO and co-founder of Wickard.ai, a legal AI education and training startup. He's joining me today to talk about what we might be able to expect in the year 2025 when it comes to AI and the law. And as always, all predictions are good or your money back, right?
Oliver Roberts:
Exactly. Exactly.
Victor Li:
So welcome to the show and thanks for joining me.
Oliver Roberts:
Thanks, Victor. Thanks for having me. You all have been doing great work in this space for over a year now, since the advent of gen AI, so I'm very excited to be here today.
Victor Li:
Yeah, you as well. Obviously, I gave a very quick version of your bio. Can you talk a little bit about yourself and how you got to where you are today?
Oliver Roberts:
Yeah, definitely. So I graduated from law school about three and a half years ago now. I previously worked at two big firms, Skadden and Jones Day, and in between founded a tech startup. Since then, I made my way over to Holtzman Vogel, where I'm co-head of the AI practice group, and I also started a legal tech platform myself, Wickard.ai, a couple of months ago. I'm also now guest lecturing at law schools; I just finished teaching Washington University in St. Louis's inaugural AI in the Practice of Law program. So that's a little bit about where I've been and what I've done, and I'm really excited to talk about AI today.
Victor Li:
Yeah, me too. Actually, before we start: one thing that I've always been kind of interested in is how young lawyers, especially associates like yourself, can distinguish themselves or get ahead of the competition, so to speak, when it comes to taking a ride on the technology train and learning about the latest trends. Was that something that was consciously on your mind when you came into this area, or was it something that you've always been interested in, and the timing just worked out for you?
Oliver Roberts:
It's something I've always been interested in. Of course, AI has been around for decades, dating back to the 1950s and '60s, and has been part of legal practice for decades now. But with the advent of ChatGPT, the boom of generative AI presented new opportunities, given how widely available it is. It's free, it's now being integrated in ways where it's generating new text, and it's far more advanced. And I think this has presented younger attorneys with a lot of opportunity, because this is new, because it's bringing new legal issues and raising new questions. And because of how fast-moving the tech is, younger attorneys who keep their ear to the ground and follow the updates can benefit. It's a lot of work to do it, but if you're willing to put that work in, it can create a lot of opportunities. That's why on my end, even as a younger attorney, I've had the opportunity to start the AI practice group at my firm, because I put in that time to follow the updates on the legal side and on the tech side. So there's a huge opportunity for younger attorneys to get ahead.
Victor Li:
Yeah, I mean, we just did a story last month looking at the number of applications to law school, which is just through the roof. The number of people taking the LSAT and applying to law school was all higher than normal. And the thought in my head was, okay, at some point, all these future JDs, so to speak, maybe they won't all start practicing law, and some of them will leave the field. But at some point they're going to need to differentiate themselves from the pack of all these competing lawyers by learning some kind of skills or developing some kind of abilities. And AI seems like a great way to do it, just because it's cutting edge, things are constantly changing, and so many people still don't really understand what it's capable of. So if you can make yourself into an expert, that would really help you, right?
Oliver Roberts:
Yeah, definitely. And especially since a lot of these schools are now doing innovative things, launching AI-and-the-law labs. Vanderbilt has one, Miami has one, and a couple of other schools I'm talking to plan to launch them soon. So I do think a lot of these schools will be investing a lot more in their AI development, which in turn means that if you're a student applying to law school, having that niche interest in AI could certainly help you stand out from the pack.
Victor Li:
So let's get right to it then. Obviously, with AI the last couple of years, it seems like so much has happened, and so much has happened very quickly. The level of adoption of gen AI tools in the legal industry has been much faster than even a lot of us thought it would be when we first heard about these tools. Do you see that happening even more this year, or do you think maybe some of the initial enthusiasm has plateaued a little bit and we'll see a bit of a reaction this year?
Oliver Roberts:
Yeah, so taking it back to what I refer to as the boom of generative AI: gen AI itself has been around for quite a while, but when ChatGPT came onto the scene in November 2022, I think in 2023 we saw a lot of skepticism. That's when we first saw those cases of lawyers filing hallucinated cases with the court, leading to sanctions, discipline and embarrassing headlines. And then there are the statistics from the Clio report from October. A lot of people cite this one: in 2023, only 19% of lawyers and legal professionals were using AI, whereas this past year, in 2024, that's now up to 79%. That's a widely cited statistic, but I think it might be a little overstated. If you look at the numbers behind that 79%, they include someone who uses AI once a week, which could just be using ChatGPT for some administrative thing.
So I still don't think we have clear insight into how many law firms are really integrating it day-to-day. But I can say anecdotally, there's a lot of enthusiasm. A lot of law firms recognize that these tools are not a hundred percent accurate, yet they still want to integrate them because they know they're going to get better, and they don't want to be left behind once we reach an inflection point where accuracy increases significantly. And to talk about how AI is being used in legal practice, you have to break it up. AI has been used in e-discovery dating back to 2012, starting with technology-assisted review, so it's a lot more accepted in the e-discovery space. Platforms like DISCO and Relativity have benchmarking on how accurate their AI is, and they've just recently implemented generative AI chatbots. So in e-discovery, it's pretty widely accepted and accurate. Contract analysis is pretty similar. But stuff like drafting is not there, not great, and lawyers recognize that it's really only good for a starting draft.
Victor Li:
So in that vein, do you see the tools improving this year to the point that maybe they will be able to draft a contract, or maybe not hallucinate as much for people who use them for legal research? I think one of the big things that made us take notice was when it took the bar exam. When it first started, it was barely passing, and now it's acing it; I'm sure the numbers are in there somewhere. So do you see it progressing along those lines, where it improves a lot in a very short amount of time?
Oliver Roberts:
I think it's definitely improving, and I think it's a two-part inquiry on that point. With that bar exam result, the passing rate, that was using just the baseline GPT models. So you have the two-part inquiry of, one, how quickly are these baseline frontier models (ChatGPT, Claude, Gemini), which are general purpose, improving? Because ultimately those serve as the foundation of the legal tech tools being used out there in practice. These big companies, and I won't mention them by name, but virtually every one of them, they're not building their own large language models. What they're doing is taking APIs from OpenAI, so they're using GPT-4, they're using Claude APIs, they're using Gemini APIs. So their foundation, and how well that foundational model works, will be predicated on how well Google makes advancements,
OpenAI makes advancements, and so on. Then, on top of it, they make those models more fine-tuned by introducing legal-specific data. For example, Spellbook. I've talked to them; they take a baseline model like GPT-4 and then fine-tune it with hundreds of thousands of ideal contract templates, so that their plugin, which is good for contract drafting, now has fine-tuned data to improve contract analysis. So it's that two-part inquiry: how fast are these frontier models going to improve, and how well are these legal tech tools going to fine-tune their solutions? I do think both of these are going to keep improving, just given the amount of money flowing into it. There's going to be more data and larger models, and I do see improvements on both sides.
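To make the architecture Roberts describes concrete, here is a minimal sketch of the pattern: a legal tech tool wrapping a frontier model's API and layering a legal-specific instruction on top. This is not any vendor's actual code; the model name, prompt and parameters are illustrative assumptions.

```python
# Minimal sketch of the wrapper pattern described above: call a frontier
# model's API (OpenAI here; Claude or Gemini APIs work the same way) and
# add a legal-specific layer on top. Prompt and model choice are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a contract-drafting assistant. Flag any clause you are unsure "
    "about rather than inventing terms."  # the domain layer over the base model
)

def draft_clause(instruction: str) -> str:
    """Send a drafting request to the general-purpose base model."""
    response = client.chat.completions.create(
        model="gpt-4o",     # the baseline frontier model the tool depends on
        temperature=0.2,    # low temperature for more conservative drafting
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": instruction},
        ],
    )
    return response.choices[0].message.content

print(draft_clause("Draft a mutual confidentiality clause for a SaaS agreement."))
```

A vendor like the ones he mentions would add fine-tuning or retrieval over legal data on top of a call like this; the sketch only shows the dependency on the base model that he is pointing to.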
Victor Li:
Gotcha. Another thing that we talk a lot about on this show, and also just amongst ourselves, is the effect that gen AI tools are going to have on lawyers: whether they'll be able to do their jobs more efficiently, or be able to do their jobs at all. Obviously there's a lot of hyperbole, with some people saying this isn't a big deal and others saying this is a big deal that's going to take away a lot of jobs. Where do you fit in on that argument? And do you see this year putting additional pressure on lawyers and law firms, to the extent that they say, we need to start using these tools or else another firm is going to come in and totally undercut us, so we have to figure out ways to use these tools to be more efficient with the way we do our work?
Oliver Roberts:
Yeah, I mean, I think most of the major firms recognize the need to implement and integrate AI, and some of it's customer driven. I talked to a partner at a large law firm yesterday, who's actually a guest speaker in one of my classes, and he said it's a bit of a 50-50 split. Fifty percent of clients are saying, I want you to use AI so that you can be more efficient and drive down costs, whereas other companies will say, do not use AI, because they don't want a situation where there's data leakage or they run into privacy issues. So there's still a split in philosophy out there. But based on what I've seen, almost all the major law firms are adopting and integrating AI in some capacity. A lot of it is just the fact that the legacy research databases like Lexis and Westlaw are integrating AI right into the platform, so it's kind of inevitable; it's already there. Most of these law firms are using AI whether they enjoy it or not, but I think they do recognize the future is going to be AI and they can't fall behind.
Victor Li:
Gotcha. And where do you fall on the whole "gen AI is going to take away lawyer jobs" question? Do you think we'll see some of that this year, or do you think that's all just kind of overblown?
Oliver Roberts:
I think it eventually will happen, probably not this year. The accuracy rates of the legal tech tools are not there. For example, Westlaw rolled out its AI tools; I've personally used them, and they're still not a hundred percent there. They also publish a very long list of limitations on what the models can do right now. So in terms of the tech limitations, I don't think we're close to replacement. I do think that will change in the next couple of years; perhaps at the five-year mark we could start seeing replacement. Because looking at it from the perspective of, let's say, big law, when they're hiring first-year associates, the value out of a first-year associate can be limited. Usually they're good for a first draft of something that goes to a senior associate, who's really going to refine it.
Then it gets to the partner. If these models can get to a point where they can do the work of a first-year associate, that's huge savings for a law firm in efficiency and cost. So I could see that replacement. And even society-wide, there was an article that came out from CNN on Wednesday of this week, January 8th, saying that 41% of companies plan to reduce their workforce by 2030 because of AI. So there's all this talk saying AI is going to make people more efficient and free up all this time for more high-value creative work. But I don't think we're actually going to see that. I think it's more going to be replacement, and figuring out re-skilling and what new jobs will be created. But I do expect there to be job displacement as a result of AI.
Victor Li:
So again, that kind of underscores the whole point that we made earlier about how people going to law school now or thinking about going to law school, they really should learn this stuff. Otherwise the jobs that they’re relying on right now might not be there when they graduate. Right?
Oliver Roberts:
Yeah. I think having an exploratory mindset, keeping your ear to the ground and following these updates, is going to be super important. We're already seeing a new avenue of careers for lawyers open up. For example, Harvey, one of the biggest legal AI platforms, is very new, but they're hiring former big law associates to evaluate the outputs of Harvey's model. That's a whole new profession, and they pay roughly at big law rates. So it's very interesting to see these new career opportunities open up at the intersection of AI and law, where it's only the quasi-practice of law, because the lawyers are actually just helping train these big models.
Victor Li:
Interesting. So let's take a quick break for a word from our sponsor, and we'll continue our talk. And we're back. In this segment, let's talk a little bit about rules and regulations regarding AI that may or may not happen this year. There are a lot of variables anytime you deal with the government. So can you talk a little bit about the regulatory landscape as it stands right now and where you see it heading in the coming year?
Oliver Roberts:
Yeah, so as for the current regulatory landscape, right now there's no comprehensive federal legislation out there that prohibits the use of AI or really regulates its use. So right now it's kind of the Wild West, and in that absence, we've seen states step up in terms of their regulatory activity. But to touch on how the federal government has approached it generally: while there's no legislation from Congress, President Biden did issue an executive order on AI in October of 2023, and that did put some restrictions on the private sector. Generally, it directed over 50 federal agencies and entities to create AI plans, both internally and for their respective sectors. As for private sector regulation, the executive order required these large dual-use frontier models (dual-use meaning they could be applied both for civilian benefit and for potentially nefarious military and national security purposes) to report to the government when they're making significant changes or rolling out big developments in their large language models to the public. Other things it regulated in the private sector included large cluster computing. Companies with large cluster computing capabilities, meaning big clusters of GPUs and TPUs, the hardware that powers large language models, have to report to the government those capabilities and their location. So it's really the government trying to pinpoint who the major players are and what their capabilities are. There were also restrictions on potential foreign uses of infrastructure as a service: providers like Amazon Web Services would have to report if foreign entities were using their cloud-based services to train large language models. So that was some of the regulation that came out of the executive order from President Biden. But President Trump, on the campaign trail and since his election, has said that he's going to get rid of that executive order. So starting in just a couple of weeks, we could see a situation where there's really nothing at the federal level, and that's where the states step in.
And these states have been pretty active. Right now, Colorado, with the Colorado AI Act, is the only state to have passed a comprehensive AI bill, and what it does is regulate high-risk uses of AI. What I expect to see throughout the country in the next year is potentially more bills that take this risk-based approach to AI regulation, meaning they regulate AI based on how risky it is. So if you're using an AI system in healthcare, education, financial services, anything that could be considered sensitive, or even areas that affect access to certain constitutional rights, those would be regulated a lot more than, say, putting a generative AI chatbot on your shoe store, where it's low risk. Texas is one of the big ones. A representative down there, a Republican, is taking the lead on AI regulation, which is interesting to see in Texas.
He introduced a bill that also regulates AI and creates one category for high-risk AI and another category for prohibited uses of AI. I wrote an article in Bloomberg Law the other week criticizing it for many, many reasons, which we could get into. But yeah, that's the current state: nothing at the federal level, and I do not expect anything at the federal level in the next year, with one caveat. We could see a deepfake law get passed at the federal level. This has caught the interest of a lot of legislators at the state level; I believe 19 states have now passed laws restricting deepfakes in elections. So I do think that's something that legislators at the federal level could step in and regulate. But aside from that, I do not expect anything comprehensive at the federal level, though we can expect a lot more at the state level.
Victor Li:
Yeah, the federal level is obviously such a crapshoot at this point, between the changing administration and just the way the parties are. But one thing that I've always been struck by, especially in the last couple of election cycles, is that both parties seem concerned about the potential of this technology, but they don't necessarily agree on how to regulate it or what to do about it. It's not one of those stances where one side says we need to regulate this and the other says no, it's fine. They both have suspicions and concerns about this technology, but it's also tied into a lot of other issues. Some people are looking more at social media, some at deepfakes and electioneering, and some more at Google and its business practices. So there isn't really a uniform focus. Do you think something might, like you said, spur the parties to come together on an issue like this? Is it an issue that might get some kind of bipartisan cooperation, or do you think things are so poisoned now that nothing's going to happen?
Oliver Roberts:
Yeah, I mean, there is a good amount of agreement in broad strokes, and the best indicator there is the House bipartisan task force on AI. They issued a report on December 17th, so just a couple of weeks ago, and it was 253 pages of bipartisan agreement on certain findings and recommendations with respect to AI. Some of the broad areas where they agree are that AI is vital for national security interests and that AI is important in education. So some of the areas where I see bipartisan agreement are increasing investment in AI education and in research and development, and integrating it into the government; the report explicitly says we should be using AI in different government agencies to improve efficiency. And I do think we'll get some bipartisan support in the national security space. Just a couple of weeks ago, DeepSeek, an AI lab in China, released its new model, DeepSeek-V3, and it is reported to perform just as well as GPT-4. So China now has the same capabilities in LLM development that the United States has, and that's something that is now propping up China. Of course, we're now in an economic war, potentially a Cold War, with China. So I do see that raising national security concerns, and that's where we could see bipartisan support between Republicans and Democrats to ensure that the US remains a leader in LLM development.
Victor Li:
Alright. And let's face it, right? I mean, Donald Trump, regardless of how you feel about him, we do know that he is more than capable of changing his mind if he hears or sees something that causes him to, or if he starts listening to certain people instead of the people he's listening to now. How does that play into preparing for how things are going to be this year, knowing that you have someone who can be pretty erratic like that?
Oliver Roberts:
So, I mean, I think the Trump approach to AI development is probably the right one. I think the executive order from President Biden was quite an overreach, and of even questionable authority to start with, because this type of legislation should come from Congress rather than through an executive order regulating a nascent industry that's so vital to the economy and also to national security. So I do think it's a good step to remove that executive order and promote more innovation and development, and that's probably the approach that would best keep the US an economic leader and also a national security leader in the development of AI.
Victor Li:
Gotcha. And at the state level, are there things that you're seeing from the states that you think could be a potential model for a federal law?
Oliver Roberts:
Well, I could see legislators wanting to implement some iteration of this risk-based approach. The European Union was the first to do this at a mass level. They created a very complex piece of legislation, the EU AI Act, and not only did it have four risk-based categories, it also created different categories for developers of AI, deployers of AI, and people outside the EU who want to enter that market and sell AI products. So it was a very complex system, and it created many different agencies and cross-reporting requirements with other statutes. I do not expect that to happen in the United States, but I could see legislators proposing something like the Colorado AI Act at the federal level, or the Texas responsible AI act, with its high-risk as well as prohibited categories. Personally, I'm not a fan of any of these risk-based approaches, because essentially what they're doing is regulating hypothetical uses of AI.
For example, this Texas AI bill outlaws the use of AI for social scoring. And whether you agree with social scoring or not, what the bill does is just penalize the use of AI technology, because any company could go start a social scoring company without AI, and that's totally permissible. So it's not actually regulating the underlying societal harm of social scoring; it's just regulating the use of AI technology to advance that initiative. That's why I think these risk-based approaches are fundamentally flawed, but I could see legislators introducing them at the federal level.
Victor Li:
So before we continue, let's take a quick break for a word from our sponsor. And we're back. For this last segment, let's talk a little bit about some other issues related to generative AI, starting with law schools. Now, this is something that you know quite a bit about. How is generative AI being integrated into law schools, and what can law students expect to learn about it now?
Oliver Roberts:
Yeah, so AI is definitely being integrated into law schools. For background, my startup is focused on providing AI education both to lawyers and to law schools, so I've talked to probably 50-plus deans, professors and heads of AI labs at law schools over the past three months. The general trend I've seen is that virtually every law school understands the importance of AI right now, but a lot of these law schools don't have the resources to take immediate action. Most law schools have now created AI task forces or AI committees just to study the current environment of AI in legal practice, and what's happening across other law schools, to see how they can start taking action. That has manifested in the form of most law schools assigning this task to legal research and writing professors or to the law school's library. And a lot of these law schools are using Lexis+ AI, and law schools just got access to CoCounsel.
So they now have access to these AI tools to integrate into practice. Some of the leading law schools right now, I'd say, are Vanderbilt, which has a law and AI lab where they're actually building products with these tools hands-on, and Miami Law, which just launched another one. I know Stanford is also very advanced in legal tech. So there's a very broad spectrum of how AI is getting integrated, all the way from just, hey, let's use Lexis AI in our legal writing course, and that being the extent of AI exposure, to hands-on clinics and labs where students are actually building legal tech products.
Victor Li:
So in that regard then, if you're a law student who's either in law school now or heading to law school, do you think we'll see a lot more of these schools start to adopt dedicated classes dealing with gen AI? Or do you think it's mainly still going to be integrated into legal research and writing, where maybe you get a little taste of it? In other words, do you think it's going to take on a more central role at law schools, or will it be confined to the periphery?
Oliver Roberts:
I think it'll be a little bit of both. I think the leading law schools will have standalone courses in AI and the law, or gen AI and the law, so I do think we will see a lot more schools launch these dedicated classes and programs. I know Suffolk Law School has introduced a new legal tech LLM, and there's another law school I'm working with directly to launch an actual AI in the law certificate program; we're talking about creating a whole degree program and track as well. So I do think we will see more formalized programs and classes. But at the same time, at a lot of other schools that either lack resources or don't consider this a huge priority, we will still see AI getting integrated into legal research and writing, because at this point it's inevitable: Lexis has AI search built right in, and I do think that will be the standard for the practice of law in the future once the accuracy rates improve. CoCounsel as well, Westlaw's AI-powered assistant, is integrated right into the research database. So it's pretty inevitable at this point. And if you're a legal research and writing professor teaching how to do legal research and writing while leaving out this vital component, whether you agree with it being efficient or not, you still need to teach students about it, in my opinion.
Victor Li:
And so, changing gears now, let's talk about some of the lawsuits that have been filed against various vendors like OpenAI: copyright lawsuits over the way they used material to train the AI. Do you think we'll see more of these in the coming year?
Oliver Roberts:
I think it's quite possible. I believe we're at 37 lawsuits now, and they've just been piling on. I do think some people are sitting in the wings, watching and waiting to see how these other lawsuits play out. If there's a settlement, or if a case reaches a jury and there's an adverse decision for these AI companies, I could see other publishers coming out of the weeds to piggyback off of that and file suit. So I do think we will see more. I know OpenAI tried to consolidate all this into multi-district litigation; I believe that's still pending. So it'll be very interesting to see how this plays out. There are huge implications not only for these AI companies, but also for economic efficiency and national security, because an adverse result for these AI companies could be pretty catastrophic.
Victor Li:
Yeah, I was going to ask you about that. Obviously without looking too far into the future, but if there is an adverse result or a really bad jury verdict for them, could that possibly inhibit the adoption of some of these tools, especially in the legal sector? Or do you think adoption is going to happen anyway, and something like that would just be a small bump in the road?
Oliver Roberts:
It's tough to say. I mean, take a full jury verdict in, let's say, just the New York Times case. There are allegations that over a million New York Times articles were improperly used, and willful copyright violations can carry a $150,000 fine per infringement, so you're potentially looking at a multi-billion-dollar judgment. Who knows if it'll ever reach that point, but it could be a possibility. And of course, OpenAI has billions of dollars coming in from Microsoft, its biggest investor, and from other investors, so potentially they could absorb that. But it would set the precedent. And really, I think one of the biggest implications is that it would hamper competition in the LLM development market, because these companies were able to do it by just scraping internet data without paying for it. That drove the cost of developing these LLMs down significantly. Whereas if you're a new startup and there's now this precedent that you have to license all this information to start and develop a new LLM, the cost of obtaining that training data to compete is going to go through the roof.
So you're really going to lock in these big tech companies. But at the same time, if you kneecap these tech companies and they get a huge jury judgment against them, now our leaders in LLM development are going to be kneecapped, and like I mentioned earlier, you have China, where DeepSeek has now almost hit the capability of ChatGPT. So it's a holistic conversation, viewed through the lens of economic efficiency and national security; it's not just something in isolation. So I do think Congress will watch this closely and step in if they need to. Ultimately, copyright is within the purview of Congress under the Constitution, so they could change what constitutes fair use and what does or does not get copyright protection.
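As a back-of-envelope check on the figures Roberts cites: the per-work number is the statutory ceiling for willful infringement under 17 U.S.C. § 504(c), and the article count is the allegation he references, not a finding. A quick sketch of the upper bound:

```python
# Rough upper bound on statutory damages in the scenario described above.
# Inputs are illustrative: "over a million" articles is the allegation cited
# in the episode, and $150,000 is the statutory ceiling per willfully
# infringed work under 17 U.S.C. § 504(c).
works_alleged = 1_000_000
max_per_work = 150_000
ceiling = works_alleged * max_per_work
print(f"${ceiling:,}")  # $150,000,000,000, so "multi-billion" is conservative
```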
Victor Li:
Gotcha. And finally, I just want to ask you about this too. As I'm fond of saying, if you're a good lawyer, ChatGPT and those kinds of tools can help you be an even better lawyer. But if you're a bad lawyer, these tools will make it easier for you to be an even worse lawyer. You talked earlier about the hallucinations and people citing bad case law. Do you think we'll see less of that this year, because people saw what happened to those lawyers who got in trouble or were publicized for citing made-up cases, and people will start to realize, okay, I can't just take it as gospel, I still have to check? Or is it just human nature that some people don't know what they're doing or don't understand what the tech is about, so it's inevitable that some people are still going to make mistakes or do things they shouldn't be doing?
Oliver Roberts:
I wish I could say that people would stop doing it, but I think it's going to be the latter. Case in point: just at the end of November, there was another one of these situations down in Texas, where a lawyer was fined $2,000 and ordered to take a remedial AI course. So it's still happening now. And even as these tools get better, I think people might get more complacent. If people feel like these tools are accurate, or at least 95% accurate, they may be more willing to take the risk and just accept the output. So there's that kind of reverse presumption: as the tools get better, we might actually see more of these hallucinated cases get filed, because ultimately these LLMs are based on probability, and if they're being mass-used across the legal space, then even if the percentage odds of a hallucinated case coming out are low, it will come out, just given the mass use of it across the industry.
So I do think we'll still see it, and the increase in accuracy of these LLM tools might actually lead to greater trust, less oversight and more false cases getting filed. There's even a case where an expert did this, and it's probably the greatest irony of ironies. This was in the District of Minnesota, in a challenge to the state's deepfake law, and an expert was brought in to opine on the AI deepfake law. In the expert's own report, he cited a hallucinated article, and the attorney general then had to file with the court and say, oh, by the way, my own expert cited a hallucinated source. So even experts are doing this, and I think that's emblematic of where we're at and why I don't think this issue will go away.
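Roberts's point that a low per-filing error rate still guarantees failures at industry scale is just the complement rule for independent events. The numbers below are illustrative assumptions, not figures from the episode:

```python
# If each AI-assisted filing has even a small probability p of containing a
# hallucinated citation, the chance that at least one slips through across
# n filings is 1 - (1 - p)^n. Both inputs are made up for illustration.
p = 0.001        # assumed 0.1% hallucination rate per filing
n = 50_000       # assumed AI-assisted filings across the industry in a year
p_at_least_one = 1 - (1 - p) ** n
print(f"{p_at_least_one:.6f}")  # ~1.000000: effectively certain at this scale
```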
Victor Li:
That’s good to know, I guess, or maybe not. Good to know.
Oliver Roberts:
Yeah.
Victor Li:
So finally, if listeners want to keep in touch with you or keep up to date with what you’re doing, what’s the best way to do that?
Oliver Roberts:
Yeah, feel free to reach out anytime. My email is [email protected], and I'm always happy to talk about AI.
Victor Li:
Great. Thanks again for joining us, Oliver. Appreciate it.
Oliver Roberts:
Thanks so much for having me. Keep up the great work.
Victor Li:
You too. Yeah, I enjoyed this. If you’ve enjoyed this podcast and would like to hear more, please go to your favorite app and check out some other titles from Legal Talk Network. In the meantime, I’m Victor Li, and I’ll see you next time on the ABA Journal Legal Rebels Podcast.
Announcer:
If you'd like more information about today's show, please visit legalrebels.com or legaltalknetwork.com. Subscribe via iTunes and RSS, find both the ABA Journal and Legal Talk Network on Twitter, Facebook and LinkedIn, or download the free apps from ABA Journal and Legal Talk Network on Google Play and iTunes. The views expressed by the participants of this program are their own and do not represent the views of, nor are they endorsed by, Legal Talk Network, its officers, directors, employees, agents, representatives, shareholders and subsidiaries. None of the content should be considered legal advice. As always, consult a lawyer.