John D. Villasenor is a professor of electrical engineering, law, public policy, and management at UCLA, and...
J. Craig Williams is admitted to practice law in Iowa, California, Massachusetts, and Washington. Before attending law...
Published: March 31, 2023
Podcast: Lawyer 2 Lawyer
Category: Legal Technology, News & Current Events
Artificial intelligence has become one of the most discussed topics of our day. With the introduction of sophisticated chatbots like OpenAI's ChatGPT 3.5 and ChatGPT 4, many have predicted that we're on the verge of a revolution in terms of how many industries operate, including the legal industry.
So what do firms and attorneys need to do to stay ahead of the incoming AI wave? In this episode, host Craig Williams joins guest John Villasenor, professor of electrical engineering, law, public policy, and management at UCLA, to discuss AI and the law, the impact, and what the future holds for the profession under this new AI revolution.
[Music]
Male Speaker: Clients are going to expect that attorneys are going to take advantage of the efficiencies that AI offers, and so an attorney or law firm that doesn't use those is going to be at a competitive disadvantage.
[Music]
Intro: Welcome to the award-winning podcast Lawyer 2 Lawyer with J. Craig Williams, bringing you the latest legal news and observations with the leading experts in the legal profession. You are listening to Legal Talk Network.
[Music]
J. Craig Williams: Welcome to Lawyer 2 Lawyer on the Legal Talk Network. I am Craig Williams, coming to you from Southern California. I write a legal blog called May it Please the Court and I have two books out, titled How to Get Sued and The Sled.
Well, artificial intelligence has become one of the most discussed topics of our day. With the introduction of sophisticated chatbots like OpenAI's ChatGPT 3.5 and ChatGPT 4, many have predicted that we're on the verge of a revolution in terms of how many industries operate, including the legal industry. So what do firms and attorneys need to do to stay ahead of the incoming AI wave? Well, today on Lawyer 2 Lawyer, we're going to discuss AI and the law, the impact, and what the future holds for the profession under this new AI revolution. To help us better understand this issue, we're joined by John D. Villasenor. He is Professor of Electrical Engineering, Law, Public Policy, and Management at UCLA, and he is the Faculty Co-Director of the UCLA Institute for Technology, Law and Policy. He is also a nonresident senior fellow at the Brookings Institution and a member of the Council on Foreign Relations. He recently wrote a piece for the Brookings Institution titled "How AI will revolutionize the practice of law." Welcome to the show, John.
John D. Villasenor: Thanks very much for having me.
J. Craig Williams: Well, how did you first become interested in artificial intelligence?
John D. Villasenor: Well, my background is actually in technology. While I am on the faculty at the UCLA School of Law, my degree is a PhD in engineering, and I've long worked on the technological side of things. In the last decade or so I've also been working at the intersection of technology and law. AI has been a long-standing interest of mine, and I'm particularly interested in the ways that technology has these broader impacts on society. It's hard to think of a technology that's going to have more of an impact on society than AI in the coming years. It's really a fascinating and important topic.
J. Craig Williams: Can you explain what AI is for people that have never heard of it and don’t understand what it is?
John D. Villasenor: It’s a really important question, because the term is thrown around so often these days, but basically, and you can find different definitions, if you, if you look, for example, on the internet. But basically, you can think of AI as computer systems that that have the capacity to learn, and to engage in the kind of complex observations of the environment and responding to the information obtained from the observations to modify their behavior. A calculator on a smartphone that just multiplies two numbers together, that’s very fast. It’s much faster than person, but it’s not intelligent. It’s not learning. It’s not getting better at that task as you do it more and more. Whereas an AI system, will generally learn from the data from its experiences and become better at executing whatever task it’s supposed to be doing.
J. Craig Williams: What is it that allows a computer to learn?
John D. Villasenor: Well, it’s a combination of things. One thing is data. A computer, if it’s properly programmed and it encounters a lot of data, then it can start adapting its behavior to the data. A simple example that I think many listeners will be familiar with is something like Amazon, which will recommend purchases or Netflix which will recommend TV shows and movies that you might want to watch. What it does is it has seen, for example, in the case of Netflix, the type of content you watch in the past and so it learns, for example, that maybe you like sports or you like action movies. And based on that previous behavior it observes about you, it then starts providing you with suggestions that it believes our grades concluded correctly or incorrectly are more aligned with your interests.
J. Craig Williams: We’ve heard about GPT-4 and so consequently, there’s probably a GPT-3 and a GPT-2. How long has AI been really developing?
John D. Villasenor: Well, yeah. If you really go back to the origin of it all, you can go back to Alan Turing, who, back in the 1950s, was asking the question: can a machine think? Can a computer think? And AI research has been quite active since the mid-20th century. It's been growing, obviously not as much in the mid-20th century and more as we get toward the end of the 20th century. But we've seen an absolute hockey-stick increase in the last two decades, and that's largely because computing capacity, the ability to have enormous amounts of storage, enormous amounts of data, and enormous processing power, has all just gone through the roof in the last 15 to 20 years.
(00:05:07)
And particularly the last, say, five years and that’s what’s allowed this field to really take off and have an impact that it never had in the 20th century.
J. Craig Williams: There are some examples you've put in your Brookings Institution article suggesting that AI can be used for discovery, writing briefs, and doing legal research. How does that work?
John D. Villasenor: I think what’s important to underscore, I think, is that the newest AI, the things that have been in the news so much that people have seen examples of on various publications, the newest generation of these large language model tools are truly a departure, they’re truly different from the kind of technology, labor saving tools that lawyers may have had in the past, right. And so, of course, things like digitizing data, the ability to store data digitally and look through the data and to search through it. I mean, that obviously revolutionized how everybody, including attorneys interact with documents. But with these large language models, now with AI and that’s capable of producing output that until now required a human to do, its creative output. The tools before, for example like digital search, they did stuff very fast, but it wasn’t creative. What you see now is these tools being able to do creative tasks that lawyers engage in and to help them do those tasks, for example, much faster than before. And I can provide examples of how it could be used in discovery if you’d like but at a high level, it can do a task that before now would have required a human to do.
J. Craig Williams: Essentially, if you get a truckload of documents in an e-discovery response, you can upload those to a language model and then have that language model analyze those documents?
John D. Villasenor: You can. Now, how good that analysis will be is an open question. I think it's important when we talk about this to distinguish between today's technology, which is extraordinary relative to what we had five or ten years ago, but really in its infancy compared to what we're going to have five or ten years from now. You can analogize: some of your listeners may be old enough to remember the first internet browsers in the mid-1990s. I'm old enough to remember that, and we all thought that was really amazing. But of course, that was nothing compared to the internet that we have today. Today, these tools are probably nowhere near as mature as they're going to be. But I think in the not-too-distant future, yes, you will be able to take the proverbial truckload of documents, basically feed it to an AI system, and have it perform analysis.
Now, that doesn’t mean you just push a button. You’re going to have to know how to use these systems, what queries to give it, what instructions to give it, how to know whether you may have missed something and so it’s not as simple as sitting back and pushing a button. But these tools will be able to do in extraordinary fast timeframes, things that might have taken a team of attorneys, weeks before to do.
J. Craig Williams: In my uses of AI, I use a language model myself, ChatGPT 4 at this point, and it pretty regularly gives me misinformation; sometimes it makes things up. How reliable is AI at this point?
John D. Villasenor: Well, it’s a really good question and I think, you know, certainly there’s plenty of examples, including the ones that you’re citing where these language models can give information, which just isn’t true. And you know, so obviously, it would be foolish, given today’s technology for anybody to assume that the outputs of these models are, you know, what they say is necessarily true, and you need to do due diligence. I also think that we shouldn’t mistake the unreliability to the extent that it exists in some of the current models with something inherent to AI. Right?
If you look at the difference between the original ChatGPT released back in November, which I may be wrong about, but I think was based on what some people call GPT-3.5, and GPT-4, which is the version that just got released in the last few weeks, there's already an enormous improvement. I think GPT-4 easily passes a bar exam, nails the GRE verbal and writing sections, and does much better on standardized tests than ChatGPT's original release. So I would expect that as these models grow in sophistication, they will also grow in reliability. Although I would also say it's always going to be a bad idea to blindly trust these models on assertions that may or may not be true.
J. Craig Williams: Right. I've found that the research skills that I've developed as a lawyer over my career come in really handy when I'm trying to develop prompts for the research that I'm doing, and I've found that the results need to be checked but are usually pretty good.
(00:10:02)
John D. Villasenor: Yeah, I think that's a really good way to put it. Again, this is a snapshot of today's technology, and six months from now it's going to be different. But even today, I think that's the right way to characterize it: pretty good. You can't completely trust it, but it really can save you a significant amount of time. Even if 10% of it isn't good, the 90% that is good can save you quite a bit of time.
J. Craig Williams: John, at this time, we’re going to take a quick break to hear a word from our sponsors. We’ll be right back.
J. Craig Williams: As a lawyer, insurance is one of the last parts of your job you want to spend unbillable hours on. That’s why thousands of lawyers have switched to Embroker. Embroker offers A+ rated insurance for law firms. You can quote and buy instantly online. And if you need help, they have experts on standby. Go from sign up to purchase in 15 minutes by visiting embroker.com/law. That’s E-M-B-R-O-K-E-R.com/law.
J. Craig Williams: While it might not have been covered at law school, the smart lawyer knows that domain names are valuable assets to a client’s business when launching a new product or protecting their brand. GoDaddy’s domain broker service can help you and your client acquire that perfect domain even if somebody else already owns it. Be the business advisor your clients need with GoDaddy’s expert brokers. Visit godaddy.com/dbs to learn more.
J. Craig Williams: And welcome back to Lawyer 2 Lawyer. I'm joined by Professor John Villasenor. He is the Faculty Co-Director of the UCLA Institute for Technology, Law and Policy. We've been talking about artificial intelligence and its development. But today, the day that we're recording this, a letter got sent out by a bunch of tech hounds, I'm going to call them, I guess, asking for a pause in the development of AI. What's that about?
John D. Villasenor: Well, the signatories of this letter believe that these large language models, and the training of these systems, are proceeding so quickly that it's really getting ahead of our ability to understand their behavior. They're concerned that this is raising risks that are best addressed by basically taking our foot off the gas, giving it a pause, and thinking about the proper frameworks, potentially regulatory frameworks and other guardrails, to put around these systems. Now, not everyone agrees that that's the right approach. But, as I understand it, that is the gist of the letter.
J. Craig Williams: What do we have to fear?
John D. Villasenor: Well, there's no question that AI, like so many other technologies, can be used for malicious purposes. You could even analogize it to the internet in some ways. The internet has all sorts of extraordinarily positive benefits, as we all know well, and the internet has also opened the door to all sorts of negative uses, like the propagation of misinformation and financial crimes committed with the aid of the internet, and so on. There are legitimate concerns about potentially malicious uses or unintended consequences of AI systems. At the same time, there are also extraordinary benefits, and again, this is like many technologies.
J. Craig Williams: Right. Well, we've heard that there have been some deepfake issues. There was an instance where President Macron of France was seen on a video supposedly bashing his own programs, and the same has been true for President Biden. What safeguards are in place to protect us from these things?
John D. Villasenor: You know, it's going to be a fact of living in the 21st century, or at least in the 2020s and beyond, that the technology to create deepfakes is out there. We all grew up in an era where, if you saw something on video, you could be confident, or pretty confident anyway, that it actually happened, and that era has ended. It cuts both ways. When we see something on video, we are going to have to ask: how confident are we that this is depicting something that actually happened, and not a deepfake? And it goes the other direction as well: when something really does happen and it's captured on video, say a politician captured on video actually saying something highly problematic, the politician might deny having said it and claim it's a deepfake. It can create confusion in both directions. It's a challenge. There are technologies out there that, to varying degrees, can try to detect these things, but it's a bit of an arms race: the detection technology improves, but then so does the technology for evading detection, and I expect that pattern to continue.
J. Craig Williams: Well, initially Google did not put out an AI. It didn't release it because it had some fears, which it expressed outwardly, about the validity of AI. But then Microsoft put one out, ChatGPT came out, and now we've got Bard. Are we moving too fast?
John D. Villasenor: We’ll have to wait a little while before we can conclude definitively whether it’s too fast.
(00:15:00)
I mean, the glass-half-empty way to look at it is that this is too fast, and we're arguably developing a technology whose ramifications we don't really fully understand. The glass-half-full way to look at it is that the level of progress and investment currently going on in this space is truly extraordinary, and every day we are getting closer to unlocking some of the extraordinary potential of AI. I'll give one example: drug discovery. AI can be used to solve very complex protein-folding problems, which in turn ties directly to drug development. Another example: there's been documentation of using AI to identify tumors in medical images that human radiologists don't necessarily always catch. On the one hand, we don't want to proceed carelessly and end up creating negative effects that are a problem. On the other hand, we don't want to hold back so much that we delay access to new drugs, better medical diagnostic techniques, and the many other benefits that AI is going to bring. There is a tension there, and I don't know that anyone has the perfect answer on how to resolve it.
J. Craig Williams: You know, the letter that we talked about earlier says that it's going to be taken to the Federal Trade Commission to ask for regulations to be put into place. In your opinion, do we need regulations or laws in place, and if so, where should we start?
John D. Villasenor: I'm hesitant to just say "regulate" without being more specific, and I think it's important to underscore a couple of things. First of all, while some of these technologies are new, there is already a lot of regulation that nonetheless applies. I'll give an example. Let's suppose a bank uses an AI system to make decisions on loan applications, and those decisions end up being biased against a protected class, biased based on, say, race or gender or something like that.
Well, first, that would violate the Fair Housing Act. The Fair Housing Act, although it was obviously enacted without AI specifically in mind, would still apply if an AI system is used in a manner that violates it. If you look across these different domains, I think you'll find that there's a lot of law and regulation in place already that can address some of the potential negative consequences of AI. I think the better question to ask is: what other possible negative consequences of AI fall outside the existing regulatory framework, to the extent that a framework is necessary to address them? If we can identify a set of those, then it's sensible to at least discuss regulation. The other thing I'd like to say, if I may, is that regulating in the tech space is particularly complicated because there's always the risk of unintended consequences, in the sense that you can draft a regulation or enact a new law that targets a particular concern, and maybe it does a reasonable job of addressing that concern, but it creates collateral damage in other ways. I think that's really important to keep in mind when regulating in the tech space, and particularly in relation to AI.
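To make the loan example concrete, here is a short, purely illustrative Python sketch of a first-pass bias screen on a hypothetical AI lender's decisions, using the "four-fifths" rule of thumb borrowed from employment-discrimination practice; the data, group labels, and threshold are all made up for illustration.

```python
# Purely illustrative: screening the outputs of a hypothetical AI
# loan-approval system for disparate impact with the "four-fifths"
# rule of thumb. All data here is invented.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rates(rows):
    totals, approved = {}, {}
    for group, was_approved in rows:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(was_approved)
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)          # {'group_a': 0.75, 'group_b': 0.25}
ratio = min(rates.values()) / max(rates.values())

# A ratio below 0.8 is a common rough trigger for closer scrutiny of the
# model and its training data; it is a screen, not a legal conclusion.
if ratio < 0.8:
    print(f"Flag: approval-rate ratio {ratio:.2f} suggests possible disparate impact")
```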
J. Craig Williams: Then the regulations seem to be outdated almost as soon as they’re issued.
John D. Villasenor: You know, it is a real challenge. On the flip side, when Congress tries to get ahead of things with regulation, that can go sideways as well. A brief anecdote: in 1986, Congress enacted the Electronic Communications Privacy Act, which, among other things, provided that emails stored more than six months are far less protected from a privacy standpoint than emails stored less than six months. The logic was, well, back in the mid-1980s, who would store emails for very long? You were going to run out of storage space. And that's still on the books. It's an interesting example of how trying to regulate to a particular point in time can end up hopelessly outdated. You might think, "Oh well, Congress can just fix the law," but Congress hasn't fixed that law, and we're coming up on 40 years.
J. Craig Williams: That’s a real problem. Let’s talk about how attorneys use AI. I mean, should attorneys start to use AI? Are there any ethical considerations? Do we need to disclose to the client that we’re using AI?
John D. Villasenor: Well, a couple of things. AI is obviously a broad term, and there are going to be good AI tools and not-so-good AI tools. Of course, it's always going to be important to use any of these tools in a manner that honors any confidentiality restrictions associated with the information being provided to them. You don't want to take confidential client data and basically paste it into a window where it's being gathered by some entity that you have no relationship of confidentiality with. So there are those sorts of issues. I think there's a balance here.
(00:20:00)
I don't think attorneys should assume that AI is going to do everything they've done before much better, because I don't think that's true. But I also think it would be a mistake to assume that AI doesn't have a role in the modern practice of law. Especially as the technology develops, it's going to be a time-saving tool that benefits the attorney, because it's going to help the attorney produce an end product (a motion to file with the court, a document to update a client on the status of a legal matter, a new contract) a lot faster and a lot more efficiently. And clients are going to expect that. Clients are going to expect that attorneys are going to take advantage of the efficiencies that AI offers, and so an attorney or law firm that doesn't use those is going to be at a competitive disadvantage.
J. Craig Williams: Well, John, it’s time to take another quick break to hear a word from our sponsors. We’ll be right back.
J. Craig Williams: And welcome back to Lawyer 2 Lawyer. I'm back with John Villasenor. He is a professor at UCLA specializing in law and technology. We've been talking about AI and how attorneys use AI. Let's talk about copyright issues and plagiarism. Anything to worry about there?
John D. Villasenor: Yeah, there is. It's a complicated issue. As some of your listeners may know, there are actually some lawsuits ongoing right now in relation to images. There are AI systems that can generate images: you can give one a prompt, like "portray a picture of two cats playing chess," and it produces these really interesting, photorealistic images. There are some lawsuits right now from companies, I believe Getty Images is the plaintiff in one of them, basically alleging that the makers of at least one of these image-producing AI systems violated copyright by using their images as training data. So there are interesting, open legal questions, and they're not resolved. The courts have never addressed these before, statutory law is really silent, and the question is really one of what constitutes fair use; of course, the case law on fair use doesn't give us a definitive answer. So copyright is important. Plagiarism, obviously: you don't want to plagiarize another attorney's work. At the same time, it's very common for the same text, or almost the same text, to recur. Let's say an attorney is submitting a motion before a particular court, and the first several pages of the motion cite some of the relevant case law from that circuit, or from the Supreme Court if it's in federal court. Multiple filings in different cases can still reasonably use the same kind of background information in a particular motion. I don't know that I would call that plagiarism if they're using prior work from their own firm on different cases. So I don't think reusing text is always necessarily a bad idea, as long as it's done in a way that doesn't violate any ethics rules or customs.
J. Craig Williams: Let's talk about the classic situation. You and I are both old enough that we remember when the internet started, so that means we probably have kids, or know of kids, who are using these things these days. What about an assignment from a high school teacher that says draft an essay on this, and you prompt the language model to give you a result and turn it in? Is there a problem with that?
John D. Villasenor: Yeah, I actually published a piece in Scientific American a couple of months ago where I explained why, in my class at UCLA School of Law, I tell my students they're free to use ChatGPT in their writing. What I also tell them is that they are solely and fully responsible for anything they turn in under their name. If it's partially plagiarized, they've committed plagiarism. If it's logically or stylistically inconsistent, that's on them. If it's factually wrong, that's on them as well. But I do think, and I know not everyone agrees, that for students today, and this is true for high school students, college students, graduate students: it's extraordinary if you do the math, and I find it stunning every time I do it, but these students can reasonably be expected to still be practicing in the 2060s and 2070s. That's way down the road, and in the mid-21st century AI is going to be a tool. We should teach them to engage with it responsibly, ethically, and productively, not to be afraid of it. That's the philosophy I bring to my teaching, and that's what I think the best approach is. I understand some other people don't view it that way.
J. Craig Williams: We've seen the robot in court trying to get a particular defendant out of a red-light ticket, acting as that defendant's attorney. Will we see AI take over as attorneys?
John D. Villasenor: There are a lot of things AI can't do, right? AI can't make a convincing presentation to a jury. AI can't motivate a team of attorneys to produce their best work. AI can't possibly replicate the human element of client relationships, which is such an important part of law.
(00:25:01)
There are a lot of things that AI can't do. I don't think we'll see a day where a trial consists of two robots arguing back and forth in front of a judge or jury. But I do think we'll see AI playing an increasingly significant role in how information gets analyzed and how documents get produced. I'll give you another example: you can imagine, in a trial, an AI engine analyzing the transcript in real time and using the results of that analysis to suggest questions that an attorney might want to ask in cross-examination. I think you'll see AI being used in lots of different ways, but not to replace the human element, which is such a profoundly important part of so many careers, including law.
J. Craig Williams: I’m just waiting for the political cartoon that comes out with two robots in a courtroom talking to a robot judge.
John D. Villasenor: That would be strange. I don't think we have to fear that; I don't think that's coming anytime soon.
J. Craig Williams: What do law firms need to do to get ahead of the curve on AI? I mean, I know that I'm using it for research, I'm using it for writing, and I'm even using it for some marketing techniques. What do you suggest?
John D. Villasenor: I think it would behoove a law firm to make sure there's somebody in the firm, and perhaps a few people, who are really tracking this technology: who understand what product offerings are out there and what offerings on the roadmap are coming out soon. Like any other field, there's going to be a lot of competition, a lot of startups, and a lot of new products. Some of them are going to be really great, some less great; some are going to be really expensive, some less expensive. A law firm shouldn't be flying blind in that environment. It will be important to have knowledge about the state of play of these tools, and then to make decisions about which tools to adopt, because it's complex: when a law firm adopts a particular tool, there's a cost associated with that, and a learning curve. There's a switching cost, too, and you want to be prudent in that respect, because if you're switching tools every 30 days, that's going to create all sorts of inefficiencies as well. But I think it would be a mistake to ignore it. You want to track it and understand how it can potentially help your practice. That's what I would do if I were in charge of making these decisions, which I'm not.
J. Craig Williams: Other than the class that you're teaching, are you aware of any other law school classes, or classes out there, where attorneys can learn how to use AI?
John D. Villasenor: Well, I saw an announcement today for, I think, a one-day program somebody's running about AI in the law, and I would expect that there are people furiously spinning up courses and products right now. I would imagine venture capitalists are investing in new tools, and I think we're going to see a burgeoning ecosystem of tools and opportunities to learn about these things. Again, people are going to have to be discerning consumers; it's not all going to be good. But there will be good stuff out there that, if you're discerning and find the right stuff, will provide an opportunity for people who want to learn more about this to do so.
J. Craig Williams: You know, not that I want to promote any one particular product beyond the ones that I'm using, but your Brookings article mentioned —
John D. Villasenor: It mentioned, there was a company called Casetext.
J. Craig Williams: Casetext.
John D. Villasenor: Which has a product called CoCounsel. And I should emphasize, I have absolutely no financial relationship of any kind with this company, so I'm not mentioning them out of some sort of conflict of interest. I just found it interesting in the sense that, at least according to news releases, they have some sort of relationship with OpenAI, or at least access to some aspects of those large language models, and we all know how good large language models are. They've offered a product which they claim can help make legal research much more efficient. I would expect that they're one of what will almost certainly be many companies offering products in this space. But the fact that they're out there already is sort of an existence proof that this is an area where entrepreneurs in the legal tech space are seeing real opportunity.
J. Craig Williams: Right. Well, John, it looks like we've just about reached the end of our program. It's time to wrap up and get your final thoughts and your contact information.
John D. Villasenor: Okay, I'm easy to find on the internet. I'm on the faculty at UCLA, so people can send me an email there if they want. It's [email protected], and that's public information; it's on the website. That's how I can be contacted.
J. Craig Williams: Great. And what are your concluding thoughts? How do you think we should help people to think about this?
John D. Villasenor: You know, I think these latest developments in AI are extraordinarily interesting and important, and not just for the practice of law but for society generally. It's going to have extraordinary impact across many different sectors of society, and it's going to be very interesting to see how this develops.
(00:30:05)
If you look at the stunning progress we have seen just since late November, it's hard to even imagine the things that we'll see in the next six months, a year, five years. So it'll be interesting, and I think we'll see some really fascinating and important tools. They'll obviously have some downsides, but we're going to see some enormous benefits from this technology as well.
J. Craig Williams: Great. John, thank you very much. It was a pleasure having you on the show. We would like to thank Professor John Villasenor from UCLA for participating today.
John D. Villasenor: Thank you very much.
J. Craig Williams: Well, here are a few of my thoughts about today's topic. Attorneys in my day, you see, would print out emails, dictate their responses to their secretaries, and then the secretaries would type up the response and send the email. Those days are long gone. Attorneys are now expected to be able to function with technology, and if you're still in that category, it's time to get ahead of the curve. Jump in, sign up for ChatGPT 4 or look at it online. Start to put in some research queries and figure out how to use it. If you don't know how to use it, perhaps take a class. But as Professor Villasenor says, it's coming and you need to be ready for it. Well, that's it for today's rant on this topic. Let me know what you think.
If you like what you heard today, please rate us on Apple Podcast or your favorite podcasting app. You can also visit us at legaltalknetwork.com where you can sign up for our newsletter. I’m Craig Williams, thanks for listening. Please join us next time for another great legal topic. Remember, when you want legal, think Lawyer 2 Lawyer.
[Music]
Outro: Thanks for listening to Lawyer 2 Lawyer, produced by the broadcast professionals at Legal Talk Network. Subscribe to the RSS feed on legaltalknetwork.com or in iTunes. The views expressed by the participants of this program are their own and do not represent the views of, nor are they endorsed by, Legal Talk Network, its officers, directors, employees, agents, representatives, shareholders, and subsidiaries. None of the content should be considered legal advice. As always, consult a lawyer.
[Music]
Lawyer 2 Lawyer is a legal affairs podcast covering contemporary and relevant issues in the news with a legal perspective.