Published: April 27, 2023
Podcast: The Digital Edge
Category: Legal Technology, News & Current Events
Pain points in your legal practice might just be easily solved with a little automation. Jim Calloway and Sharon Nelson chat with Greg Siskind about his firm’s leading-edge relationship with AI and automation technologies. Greg’s simple, straightforward insights help listeners understand the ethics of AI use in law and how to leverage it to improve your firm’s workflows and client experiences.
[Music]
Intro: Welcome to The Digital Edge with Sharon Nelson and Jim Calloway. Your hosts, both legal technologists, authors, and lecturers, invite industry professionals to discuss a new topic related to lawyers and technology. You are listening to Legal Talk Network.
Sharon D. Nelson: Welcome to the 182nd edition of The Digital Edge: Lawyers and Technology. We’re glad to have you with us. I’m Sharon Nelson, President of Sensei Enterprises, an information technology, cybersecurity and digital forensics firm in Fairfax, Virginia.
Jim Calloway: And I’m Jim Calloway, Director of the Oklahoma Bar Association’s Management Assistance Program. Today our topic is, The AI Revolution in Law Practice: A Conversation with Attorney Greg Siskind.
Our guest today is Greg Siskind, a Memphis immigration lawyer practicing since 1990. He opened his solo law firm in 1994, at the same time he launched his website, one of the first for a law firm. Today, his law firm has 60 people and he splits his time between working on the law firm and working on a spinoff tech and content company called Visalaw.ai. Thanks for joining us today, Greg.
Greg Siskind: Thanks for having me on. I listen to this show all the time, so I — it’s exciting to be a guest.
Sharon D. Nelson: Well, we’re always happy to hear that. And Greg, can you start by giving us an overview of how AI is being used in the field of immigration law particularly?
Greg Siskind: Sure. So, I mean, obviously immigration lawyers use a lot of the same functions in AI that other firms use, and broadly speaking, we've all been using AI for years and probably don't think of some of the tools we're using as AI anymore, like spell check and Grammarly and Siri and Alexa and Google searches, etc. But if I'm thinking specifically about immigration law use cases, I can mainly speak from our firm's experience, and I can tell you about some of the tools that we've built and some of the things that we've bought that we're using.
So for example, we've used for a number of years Afterpattern, which is a software company that does what's called rules-based AI. These are expert systems, basically decision trees that you map out and then script, so that they give you answers to questions or build documents, that sort of thing.
So we create Eligibility Advisors, where we will take a complex type of immigration case and develop an app that our lawyers can use to determine if a client or potential client is potentially eligible for a certain immigration benefit. Immigration lawyers mostly use case management products that are specific to immigration law, and some of those products now have smart intake forms, which are dynamic forms that change depending on how a person is answering them, so that you are not giving your client a lot more questions to answer than they need to. They have features like image recognition on uploads, where you can upload a receipt from U.S. Citizenship and Immigration Services and the system can read what's on that receipt and enter the information in the correct fields in the case management system.
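To make the Eligibility Advisor idea concrete, here is a minimal, hypothetical sketch of what a rules-based eligibility screen can look like under the hood: a scripted decision tree of yes/no questions leading to a recommendation. The questions, outcomes, and function names are invented for illustration only; this is not Afterpattern's engine, not the firm's actual app, and not legal advice.

```python
# A minimal, hypothetical rules-based "eligibility advisor":
# a hard-coded decision tree of yes/no questions leading to a
# screening result. The questions and outcomes are placeholders,
# not a real immigration rule set.

def ask(question: str) -> bool:
    """Ask a yes/no question on the command line."""
    return input(f"{question} (y/n): ").strip().lower().startswith("y")

def screen_for_hypothetical_benefit() -> str:
    """Walk the decision tree and return a screening recommendation."""
    if not ask("Does the client currently hold valid status in category X?"):
        return "Likely ineligible on these facts; refer for full attorney review."
    if ask("Has the client maintained continuous presence for the required period?"):
        if ask("Does a qualifying employer or family relationship exist?"):
            return "Potentially eligible; proceed to full intake and attorney review."
        return "More information needed about the qualifying relationship."
    return "Gap in presence detected; attorney review of exceptions required."

if __name__ == "__main__":
    print(screen_for_hypothetical_benefit())
```

A no-code platform lets a lawyer express this same branching logic visually, but the underlying structure is the same: each answer routes the user down one branch of the tree until a scripted outcome is reached.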
So if there's a deadline to respond to a request for evidence, that might update in the system, and it will obviously update the case numbers and other information the government gives you from that document. We're using AI for document generation, so in some cases it's sort of mundane documents like an engagement letter, or it might be something very immigration-specific. For an H-1B application, for example, we have to create something called a Public Access File for our clients that has to be produced within one day after we file the application; it involves lots of uploads of documents and original documents that have to be created, and it's a real pain point in the practice that we've automated.
We've built data scraping tools for some of our mass litigation cases, which let us get really interesting analytics on lots of people very quickly, for tasks that used to take a long time. For legal research, I think folks have probably heard of ROSS Intelligence; we were using them back in the day, and now we're using CoCounsel and other generative AI products. And then we've also built a bot that we're using for clients when they are onboarding with the firm to collect information from them, and it also helps them schedule appointments and initial consultations with lawyers; we have that on our site.
So those are some of the ways that we've been using it, and obviously now with GPT we have all kinds of new things that we're working on that hopefully we will be able to get into today. But I can tell you that drafting is the next big change that we're going to be seeing in our practice, the way that we put together our writing and our petitions and briefs, etc. So that's some of the stuff we're working on.
(00:05:13)
Jim Calloway: Greg, how do you decide what types of tasks you delegate to AI tools and how has that impacted your workflow?
Greg Siskind: Sure. So we're always looking for where the pain points are in our practice, the things that, first of all, people just hate to do because they are mind-numbing or have lots of opportunities for mistakes.
And so, one of the things that we've been doing in recent years, more so than in the past, is taking basically every aspect of our practice, particularly how we do our cases, and mapping out those processes, then looking at them at a really granular level, step by step, of how we actually do a case in our office, and looking for places where automation might be possible. It might be that we can use AI, or it might be that we can use traditional software, and sometimes it's really not a tech solution at all; it's just rearranging things on the process map of how we are doing it, you know, with humans.
But I think the foundation for figuring out whether you're wasting your time or not, not just with AI but with software in general, is having an idea of what problem you're solving with the software. And that's usually where we start. Sometimes it's a Eureka moment, and sometimes it would be nice to solve this problem and we think about it for years and years, and then eventually the software comes around that actually solves it.
So there's not a one-size-fits-all way, but that's generally our approach to what we're trying to do with AI and with other software products.
Sharon D. Nelson: Well, one of the big things that everybody worries about today is the ethical considerations lawyers need to keep in mind when they're implementing AI in their practice, and of course that's had a lot of publicity with respect to the new AIs.
Greg Siskind: Sure. Yeah. I mean, I have to think about that a lot because we don't really have a lot of guidelines. We have the rules, obviously, and so you have to really dig into them and think about which ones apply and which ones don't. So when thinking about this question, I guess the more interesting angle is to think about it from the generative AI perspective, with ChatGPT, versus the issues that we've already been dealing with for a couple of years.
But there are a few that come to mind. First, you've got Rule 1.6 and protecting client privacy and confidentiality; that's obviously always the case when you're using software and putting things out on the cloud or wherever your data is going to be stored. But I think some of us were surprised because, even though the company has really been flying high, OpenAI has already had a data breach. I guess it was in the last few weeks, and people saw their chat histories disappear, and it made me think more about it. I know that Dan Katz, who is a real thought leader in this area, did a talk a couple of days ago, and one of the questions he addressed was that you really still need to be very cautious about putting confidential or private data into ChatGPT. So I think that's one to pay attention to.
You've got Model Rule 7.1 regarding making false or misleading communications about a lawyer's services, and issues might come up about the ethics of passing off a GPT-ghostwritten document as attorney work product. I think some lawyers are probably going to get into trouble with that fairly soon.
There is Rule 1.1, the duty of competent and diligent representation. If you're letting the AI draft in areas of law that you really don't know very well, you have to know the limits of the tech and essentially verify the content that's generated by the AI. So you'd better be competent in that area rather than just assuming that the AI is competent.
Then we also have Rules 5.1 and 5.3, which cover supervising attorneys and paralegals. It's not only whether you're using those kinds of products; you really need to have set rules for how your team is using generative AI in the practice, and ultimately it's your responsibility to supervise how they're doing it. So even if you're not using it, if somebody you're supervising is and you're not on top of that, that could create problems for you.
And like some of the legal products that have come out in past years using AI, there are still questions of UPL, the unauthorized practice of law, so you need to make sure that any AI-generated legal advice you're producing is reviewed by a lawyer, and use strong disclaimers.
(00:10:02)
And as an aside on that, there was an interesting discussion that I got into just yesterday regarding OpenAI's new usage guidelines. One of the things is that they actually have a provision in there now on the unauthorized practice of law, and the flip side of that is telling government agencies that you can't use their product for immigration decision-making or law enforcement decision-making, so they're tying up both sides. But I think whether these products are going to create ethics issues for lawyers is going to be a big issue in the years to come.
And then there are things that are sort of legal-ethics adjacent, like copyright violations. If you are pulling stuff out of ChatGPT and it turns out that it was doing the lifting from copyrighted documents, that could create problems for a lawyer as well.
Jim Calloway: What are some of the biggest challenges you faced in implementing AI in your law practice?
Greg Siskind: There have been a lot over the years, and they seem to change as time goes on. There have been issues in the past with costs; sometimes these products are free when they're coming out and you can use them in a standard beta format, but sometimes they are pretty expensive. And I remember when we started building expert systems, I think it was 2015 that we started working on them, there wasn't a lot of competition in this space and it was very expensive for a smallish firm. The product that we were using for a couple of years was mostly used by Big Law, and they were charging Big Law prices.
So we had to negotiate, basically explaining that it would be good to have a little immigration firm mixed in with the big corporate firms, as an interesting story for the company to talk about in their marketing. But that was the beginning, and that product got competition, and we were eventually able to get it at a much more manageable price point. I'm talking about no-code and low-code platforms here. And speaking of no-code and low-code platforms, one of the other challenges that we had at the beginning, which has gotten a lot better over the years, is that they were not so easy to use despite the name. These products have become much more user friendly, so that lawyers, and not just your in-house tech professionals or a tech person you outsource the work to, can do most of the heavy lifting.
So with Afterpattern, for example, the product we've been using for several years, we probably have seven or eight lawyers in our firm who are able to make apps for themselves and don't have to put it through somebody else in the firm; some of our paralegals and legal assistants can use it too, and then we actually have a person in the firm who is dedicated to producing apps for us using that software.
And then the third obstacle is one I think lawyers will appreciate, and that's the amount of lawyer time. People often think the big difficulty with software is how much time it's going to take to learn how to use it, but in a lot of cases it's not learning the software and becoming a power user that takes that much time; it can be very lawyer-intensive to figure out exactly how you want to map out what that software is doing. To the extent lawyers can actually use the software and design apps themselves, it's wonderful, because you can take what's in your head and put it on the screen, especially when you're conducting a legal interview or building out a logic stream: if this, then this is the legal consequence; if that, then that's the legal consequence. But that is very labor-intensive and time-consuming. It may get better with generative AI, but building out the software is still going to be a slog for firms for a while.
Jim Calloway: That’s very interesting. Before we move on to our next segment, let’s take a quick commercial.
[Music]
Jud Pierce: Workers Comp Matters is a podcast dedicated to exploring the laws, the landmark cases, and the true stories that define our workers’ compensation system. I’m Jud Pierce and together with Alan Pierce, we host a different guest each month as we bring to life this diverse area of the law. Join us on Workers Comp Matters on the Legal Talk Network.
[Music]
Craig Williams: Today’s legal news is rarely as straightforward as the headlines that accompany them. On Lawyer 2 Lawyer, we provide legal perspective you need to better understand the current events that shape our society. Join me, Craig Williams, and a wide variety of industry experts as we break down the top stories. Follow Lawyer 2 Lawyer on the Legal Talk Network or wherever you subscribe to podcasts.
[Music]
(00:14:56)
Sharon D. Nelson: Welcome back to The Digital Edge on the Legal Talk Network. Today our subject is, The AI Revolution in Law Practice: A Conversation with Attorney Greg Siskind. Greg is a Memphis immigration attorney who has been practicing since 1990. He opened his solo law firm in 1994, at the same time he launched his website, one of the first for a law firm. Today, his firm has 60 people and he splits his time between working on the law firm and working on a spinoff tech and content company called Visalaw.ai.
So, one of my big complaints about say ChatGPT among others is that fundamentally it’s a black box, you can’t see inside of it, you don’t know how it’s working. So Greg, how do you ensure that the AI tools you use are accurate and reliable?
Greg Siskind: I sort of approach it the same way that I do my writing, and I spend a lot of my time not just working on software but also writing books. One book in particular, out of the seven titles that I've been an author on, is a systems manual for immigration lawyers. It's a very long book, about 4,000 pages, and I co-write it with one of my partners. It's a huge amount of content every year, and there are a million things that could be wrong in that book.
So the way that we deal with that is that we have reviewers, subject matter experts, that we send chapters out to, and we send it to our colleagues as well. And then we also hear from readers and make changes quickly when we find out that something has been missed. I don't think it's that different from the way we've approached, over the years, the rules-based apps that we've built, like the Eligibility Advisors. Some of those we're building for ourselves, and some of those, that's why we have a software company, we're actually building out for other law firms. But the way we approach it is similar: we will build the app like we're writing a book, and sometimes these apps can take a year or two to build, so it is like writing a book. And then we will go through that kind of testing process again, where we will get the feedback.
But I will tell you that one of the things we started during the Trump Administration, when immigration seemed like it was in the major headlines, moving from one crisis to another, was getting news out very quickly. If Trump imposed a travel ban that had a bunch of rules attached to it, we would generate an app that would be out the same night the announcement was made. That's scarier, because there are more things that can go wrong, but the way we dealt with it, with our disclaimers and what we put out there, was to say: this is for the sake of getting this information into people's hands quickly, and people should understand that this app may change as we get a better understanding of what's happening.
And for the most part, I think it's just being candid with people about what you're giving them, as opposed to guaranteeing that everything in there is perfect. That's how we've approached that issue. But with ChatGPT, I think it can be a little bit scary if you're putting out information where you can't totally predict what it's actually saying. That would make me pretty nervous as far as building an app or something like that, where people ask questions and get information and you haven't actually vetted what's being provided.
So I think lawyers are going to have to be really cautious about that: using disclaimers, potentially not offering those kinds of products unless the model is trained on your own materials and not on something you haven't been able to vet, and maybe automating the prompts, which are the questions that you're asking, so that you have a lot more control over what content is being produced, if that makes sense.
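For readers who want to picture what "automating the prompts" might look like, here is a minimal sketch under stated assumptions: it assumes the OpenAI Python client (openai 1.x), and the template text, firm guidance placeholder, and function names are hypothetical, not Visalaw.ai's or any vendor's actual implementation. The point it illustrates is that the firm's vetted template, rather than free-form user text, controls what reaches the model.

```python
# A minimal sketch of "automating the prompt": the client supplies only
# structured answers, and the firm's own vetted template becomes the
# instruction sent to the model. Assumes the OpenAI Python client
# (openai >= 1.0); template and firm materials are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FIRM_GUIDANCE = "..."  # excerpt of firm-vetted reference material goes here

PROMPT_TEMPLATE = (
    "Using ONLY the firm guidance provided below, draft a plain-language "
    "explanation of the {topic} process for a client whose category is "
    "{category}. If the guidance does not cover the question, say so and "
    "recommend speaking with an attorney.\n\nFirm guidance:\n{guidance}"
)

def draft_client_explainer(topic: str, category: str) -> str:
    """Fill the vetted template and send it as a single controlled prompt."""
    prompt = PROMPT_TEMPLATE.format(topic=topic, category=category,
                                    guidance=FIRM_GUIDANCE)
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You draft client communications for an immigration law firm."},
            {"role": "user", "content": prompt},
        ],
        temperature=0.2,  # keep output conservative and predictable
    )
    return response.choices[0].message.content

# Example use (hypothetical inputs):
# print(draft_client_explainer("work authorization", "humanitarian parole"))
```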
Jim Calloway: Can you give us an example of how AI has helped you achieve a better outcome for your clients?
Greg Siskind: Sure. So one that I like to talk about, because it's recent and it's been in the news, and it was a pretty gratifying result. We work on mass immigration litigation, and usually it's trying to have some kind of impact on a public policy issue. There has been, of course, a major war going on in Europe between Russia and Ukraine, and President Biden set up a program to allow at least 100,000 Ukrainians to come into the country through something called Uniting for Ukraine, which provides a benefit called humanitarian parole.
Anyway, just by chance we had discovered that Congress stuck a provision in a bill last May that allowed automatic work authorization for those people when they come in. We found that USCIS, the agency that manages all that, was telling people they had to apply for a work card that was taking at least eight months to issue, and charging $410 for it, which was something that wasn't easy.
(00:20:01)
So we filed a mass case for 150 people in Chicago, and we used AI throughout that case. One example was that we had to have all 150 people sign engagement letters, so we built an app that automated that process through Afterpattern, and then they were able to sign online.
Another thing we needed was a sworn statement from every one of the plaintiffs, and that's not an easy thing to do when you have plaintiffs all over the country and sometimes their English isn't great. So we also built an app that automated that interview and generated a sworn statement they could review and then sign if they were happy with it. We were able to shave a lot of time off the process and file much more quickly, and obviously we're not a huge firm, so we can't throw a million people at a case like that, and it saved us a lot of labor.
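As a rough illustration of that kind of interview-to-document automation, here is a minimal sketch: structured interview answers are merged into a declaration template for the plaintiff to review. The template text, field names, and sample facts are invented for illustration and are not the firm's actual form or Afterpattern's product.

```python
# A minimal, hypothetical document-assembly step: merge structured
# interview answers into a sworn statement template for review and
# signature. Illustration only; not a real legal form.
from dataclasses import dataclass
from string import Template

DECLARATION_TEMPLATE = Template(
    "DECLARATION OF $full_name\n\n"
    "I, $full_name, declare as follows:\n"
    "1. I am a citizen of $country and currently reside in $city, $state.\n"
    "2. I entered the United States on $entry_date under the program at issue in this case.\n"
    "3. The facts stated above are true and correct to the best of my knowledge.\n\n"
    "Executed on $signing_date.\n"
    "_____________________________\n"
    "$full_name"
)

@dataclass
class InterviewAnswers:
    full_name: str
    country: str
    city: str
    state: str
    entry_date: str
    signing_date: str

def build_declaration(answers: InterviewAnswers) -> str:
    """Merge the interview answers into the declaration template."""
    return DECLARATION_TEMPLATE.substitute(vars(answers))

if __name__ == "__main__":
    sample = InterviewAnswers("Jane Doe", "Ukraine", "Chicago", "Illinois",
                              "2022-06-01", "2023-04-01")
    print(build_declaration(sample))
```

In a production workflow the interview would run in multiple languages and feed an e-signature step, but the core idea is the same: a guided interview filling a lawyer-approved template.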
And then one last thing, which I talked about with CoCounsel; a lot of people are talking about that product. We've been a tester of that product since last November or December, I think, and we used it multiple times in the Ukraine case for legal research. For example, I needed to research something called the Tucker Act to get these Ukrainians fee refunds. That was a subject I knew very little about, because it's not really an immigration law topic, and it would probably have taken me a couple of days to research. I was able to get an answer back in about five minutes: a 20-page memo that gave me a summary of what the law was and how it would apply in my case, and then a nice set of all the cases that covered it, links to those cases, and bullet-point summaries of each of those cases. So, super helpful and super time-saving, not time-consuming.
Sharon D. Nelson: That's one of the happier answers I've heard, a very good usage indeed, so that was inspiring. Thank you. How do you balance the use of AI with the need for a human touch in the legal profession, because that's such an important component?
Greg Siskind: Yeah, and I think that's what a lot of people are thinking about right now. We're hearing all these horror stories in the news, like that 44% of the tasks law firms work on are going to be automated and we're all going to be out of work, and there's not going to be anything left for us to do. But I actually think what's going to shift is that there's going to be a lot more of the human touch in how we deal with clients, where we're spending a lot more time hand-holding and acting as counselors to our clients. As for the technology that we're using to manage our cases, I guess clients are going to think of us the same way we think of accountants now: when you go to your accountant, you're not thinking about that accountant personally filling out the tax forms, or even what software they're using. You're going to them for their counsel, and you don't really care what they're using; you're just assuming that they are using the best and that they are providing you top advice, and that's why you're paying an outside accountant.
And I think law firms are going to be seen more as trusted advisors, giving strategic advice or managing your process for you so you don't have to worry about it yourself, as opposed to what a lot of people are envisioning, which is that the whole world is going to go DIY, like the travel industry with travel agents becoming a thing of the past. I actually think that's not how it's going to turn out. I think lawyers, and paralegals too, will be playing more of a role as software managers, and there will be plenty of jobs; in fact, we will probably be seen as even more valuable in those relationships than as basically just people who type data into a computer.
Jim Calloway: What advice do you have for other lawyers who are interested in using AI in their law practices?
Greg Siskind: You don't need to be in a panic about trying to make super fast decisions. I keep talking to lawyers who are saying it's like we're about to be mowed over if we don't make changes immediately. But we've also seen for years that legal tech takes years and years longer than in other professions to work its way in. I think this time is actually different, it feels different, and I think there will be a lot of movement, but I would urge not making rash decisions. Just start by educating yourself on how these products work and keeping an eye on the marketplace; in the near future we're probably going to see an explosion of startups offering products incorporating generative AI for law firms.
So I think one thing that we can all get a lot better at right now is something called prompt engineering. As I've mentioned, the prompt is how you ask ChatGPT to do something that you want it to do. At first, I thought, okay, this is just like typing a quick query into Google, but as I've been learning, like a lot of us have in the last couple of months, there's an art to actually getting a much better answer, and getting much closer to what you're wanting.
(00:25:02)
One example of that is, I write a column every other month (00:25:06) on marketing for the ABA's Law Practice Magazine, and I want to write a column this month on how to explain to clients that you're using generative AI like ChatGPT, how to message that to them, and what some of the issues are; so not talking about the tech, but talking about the messaging and the communications with clients about the tech.
And I will admit that I found this prompt that somebody had in an article, but it was really excellent. Basically, the script was essentially to tell ChatGPT: you're a lawyer with 30 years of practice as a managing attorney, and you're now a consultant in this space, so you're telling ChatGPT that it's role-playing, and then: I want you to write a 1,500-word article about this particular question. I did that, I asked it to write on this question, and it produced a really wonderful document that gave me a lot of ideas. It's not the way I would word things, because I have my own voice, but it gave me a lot of ideas for things to talk about in that article. So a really simple thing we can start with is learning how to actually be good prompt engineers and how to delegate tasks to AI in a better way, and I think that's a start.
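To make that pattern concrete, here is a hedged sketch of what the role-playing prompt Greg describes could look like as an API call. The wording is paraphrased from his description rather than the exact prompt from the article he mentions, and it assumes the OpenAI Python client (openai 1.x); the same text could simply be pasted into the ChatGPT interface.

```python
# A sketch of the "role-playing" prompt pattern: first tell the model who
# it is, then give it a specific, bounded task. Assumes openai >= 1.0;
# the prompt wording is paraphrased and illustrative.
from openai import OpenAI

client = OpenAI()

messages = [
    # The role-play framing: tell the model who it is and whom it writes for.
    {"role": "system", "content": (
        "You are a lawyer with 30 years of practice as a managing attorney, "
        "now working as a consultant on law firm client communications."
    )},
    # The task itself, with an explicit length, audience, and focus.
    {"role": "user", "content": (
        "Write a 1,500-word article for practicing lawyers on how to explain "
        "to clients that the firm uses generative AI tools such as ChatGPT, "
        "focusing on messaging and client communication rather than the technology."
    )},
]

response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)
```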
I also think that you need to be a savvy consumer, and that probably means looking for good sources of news in the space. There are a number of email newsletters out there now that focus on just this topic, like one that I get every day that's excellent, called simplehuman. There are some that are lawyer-specific, like Attorney at Law and Top Law, and the ABA has a tech newsletter that comes out. Then obviously there are podcasts like this one, and there are also new podcasts that deal just with generative AI. One that I listen to is just 10 minutes twice a week; it's called ChatGPT Report, and it's pretty good. It tells you what the latest developments are from the last couple of days, and sometimes they will take a product out for a test ride and tell you what they think of it. But that's how I would approach it: just become savvy consumers, and don't rush to make a decision out of panic.
Sharon D. Nelson: Just to reaffirm some of that, Greg, I have been working on getting better prompts too and learning myself. I'm amazed at how much education I'm receiving from ChatGPT just by learning how to use it. But just this morning, I came across an article that our listeners might be interested in. It is literally called, How to write better ChatGPT prompts, and it is from ZDNet. So if you're interested in trying to figure that out for yourself, this one is lengthy enough that there's some real substance to it. That might be of use to people who want to get started.
Greg Siskind: Yeah. And I will predict that within the next couple of weeks or couple of months, or maybe it's already happened and I'm not aware of it, there will be CLE programs geared toward lawyers just on how to write prompts.
Jim Calloway: Before we move on to our next segment, let’s take a quick commercial break.
Female: InfoTrack is hosting Legal Up, a free, fully virtual event to give legal professionals an opportunity to learn new skills. Adopt the latest tech, discover ways to be more efficient, and get inspired. Join InfoTrack April 27 and 28 at the Fifth Annual Legal Up Virtual Conference. Register today at infotrack.com/legalup.
[Music]
Adriana Linares: Are you looking for a podcast that was created for new solos? Then join me, Adriana Linares, each month on the New Solo Podcast. We talk to lawyers who built their own successful practices and share their insights to help you grow yours. You can find New Solo on the Legal Talk Network or anywhere you get your podcast.
Sharon D. Nelson: Welcome back to The Digital Edge on the Legal Talk Network. Today, our subject is, The AI Revolution in Law Practice: A Conversation with Attorney Greg Siskind. Greg is a Memphis immigration lawyer practicing since 1990. He opened his solo law firm in 1994, at the same time he launched his website, one of the first for a law firm. Today, his firm has 60 people and he splits his time between working in the law firm and working on a spinoff tech and content company called Visalaw.ai.
Greg, how do you see the use of AI impacting the cost of legal services for clients and as a side note, because I hear this all the time, because I too lecture on ChatGPT all the time, do you worry about AI taking the jobs of lawyers and paralegals? You mentioned it briefly but maybe expand a little.
(00:30:00)
Greg Siskind: So I do think that the use of generative AI is going to drive down, probably fairly significantly, the cost of legal services for clients. It's really kind of hard to predict exactly the ways it's going to have an impact, but I think it could have different impacts depending on the kind of practice you have: are you a paralegal-intensive practice, one where every matter looks very different, or one where there is a lot of volume and similarity between cases?
So I think it's going to vary from firm to firm. But I do think that if it really does drive down costs, and some or all of those savings are passed on as much more affordable legal services, it could expand the pie fairly considerably for consumers of legal services, so that a lot more people have access to lawyers than currently do. At the end of the day it may be a wash, where we're servicing a lot more people, it costs a lot less of our time to service those people, we're charging them a lot less, but we're still doing fine economically and people still have jobs.
I'm hopeful that that's going to be one part of it. I also think a lot of lawyers will make money on the tech, on the apps that they build, basically taking their knowledge and turning it into generative AI products; for a lot of firms, that may be where the model goes. And then I think there's going to be this whole world of unbundled legal services, which has been talked about for decades, where people do a lot more DIY with apps and then take their work to a lawyer to bless it and look it over before it goes in. So there might be that kind of thing going on as well. But it's really hard to predict.
Jim Calloway: Finally, what excites you the most about the future of AI in the legal profession and what worries you? Do you think as so many are saying now that we need guardrails around AI assuming one can do that in a global economy?
Greg Siskind: Sure. So I think this is one of the most exciting times I've experienced in my long career, even more so probably than at the beginning when I was getting on the web and that was a brand new thing. Generally speaking, in law or in other service professions, you can deliver on two of three things, and if you try to deliver on all three, you'll go out of business: you can be faster, you can do better quality, or you can do it at a lower price. This is really the first time I can see delivering on all three of those things, which I think is going to really supercharge a lot of our practices and allow us to do better work than we ever have, at a more affordable cost than we ever have. So I'm excited about all that, and I really think a lot of people are going to be a lot happier with their work when they can do it that way.
As for what worries me, I think I'm less concerned about the impact of AI on lawyers and private practice than on society in general. I think we need regulation, like we have with the FAA, and we probably need some alignment with regulators in other countries, like you were mentioning, Jim. I have concerns about the impacts on politics, on privacy, on our ability to read and write well, I mean, are our children going to be able to read and write well, and on the biases that are built into AI. There are a whole lot of problems with AI that society is going to have to deal with, and I think we're going to have to face some difficult questions about how to help people transition, because there's going to be a lot of economic displacement in the years to come as well, and it may not play out the same way tech has in the past.
Sharon D. Nelson: Well Greg, thank you so much for joining us today. There was tons of information in this interview, which was great. I agree with you, it's a very exciting time; I'm excited too, I feel your excitement. On the other hand, those guardrails seem to me like a really needed thing and a good idea, so I guess we both agree on that. But honestly, it was just a wonderful conversation. Thanks for being with us.
Greg Siskind: And thank you for having me as a guest. I really enjoyed it.
Sharon D. Nelson: And that does it for this edition of The Digital Edge: Lawyers and Technology. Remember, you can subscribe to all of the editions of this podcast at legaltalknetwork.com or on Apple Podcast. And if you enjoyed our podcast, please rate us in Apple Podcast.
Jim Calloway: Thanks for joining us. Goodbye, Ms. Sharon.
Sharon D. Nelson: Happy trails, cowboy.
Outro: Thanks for listening to The Digital Edge, produced by the broadcast professionals at Legal Talk Network. Join Sharon Nelson and Jim Calloway for their next podcast covering the latest topic related to lawyers and technology. Subscribe to the RSS feed on legaltalknetwork.com or in iTunes.