Thomas J. Hamilton is vice president of Strategy & Operations at ROSS Intelligence in San Francisco, California. He was...
Gyi Tsakalakis founded AttorneySync because lawyers deserve better from their marketing people. As a non-practicing lawyer, Gyi is familiar...
Kelly Street is the Marketing Director at AttorneySync, a trusted legal digital marketing agency. With almost 10 years in...
How is artificial intelligence infiltrating the practice of law? Gyi Tsakalakis and Kelly Street welcome Thomas Hamilton for an examination of current AI tech and trends in the legal industry. From legal-specific to general business applications, they discuss the impact of AI and explain its value for lawyers. They also focus on dispelling fears associated with AI by addressing common myths and illustrating how this tech is becoming a necessary element of the modern law practice.
Lunch Hour Legal Marketing
How AI Helps Lawyers Do Their Best Work
Gyi Tsakalakis: All right seriously though how do we start these things?
Kelly Street: We start then by saying hello to one another.
Gyi Tsakalakis: Hello one another.
Kelly Street: Hello one another.
Gyi Tsakalakis: I am starving.
Kelly Street: You are?
Gyi Tsakalakis: Yep.
Kelly Street: You haven’t eaten lunch yet, Gyi?
Gyi Tsakalakis: No well you scheduled this during lunch time so here we are.
Kelly Street: That’s right I did schedule it during lunch time, oopsies yeah, I missed my lunch too but I had a late breakfast.
Gyi Tsakalakis: Oh, what did you have for breakfast?
Kelly Street: I had a yummy breakfast sandwich from a local burger place that does breakfast sammies in the morning. It was delightful.
Gyi Tsakalakis: Hmm, that sounds delightful.
Kelly Street: Yes. And for lunch I’m going to have some leftover Indian food that I didn’t finish yesterday.
Gyi Tsakalakis: What’s the dish?
Kelly Street: Butter chicken and mild chicken with chicken Biryani rice.
Gyi Tsakalakis: Yum.
Kelly Street: Yeah what are you going to have for lunch now that it’s late?
Gyi Tsakalakis: I don’t know, maybe I’ll get sushi today. But now — well, people don’t realize this, but we’ve already recorded with our guest, so now I’ve got tacos on the mind.
Kelly Street: You do sound a little bit hangry.
Gyi Tsakalakis: Good.
Kelly Street: Yes.
Gyi Tsakalakis: I was actually hoping that our guest was going to come in hangry, but alas he did not.
Kelly Street: That is a weird request to make, but I am not surprised. Anyway.
Gyi Tsakalakis: Welcome to Lunch Hour Legal Marketing.
Kelly Street: Welcome. So there was a — there was a thing you wanted to talk about for a minute or two here and I believe that thing is the singularity.
Gyi Tsakalakis: Kelly, have you ever wondered what it would be like if there was superhuman intelligence that collectively was more intelligent than all of humankind put together?
Kelly Street: Yes, I have wondered about that. I also sometimes wonder if we’re actually living in the Matrix — Tesla CEO Elon Musk’s idea that we are potentially living in a simulation.
Gyi Tsakalakis: Well I’m really glad you brought that up. I was going to ask you a question that I think I’ve asked you before.
Kelly Street: Okay, what’s that question?
Gyi Tsakalakis: Do you care whether or not the steak is real?
Kelly Street: Ah, you have asked me this before. Since I just re-watched the Matrix last week I now remember this scene, and I don’t care.
Gyi Tsakalakis: Okay.
Kelly Street: I really don’t.
Gyi Tsakalakis: Alright.
Kelly Street: Do you care? Do you care whether the steak is real?
Gyi Tsakalakis: I do. There’s something weird about it; I don’t know what it is, but I care. Even if we’d been out of the Matrix and went back in, I’d think the real world is horrible and I’d still rather be in the real world. I don’t know. I guess I’m a masochist.
Kelly Street: I think — I mean I think that is the key for me is that like if this is a simulation, it kind of sucks.
Gyi Tsakalakis: Right, good job machines.
Kelly Street: Yeah. Oh, garbage in garbage out as our guest will say.
Gyi Tsakalakis: Well done, great reference.
Kelly Street: Thank you. Speaking of garbage in, garbage out, that is not what you’re going to get from our guest today. He was awesome.
Gyi Tsakalakis: He really was, you’re in for a treat listeners, so strap in for Lunch Hour Legal Marketing.
Intro: Welcome to Lunch Hour Legal Marketing, with your hosts Gyi Tsakalakis and Kelly Street, teaching you how to promote, market, and make fat stacks for your legal practice, here on Legal Talk Network.
Kelly Street: Alright Gyi, let’s get this Lunch Hour Legal Party started.
Gyi Tsakalakis: Yes.
Kelly Street: Yes that was not as enthusiastic as it really should have been.
Gyi Tsakalakis: Well, when we first got him on the line I was really enthusiastic, and I think I let it all out. So I’m going to drink some more coffee.
Kelly Street: Yes get pumped up again.
Gyi Tsakalakis: I am pumped, let’s do this.
Kelly Street: I did — I did tell you that you were a little too loud and it hurt my ear, so —
Gyi Tsakalakis: I know.
Kelly Street: I’m sorry, I’m sorry audience, get pumped up again, because we are ready to talk to Thomas Hamilton.
Gyi Tsakalakis: There we go. We needed to kick this off.
Thomas J. Hamilton: I have an entire cup of coffee that I’m getting through right now. So I’ll send some energy your way.
Gyi Tsakalakis: Let’s just pound coffee for 45 minutes.
Kelly Street: Perfect, perfect. I’ve got my Kombucha here, which does have caffeine in it, so I will be raising my caffeine levels with that. Nobody else does Kombucha in this group?
Gyi Tsakalakis: I don’t.
Kelly Street: Anyway, hello Thomas Hamilton. How are you doing today?
Thomas J. Hamilton: I’m doing great. Like I said, I’m getting caffeinated over here. I do not drink Kombucha. I’m based in San Francisco, so that makes it sort of a touchy subject, but I, for one, am not currently on the Kombucha; I have been in the past, and may return at some time, but right now I’m kicking it old school with some coffee. I do have almond milk in it, though, not normal milk.
Kelly Street: There you go. So at least you are doing something a little bit San Francisco-y.
Thomas J. Hamilton: Exactly.
Kelly Street: So Thomas, will you tell our listeners just a little bit about yourself, kind of your brief bio of how you got to where you’re at now.
Thomas J. Hamilton: Yeah, sure thing. Okay, where do I start? All right, well, I’m a former attorney. When I left practice — this seems dramatic, when my career “ended” — I was with Dentons, which is still the largest law firm in the world: multinational, in basically every country at this point.
When I started my practice, though, I was actually hired into a substantially smaller firm, one of the first few founding firms that created Dentons, which then probably grew fourfold from there.
So I was doing corporate commercial law, had a number of different clients, and was touching on a few different things. One of the areas I was involved in, though, was tech law, and it just so happened I was in Toronto. This was back in 2015, and Toronto at that time was going through the beginning of a really incredible explosion of new artificial intelligence companies that had all essentially come out of AI research projects at a few key Canadian schools. For a number of reasons I won’t go into now, because it takes a while, a few Canadian universities had been able to attract, and then keep, some really fantastic AI talent that now heads up AI departments at a lot of Silicon Valley’s major companies.
So being an attorney doing a lot of work with tech clients in Toronto at that time, I was exposed to this cool, very new AI ecosystem. I kept my ear to the ground and was meeting with a lot of different companies in the space, and one of them ended up being ROSS Intelligence, which is where I currently work. Their goal was to use breakthroughs in artificial intelligence, specifically something called natural language processing, which is the ability to ask a software system complex questions, like you’d ask another human expert, and then have that software system give you really good, on-point results.
They were applying it to legal research, which I was spending a fair amount of my time doing at the time with some of the partners I worked for, so I understood the pain point; I got what the product was trying to do. And I was at a firm that was really experimenting with overhauling its business practices and operations, because we were going through this series of many, many mergers in a row. Lo and behold, about 12 months later ROSS Intelligence moved down to Silicon Valley, I had an opportunity for a sabbatical, I came down, swung by their office a few times, fell in love with the project, and I’ve actually been in Silicon Valley ever since.
So that’s how I came down here. Now I split my time between our Toronto office and our Silicon Valley office, although I’m on the East Coast a fair amount as well. Basically my job, as the first employee, is speaking to folks everywhere and anywhere about what the company does, but also, I think more interestingly for this conversation, what’s happening in AI, how it’s already changing law firms, how it will continue to change law firms, and really demystifying a lot of the mystery around artificial intelligence. As an attorney, before my career change, I really didn’t know that much about it. So I like to jump on these calls and, like I said, demystify some of the misconceptions about AI and explain how we already use it in our everyday lives.
Gyi Tsakalakis: Yes, this is going to be very fun. So let’s start with — we’re going to build on this, but we are going to start with I have no idea what AI is, I’ve seen some movies —
Thomas J. Hamilton: Oh boy.
Gyi Tsakalakis: I heard it’s going to take over and replace us, like Ray Kurzweil, Singularity stuff.
Thomas J. Hamilton: Yeah.
Gyi Tsakalakis: So let’s hear from a human intelligence what are we talking about here with AI as it pertains to — well, let’s just start in general, let’s get it all out there, so that we can break it down into different versions of AI, but tell us about AI.
Thomas J. Hamilton: Yeah, okay. So let’s start from general principles. So basically AI — we’ll do the lawyer-philosophy-nerd thing here.
Gyi Tsakalakis: We knew you were the right person for this podcast, thank you.
Thomas J. Hamilton: You are putting a lot of pressure on me, Gyi. So the way it was explained to me when I first walked in the door — fresh, literally two days prior I had been at my firm, so I had just started my sabbatical — artificial intelligence, as explained to me by the AI researchers who founded the company, is software that can perform a task which we thought only a human could perform.
Kelly Street: Oh that’s perfect.
Gyi Tsakalakis: Makes sense to me.
Thomas J. Hamilton: It’s simple, it is elegantly simple, but it is sort of deceptively hard to unpack, because what we thought only a human could do is a moving target, both within our lifetime and very much so from generation to generation. So —
Gyi Tsakalakis: Spoiler alert, we’re not that special.
Thomas J. Hamilton: Exactly, exactly, and it’s a double spoiler: what you think is special about you and defines you is not what your grandchildren will think is special about them and defines them. In the same way that before the development of the automobile, or before the Industrial Revolution, there were tasks that only a human could do, and machines were then created that could first aid them and then outperform them, that didn’t mean the end of jobs, and it didn’t mean the end of humans or communities, but it shifted our conception of what makes us special.
And so with AI, the example I like to use when I’m speaking on this, because it synced up with my aha moment, is something like Siri, or also vision recognition systems. So I remember — I don’t know if any of you saw it, there’s a movie called Minority Report with Tom Cruise.
Gyi Tsakalakis: Yes.
Thomas J. Hamilton: Right, great movie.
Gyi Tsakalakis: Precogs.
Thomas J. Hamilton: And it was — was it Isaac Asimov or someone, maybe, who wrote the original story it was based on, called Minority Report? I’m not sure.
Gyi Tsakalakis: Was it? I thought Asimov was the I, Robot one. I don’t know who wrote Minority Report, but anyway, sorry.
Thomas J. Hamilton: So it was this example of — I remember seeing it, this cool action movie with that philosophical component, right, the Precogs and everything. But one of the craziest parts was that he’s walking around this future LA or future New York, Neo Tokyo, and machines keep recognizing his face, right?
And the idea that that was so far in the future, that it synced up with a future where we can psionically predict crimes, seemed like a totally reasonable correlation to me; whereas now, if you buy a new iPhone, it can recognize your face very, very well, right?
So that’s one of those examples, or Siri before that, where the concept of speech recognition or vision recognition systems performing at the 2019 level was literally science fiction when I was growing up. It was something you saw in a cool movie. And I have two nephews, and I guarantee you, by the time they’re 6 or 7 years old and they hear me talking to my brother-in-law and my sister about this, they’re going to be rolling their eyes. They’re definitely going to say, it’s so boring that you think it’s cool that I can talk to my phone; obviously I can talk to my phone.
So that’s one of those fun examples of how AI is always a moving target and kind of evolves and means different things to each generation.
Gyi Tsakalakis: This is great.
Kelly Street: Yeah, okay. So, man — okay, all right, I’m getting my head around this; I’m trying to wrap it all up in a tidy little bow. So I think the first thing, since this is about client development and marketing, our Lunch Hour Legal Marketing show, is: there’s AI; how are law firms using it?
Obviously, with ROSS Intelligence, you guys are doing this for legal research. What are the other ways, how is AI infiltrating law firms?
Thomas J. Hamilton: So I think that’s a really interesting question, and conceptually, I had to break down my understanding of what AI was doing by the different types of law firms. You always bring your own bias based on what your background is, and for me, a lot of my friends initially were starting at large law firms.
And so that’s the bias I brought in. Large law firms benefited from AI earlier than small law firms: A, because they had the budget, since AI systems used to be very expensive; and B, because they had a team of full-time technologists, Chief Information Officers, litigation practice support people, IT people, who were going to the conferences and reading up on this stuff ten years ago.
Smaller law firms are a bit newer to the party, but one of the general truisms you’ll hear about AI, and not just in legal, this has played out everywhere, is that generally speaking, AI levels the playing field for different businesses. By providing software that does something you used to need a human to do, it lets a much smaller team compete against a much larger team.
And generally, that’s one of the biggest differences between law firms: it’s really just a function of the total number of lawyers.
So I would say for big law firms, the first stuff they moved into was expert systems, and then playing around a little bit with e-discovery and contract analysis, but bespoke systems: expensive upfront to pay for, and then really expensive and time-intensive to build out. You couldn’t just unpack the software; you had to kind of build it yourself and train it yourself.
The second wave, which we’re a part of, is systems that are plug and play right out of the box, and those touch on really anything a law firm does that any other business does. I’ll give you a couple of simple examples: email marketing automation systems, email auto-completion systems, advanced CRM (Customer Relationship Management) systems that can start finding correlations between different groups of customers you have.
Things as practical as AI receptionists, or AI assistants for scheduling and even client intake; someone like Smith.ai does an incredible job with that. So it really runs the gamut, and then of course legal research on our end. Some of it is legal-specific, but to be honest, and this is really cool because it’s unusual for law as a vertical, most of it is just good AI tech that’s easy to use and affordable, and that you can trial for free. It just helps you run a small business better, and it helps a law firm in the same way that my parents, when they were working out of our home as consultants, would have benefited from the exact same technology.
Kelly Street: Got it, okay, I like that, because my follow-up was: you’ve got these different AI things, so how do you market that as a service? Or does it even make sense to market to potential clients that you’re using AI? Because what is a potential client going to get out of the fact that you’re using AI?
But putting it into that context, it makes sense, because, as you said, by using these different tools you have access to things that larger law firms, with more money and manpower, would have.
Thomas J. Hamilton: Yeah. The way I wrapped my head around it was: there was all this research going on for decades about what AI could do theoretically, and then, in the last 10 years or so, computers and data sets caught up, which means those theories can now go into practice.
So five years ago, you had the beginning of these theories being proven out in the real world, but it wasn’t mainstream yet; people didn’t know about them, and there was a lot of hype about AI. That’s right around when we started, and we experienced that ourselves.
I think when you fast-forward to 2019, you can generally interpret most applications of AI as just much better software: a much better version of software you already use that can perform very specific tasks really, really well. But they’re good at one or two things, and that’s why I say it’s almost like a second software revolution.
And in terms of marketing that to your clients: in a lot of ways, because these AI tools just help your business run more efficiently in many different narrow ways, like helping you with scheduling, or with client intake, or with running really fantastic nurture campaigns to different demographics on a cool newsletter you write on immigration law,
I think all the client sees is: wow, I like my law firm more; wow, Kelly is such an attentive lawyer. And that’s almost the proof that it works: behind the scenes there are a lot of extraordinarily complex algorithms at play, but for the client it’s often fairly simple. I think even when you go into something more legal-specific, like legal research, the beauty of the tools being created now is that they’re built around modern design methodology.
So while the technology is for lawyers to use, the companies building this tech in 2019 are made up of world-leading machine learning scientists, but also incredible designers, incredible customer success people, fantastic product owners, and they all come from very rich backgrounds in tech and other fields. Their whole thing, almost like Steve Jobs did with Apple, really their art or their magic, is making incredibly powerful software that’s really easy for a lawyer to use.
So sometimes the client and even the lawyer themselves doesn’t fully see it right off the bat.
Gyi Tsakalakis: Right. So you’ve helped us understand at least the starting point of what AI is and how it’s impacting at least some practices today. So now I’m a lawyer, I’ve heard this, and the fear sets in, because I hear you say efficiency, and I think: well, I spend my day doing e-discovery, or I spend my day doing legal research.
Talk to us about that, or maybe you’d say it’s justified fear, because maybe you think there’s going to be a massive consolidation in legal.
But tell listeners who are lawyers, who hear about this AI stuff and start going down the fear train about how it’s taking their jobs and it’s the end of lawyers. We had to at least mention that, or else people are going to say we never talked about how it’s going to replace us.
Thomas J. Hamilton: Yeah, so I would say that all the AI technology currently available — no, I’m going to take that back: the best AI technology available, even the most incredible stuff out there for lawyers right now, can’t replace you whatsoever. It can help you do specific tasks much better, and that can sound scary; it sounded scary to me when I first heard about it.
But think back 10 years, think back 20 years: there have been multiple times when new software or new technology came out that dramatically increased efficiency. I would posit that the arrival of email was a much bigger game changer, much faster, than some of the AI tech we’re seeing, in terms of the speed with which you can communicate, A, between two different lawyers and, B, between your client and yourself. The arrival of the Internet generally, even the arrival of cheap personal computing.
All these things were major changes and huge efficiency improvements to what a lawyer did, but the core of what a lawyer does, as a professional like an architect or an engineer, is applying very complex, abstract problem-solving to a number of different issues, and that’s never going to go away.
But look, at the same time, there are some tasks that change. If in the ’80s your firm purely did discovery — you ran your own firm, you were a sole practitioner, you only did discovery, you had a thriving practice — then when e-discovery came out, you could not keep running the same old business model. Now, you could change your business model and apply all that old experience to discovery using e-discovery. Some things evolve, but at the end of the day, that’s always been the case in law.
I think when you look at something like legal research, part of what attracted us to it, especially because a number of us were former attorneys, was that we knew there had been a trend going on for years and years: legal research used to be something you could always bill for, and that’s harder and harder to do. In fact, in some practice areas you don’t even try to bill your client for it; you’ve just completely moved to fixed fee.
That’s typically where we see the fastest adoption of our tool, both for efficiency and because it increases the accuracy of what you do. But there’s no question that there’s going to be change; e-discovery was the first big wave of it, and that’ll continue, but that’s nothing new in law. And I think that small or mid-size firms, like I was saying, stand to benefit the most from AI. With AI, across every industry, you see it again and again: it’s a great equalizer.
And especially the entrepreneurial firms that jump to it first are really going to see huge, huge gains. Something we’ve seen, a sea change that’s really incredible to see over the last four years since we started out, is that law schools themselves are starting to push this technological competency, and bar associations are as well; just look at ABA Model Rule 1.1. What that means is that the new generation of lawyers graduating is already going to be trained on a lot of the legal technology out there that didn’t use to be expected as mandatory to use.
And that means the profession is evolving in line with the institutions that create those professionals.
Gyi Tsakalakis: Right, very well said, very well handled; I’m very much in alignment on that. And I think the other thing too, and we touched on this before, is some of the ways it impacts client development. Clients are wising up, right? They expect you to be using this stuff that gives you efficiency gains; it’s a competitive advantage, like you said. I think it’s going to impact the way people deliver legal services and the business model in general.
And then there’s a bunch of other ways that don’t relate to just communicating to clients about the competitive advantages of using AI, and I think you touched on this a little bit too. It’s going to help you identify new segments of your customer base through CRM data; if you’re advertising online, it’s obviously already impacting programmatic bidding; and obviously, search engines are using this stuff.
So at a very minimum we can say that both in terms of the competitive advantage you bring to your next clients, and in terms of how you’re actually understanding who your audience is and who your most profitable clients are, AI is going to play a big role in the client development aspect of the practice, in at least both of those instances.
Thomas J. Hamilton: Yeah, and something I would add to that: most of the legal technology coming out right now is for the lawyers to use, so it’s business-to-business. But there is also a part of the industry that is consumer-facing. It doesn’t get as much press, but a very simple example would be the Lawstore through Walmart, where they’re selling consumer-facing legal services, and down the line they’re going to increasingly bring more technology to play there.
And the best example of this is Google. In a lot of ways, the technology available to the consumer of legal services has advanced faster than for the provider of legal services. Something we hear all the time that really resonates with folks who use our technology is, they say: look, before Google, a client could call me with a legal research question, or come into my office with a legal research question, and they would know very little about it.
I really had access to the information; I had the training; I had the JD on the wall. But with Google, it takes 30 seconds for a client to type a question into Google first, and then they call you as the lawyer. Your job is to be that trusted advisor, and they already actually know a lot more than they used to, because the consumer-facing side of legal has advanced faster than the business-facing side.
And so I think what we see playing out now is that these AI companies are addressing a need that lawyers have, because clients are more and more savvy in their consumption of legal services. Lawyers now almost want that same ease of use and that same power, but with the ability to bring all their knowledge and training to bear.
And that’s what you see in the creation of these AI systems: just as clients’ expectations have leveled up, lawyers are now able to level up what they can provide immediately following a client consultation. Even with the client still in your office, you’re able to jump onto software and provide much more value much faster, which is how you win new clients and keep your current clients happy.
Gyi Tsakalakis: Love it. Bravo.
Kelly Street: Yes, yes. So I have another kind of fear I want to bring up, or something that I hear, and this happens with all new or emerging technologies; people felt this way about email too, and to some degree maybe should have, about who’s reading their emails.
And that is the ethical concern around AI in regard to legal. So there are different AI technologies; there’s one I heard about, a new company that reads your case documents, and if there are dates or reminders that need to happen in those documents, it will put them on your calendar, or just a kind of “hey, check in about this again,” that sort of thing.
And I know there are some ethical concerns about that, the sort of: oh, it’s not the lawyer that’s reading this document. So how would you respond about the ethics of using this new AI technology?
Thomas J. Hamilton: So we get asked about this a lot, and the dialogue has, I think, evolved quite a bit and matured over the last four years. I’m going to try a new metaphor here, because it kind of just occurred to me, and then we can unpack it: I would say the ethical issues around AI are like when I was a summer student at my law firm.
A lawyer at the law firm was ultimately responsible for any work they gave to a client. That means if they had me do something as a summer student and then just sent it blindly to the client, and there were issues, that was bad, and it reflected on the lawyer; people would question that lawyer’s judgment, right?
But at the same time, say there was a client matter that required 40 or 80 hours of totally down-the-rabbit-hole, earphones-in, no-one’s-allowed-in-my-office, total-concentration work, and there was no conceivable way, with that lawyer’s workload, that they could ever, in the next 10 years, carve out 80 hours back-to-back to do that super-intensive needle-in-the-haystack search.
Well, if that lawyer farmed that work out to me as the summer associate to put in that first 80 hours and then deliver it in a nice package, left on their desk in duplicate, highlighted, tabbed perfectly, then they could spend the hour and a half they had available reviewing it and really helping their client.
Similarly, that lawyer’s judgment would be called into question if they literally refused to use the summer associate who was the only person with that 80-hour capacity. And I would say that’s probably a good way to think about the ethical considerations around AI: passing off all our responsibilities to AI is totally unacceptable, but refusing to use AI, and in doing so doing a worse job than you could have had you leveraged AI, is equally ethically irresponsible.
Kelly Street: Yes. So we had a chat before this, and I’ve since had other conversations about this same thing, but you told me there have been cases where lawyers refused to use email and were then sued by clients to get their fees back.
Thomas J. Hamilton: Yeah, yeah, I think you see all these historical examples where, look, as attorneys we have to follow the rules, right? If we don’t follow the rules, no one will.
So often a precedent has to be set that allows us to use new technology in a different way, and it’s unfortunate that sometimes that requires an attorney being disciplined. But you saw examples of that with email, where you had very senior attorneys who had a great relationship with a client, maybe a Fortune 500 client, whatever it is, but as they progressed in their career and refused to use email, the quality of the legal services they provided, no matter how brilliant they were, no matter how great the relationship was, became really, really bad, because they weren’t using email and the rest of the world was.
You also see the same example with e-discovery, right, where if you refuse to use e-discovery technology now, but you have a massive client matter involving a lot of discovery, the court is not going to award all your costs, because you’re just not keeping up with the technology anymore.
I mean, you can imagine a similar thing if someone refused to use personal computing. These were all technologies that came out with ethical issues around them — totally valid ethical concerns — but as you address them, and as the profession gets more comfortable with them, the onus actually shifts and you’re required to use them.
Once again, it goes back in a lot of ways to ABA Model Rule 1.1 — and a lot of different state bars have adopted that as well — in terms of the duty of technological competence. And I think we are entering a world where AI is starting to be viewed within those narrow applications, like just doing legal research, just doing doc review.
We’re entering a world where AI is viewed in a similar way. And on our end, I spend most of my time for months on the road at different conferences, different events, and I speak with many, many members of the judiciary, and they’ll say the same thing.
Officially, this stuff is just starting to enter the mainstream; we’re doing pilot projects with different groups, and we absolutely have judges that use our platform. But there is a feeling that fairly soon it’ll move from “hey, you’re using this cutting-edge new technology, which I’m okay with, but walk me through it” to “how are you going to tell me you spent 30 hours on legal research that you’re billing this poor client for, when I’m pretty sure that if you’d used some of the other tech out there it would have taken you two hours, and you also wouldn’t have missed that case?”
Gyi Tsakalakis: Right. This is super fun. So I have another slightly separate ethics rabbit hole to go down.
Thomas J. Hamilton: Yeah sure.
Gyi Tsakalakis: And I don’t — I was thinking about this, and I don’t know how much this applies to legal research, so you might say no, but I don’t want to be presumptuous, so let me kind of take us down the path and then you can tell us more, being the expert. So it turns out that another thing that is not limited to human beings is bias, and so algorithms are biased. They’re biased by the data they take in and a bunch of other things.
Kelly Street: I love this rabbit hole Gyi.
Gyi Tsakalakis: Great. Talk to us — talk to us about the bias of algorithms.
Thomas J. Hamilton: Yes, okay. So I actually just wrote a paper on this, and I will –
Gyi Tsakalakis: I did not know that, so this was not a softball.
Thomas J. Hamilton: Yeah, this is like the professor lecturing on some subject and then the 2L student just came back from summering at, like, the United Nations initiative on that exact thing. So, okay, the issue with bias in AI systems is very real, and the best way to sum it up is garbage in, garbage out.
So an AI system learns through a process called machine learning, which means it’s able to learn how to do tasks it was not explicitly instructed to do. And that’s the power of an AI system: it can learn how to do things that we don’t actually know how to explain how to do.
So, a simple example: an infant can easily tell the difference between, I don’t know, a dog and a cat, but it’s actually kind of difficult to explain to that infant how to do that, and the infant certainly can’t explain how it knows; it can just figure it out, right?
And then, as you grow, you’re able to learn how to do increasingly complex things that we as humans — and as developers on the computer science side — don’t actually really know how to explain to a computer how to do.
So basically you rely on different methods of training an AI system; there’s both supervised learning and unsupervised learning. I won’t go too deep into this rabbit hole, because that can get really complicated, but basically, just like an infant, if you receive poor mentorship or very poor instruction and training when you’re growing up, you’re not going to be able to perform the tasks you were trained on well — you’re going to have weird theories about them, weird ideas and weird biases. And at an extremely general level — I’m going to get beaten up by our engineers for simplifying it this much — AI works the same way. So bias in AI is a reflection of bias in the training that AI receives.
Now let’s break that down into some legal examples. Recently there have been a couple of examples of large financial institutions creating their own AI systems to automate business/legal processes. There was a major American bank — I think it was Wells Fargo, but I could be wrong on that, so don’t quote me — and a couple of years ago basically what they said is: listen, we have a huge repository of loan approval forms, loan approval applications, like hundreds and hundreds of thousands of them in a bunch of data warehouses, right?
We have a huge budget; what if we just had a bunch of people scan these documents and tag them with all the relevant information — so, Thomas Hamilton applied for a mortgage of this value in this city in this year, he has this income, he has this number of dependents, and so on. Usually we have humans do this stuff, but humans are biased in approving loan applications, so let’s make an AI; it’ll be perfect.
And then what ends up happening? Well, lo and behold, you trained your AI on loan applications that were evaluated by humans. So you actually created an AI system that has all the same biases as your human counterparts — and in fact it’s even worse than the human counterparts, because it’s unaware of the bias. It’s just doing the thing you trained it to do.
So that’s a very simplistic example of garbage in, garbage out, but it highlights, A, the risk of having bad training data, and B, the importance of having AI systems that just do one thing really well but leave a human in charge, as opposed to having an AI system that automates an entire process start to finish, like loan applications, because then you get some really, really frightening applications of the bias.
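The loan-approval story is easy to reproduce in miniature. Here is a hedged Python sketch — entirely hypothetical data, groups, and thresholds, not any real bank's system — showing that a model trained on biased human decisions simply echoes them:

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical toy version of "garbage in, garbage out": historical
# decisions came from a biased human process, and a model trained on
# those decisions reproduces the bias.

def biased_human_decision(income, group):
    """Simulated historical labeler: decides on income, but holds group B to a harsher bar."""
    threshold = 50 if group == "A" else 70
    return income >= threshold

# Historical "training data": (income, group) pairs with human labels.
applications = [(random.randint(20, 100), random.choice("AB")) for _ in range(5000)]
labels = [biased_human_decision(inc, grp) for inc, grp in applications]

# "Train" the simplest possible model: empirical approval rate per (group, income bucket).
counts = defaultdict(lambda: [0, 0])  # (group, bucket) -> [approved, total]
for (inc, grp), approved in zip(applications, labels):
    key = (grp, inc // 10)
    counts[key][0] += int(approved)
    counts[key][1] += 1

def model_predict(income, group):
    approved, total = counts[(group, income // 10)]
    return total > 0 and approved / total >= 0.5

# Two identical applicants, income 60, differing only by group:
print(model_predict(60, "A"))  # True  -- learned from the biased history
print(model_predict(60, "B"))  # False -- the bias is faithfully reproduced
```

The model never sees the group-based threshold directly; it only counts historical outcomes, which is exactly why the disparity survives training — and why it goes undetected unless someone audits the predictions.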
And to address the elephant in the room, right — because there are these fears and theories about what AI could do in a nightmare situation — one of the things that is frightening is AI systems for automated sentencing, or totally automated dispute resolution, based off of training data compiled from decisions in the past.
And while on the one hand that is an exciting, attractive idea — because, A, the courts are overburdened, and B, lawyers are very expensive, so if you just had an AI that could do it, then anyone could afford a lawyer — the problem is that if you trained an AI system to automatically decide the outcome of a court decision based off of, say, the last hundred years’ worth of case law, well, judges have biases as well, right?
So once again you face that garbage in, garbage out issue, and that’s why right now we’re in the era of the narrow application of AI. It can do one thing extremely well, but then a human has to interpret that information — as opposed to “let’s just train the AI,” where it’s kind of a black box and it just spits out guilty, not guilty, maybe kind of guilty.
Kelly Street: Ooh, sentencing via AI definitely needs to be made into some sort of a movie, if there isn’t already one that does that, because that is — that is a little bit scary.
Gyi Tsakalakis: Yeah, no, that was great. I appreciate you entertaining this issue, because I do think it’s a big one, especially for lawyers that might not be ready to cope with the recognition of their own biases — they’re all over this, like, well, there’s bias in the algorithms. And I’m not smart enough to answer this question, but I think a big question is, based on what we know now, is it even possible to eliminate the garbage in? I don’t know.
Thomas J. Hamilton: Yeah, I think that the garbage in issue will always exist and it –
Gyi Tsakalakis: Mitigating it.
Thomas J. Hamilton: Yeah, exactly, it’s mitigating it, and it’s being humble about what that constraint means in terms of what you should apply AI to, I guess is how I would describe it. Because there’s some stuff that it’s great for and some stuff that it isn’t, and that’s true of any new wave of technology, right?
Right upfront it’s marketed as, oh my God, the Internet will change literally everything — and it changed a lot of stuff, but some stuff it didn’t change at all, to be honest. Or, I don’t know, the arrival of rideshare apps will totally change everything. Well, it certainly changed the taxi industry, but it didn’t change a lot of other things.
And I think it’s sort of the same with a garbage in garbage out problem, because it’s just sort of an inescapable issue.
One of the things I would say has been interesting to see, especially having a number of lawyers on our team, is that I think sometimes as lawyers we fall into the trap of thinking, well, unless this new thing is perfect, I should continue with the status quo — which assumes that the status quo must be perfect, or at least so close to perfect that it’s not worth switching to the new thing.
And an example that I would use, that always resonates with our engineers when we’re training them, is: try doing legal research using Boolean logic on a few different search systems, and you literally won’t get the same cases. So there’s already bias in there, right, and that’s well documented.
There are tons of research papers on it; Susan Nevelow Mart, who is a law librarian, had a fantastic paper on that a few years ago. So, A, you’ll see the bias there, but B, I think more importantly, I don’t think that if you spend four hours doing Boolean research you’re going to come away from it and say this is a perfect process that doesn’t need to be improved on.
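The finding that the same Boolean query returns different cases on different systems comes down to each vendor's hidden text-processing choices. A tiny hypothetical sketch — made-up documents and a deliberately crude stemmer, not any real platform's algorithm — shows how one such choice, stemming, changes the result set:

```python
# Toy illustration of why the same Boolean query can return different
# cases on different research systems: each system makes its own hidden
# choices about text processing (stemming, stop words, and so on).

DOCS = {
    1: "negligent driver liability",
    2: "negligence of the driver",
    3: "driver acted negligently",
}

def tokens(text, stem):
    """Tokenize; optionally apply a deliberately crude stemmer."""
    words = text.lower().split()
    if stem:
        # Collapse negligence / negligent / negligently to one stem.
        words = [w[:8] if w.startswith("neglig") else w for w in words]
    return set(words)

def search(query_terms, stem):
    """AND query: return ids of documents containing every query term."""
    terms = tokens(" ".join(query_terms), stem)
    return {doc_id for doc_id, text in DOCS.items() if terms <= tokens(text, stem)}

# The same Boolean query on two "systems" with different processing:
print(search(["negligent", "driver"], stem=False))  # {1}
print(search(["negligent", "driver"], stem=True))   # {1, 2, 3}
```

Real research platforms differ in far more ways — stop words, phrase handling, proximity defaults, relevance cutoffs — so the divergence in practice is larger than this toy suggests.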
Gyi Tsakalakis: Exactly.
Thomas J. Hamilton: I think you would say this is a frustrating, very inefficient process where I spent a bunch of time and I honestly don’t actually know if I have the right answer for my client. And so there’s a bit of an opportunity cost fallacy there, but that’s on the technologists — that’s on the people making the AI software — to make the tech really easy to use, really easy to understand.
So for someone who is used to doing something — even though they know deep down, oh, this process kind of sucks, it’s not very efficient, but they’re used to doing it, they’ve been trained on it for years — it’s on us to make tech that’s so easy to use and so good that you still try it and you still like it, even though you’ve been doing it a different, inefficient way for 20 years.
Gyi Tsakalakis: Totally. And the example that always comes to my mind is people that are like, oh, the autonomous vehicle got in an accident, shut it down — and it’s like, well, it’s actually doing way better than we do at driving.
Thomas J. Hamilton: Yes, exactly. Yeah, it’s totally true, and I think it’s a funny thing. Someone said this to me once — I was at this legal tech panel three years ago — that as lawyers, because usually we’re the calm one in the room, the very studied one in the room, when something new pops up we tend not to pay it a ton of attention upfront. But then, when we decide to pay attention, we almost go all the way the other way; it’s kind of feast or famine. And sometimes I think that plays into our views on technology: from okay, I couldn’t care less about AI, to oh my God, AI is going to change everything and it’s going to be so powerful — and then when it doesn’t match that expectation, you almost discredit it.
The self-driving car is a perfect example. Oh my God, a person died in a self-driving car crash — and that’s horrible, right, that’s a tragedy — but like you said, even just per capita, relative to the number of self-driving cars, how many Americans die per year because of a human crashing a car?
Gyi Tsakalakis: Right.
Kelly Street: Yeah, I think that’s a really important mindset to have: not swinging one way or the other, but asking what are the bits of AI that make sense for me to adopt and try out, and seeing, as they evolve, how you can utilize them a little bit more.
Gyi Tsakalakis: Definitely.
Kelly Street: If we have a few more minutes, I did want to ask you at least one more question, Thomas, about where legal technology AI is going now. I’m sure you would probably have started a business on it if you had a golden-ticket idea, but is there anything, as far as AI and how it relates to legal, that you think is missing right now?
Are there any areas where AI could come into play over the next year or two years, five years?
Thomas J. Hamilton: I think the biggest thing, in terms of a dimension of law overall, is access to justice. What we’ve seen is that four years ago access to justice groups — pro bono groups, legal aid groups — were among the first to adopt AI technology, because their mission is really simple, right: just help as many people in need as possible.
And so if you had a tool that could help them do the same job in less time, they were the first to use it. But you see a lot of the constraints those groups face in terms of funding and staffing, and I think the next wave of AI — so if it started with a lot of consumer-facing and now it’s a lot of business-to-business — the next one is almost another wave of consumer-facing, where you have companies that create tech designed to dramatically streamline a legal process, but for a consumer.
So, tech that can basically auto-resolve your parking tickets or automatically file your personal bankruptcy — we’re already starting to see that in our network, and I think those companies are only going to continue to grow. From the perspective of a small or mid-sized firm, what I think is going to happen in the next year or two is that the tools that have been talked about a lot for the last few years are going to start spreading very quickly, away from just the big law firms that were their first users, to being used across the country in a lot of different markets, and you’re going to see some really cool partnerships in the space.
An example on our end is Jack Newton, the co-founder and CEO of Clio, which is obviously a very large, very successful legal tech company in the CRM and legal practice management space. Jack has actually joined our board. He’s been interested in AI from very early on — actually from before even starting Clio — and that’s just one example I cite to say that you’re going to start seeing very exciting, cool partnerships between some of the more established legal technology companies and some of the AI companies — although I shouldn’t say AI companies, because the established software companies incorporate AI as well.
But some of the junior companies that started purely as AI plays are really good at doing one or two things, and you’re going to see the creation of ecosystems of legal technology companies in a way we’ve never seen before. They’re going to do a really good job of providing a suite of different solutions, different technology uses, all designed to help you at every stage of running your law firm.
Kelly Street: Gyi, I think that might have been a prediction that you have made –
Gyi Tsakalakis: Nope, it was a great one.
Kelly Street: Partnerships and collaborations.
Gyi Tsakalakis: Yes no, this is great stuff. I’m glad that you brought that up, I think that’s a really important point. I have some really hard-hitting questions to ask you as we wrap this up.
Thomas J. Hamilton: Lay it on me.
Gyi Tsakalakis: All right, question number one.
Thomas J. Hamilton: Favorite last words.
Gyi Tsakalakis: Yeah favorite, this is Lunch Hour Legal Marketing, favorite lunch spots in San Fran and Toronto.
Thomas J. Hamilton: Okay, that’s the simplest question I’ve ever heard, okay.
Gyi Tsakalakis: Wait for question three.
Thomas J. Hamilton: All right, Toronto. Okay, so when I was in Toronto I was doing classic junior attorney hours, so I was not straying too far from the office. Let me think — there was a great spot on Spadina called Pho Hung, which I very highly recommend if you can get to it; if you’re right downtown it’s kind of a long walk, but it is excellent. And right by the office — oh, this is easy, and this is a crowd-pleaser.
Toronto street meat. Controversial announcement: it is the best street meat I’ve ever had, and New York City street meat is not good — I don’t know what the deal is with that. Toronto Polish sausage street meat, $3 Canadian, best guilty lunch food.
Gyi Tsakalakis: There we go.
Thomas J. Hamilton: San Francisco — San Francisco, I find, has better, cheaper, healthy lunch food than Toronto, even though San Francisco gets a bad rap for being overpriced, which it does deserve, but it has good affordable lunch food. I’m looking out my window right now — where do I typically go –
Kelly Street: I think Gyi is hoping for a taco place.
Thomas J. Hamilton: Oh okay, easy.
Kelly Street: If you’ve got a taco place in your repertoire.
Thomas J. Hamilton: Okay. So best tacos: Taqueria Vallarta on 24th Street, right across the street from El Farolito, which has the best quesadillas. But best tacos, hands down — like, get out of here if you even try to disagree — Taqueria Vallarta, $2.75 each, easily the best, still the best.
Gyi Tsakalakis: Yes.
Kelly Street: I’ve been to the other one, El Farolito; they have a couple of locations, I think.
Thomas J. Hamilton: Yeah.
Gyi Tsakalakis: Okay, thank you for handling that one. Question two, do you really wear a hoodie and flip-flops to work?
Thomas J. Hamilton: Okay. So I wasn’t totally sure if at the last minute you were going to do a fun surprise, where you would say we’re going to do a videocast, because sometimes that happens in my life.
So I’m wearing —
Gyi Tsakalakis: We can.
Thomas J. Hamilton: Well, if we do, you’ll see I’m wearing a collared sweater, and I’m wearing loafers, like deck shoes — that’s as dressed up as I get. And I would say last week I was two for five; I think it was Monday and Tuesday I wore a hoodie. So it varies, but yes, that is a real thing, and to be honest, when I go to our Toronto engineering office, and certainly when I go to the East Coast, I have to consciously pack a little bit differently at this point, which is weird to say as an attorney who was used to work meaning a suit and tie every single day.
Gyi Tsakalakis: Right. I was really hoping you were going to say you wore a collared hoodie, because I’ve not seen one of those, and you might be onto something there.
Thomas J. Hamilton: I like that. That’s a nice move. I would say sometimes I do the Patagonia zip up, which for Silicon Valley is sort of the suit I would say.
Gyi Tsakalakis: Okay.
Kelly Street: So the Silicon Valley Suit, Patagonia zip up.
Thomas J. Hamilton: Yeah, and why that is, I have literally no idea, but it’s a thing, and in 2019 it’s all about the gray Patagonia zip-up — also no idea why — and the extra-points move is the gray Patagonia zip-up vest, like a sleeveless jacket; not sure why.
Kelly Street: Yes, yeah, well because you know your arms get hot, right?
Thomas J. Hamilton: Exactly. Yeah the arms get warm, I don’t know. I really don’t know.
Gyi Tsakalakis: This is amazing. Now question number three.
Thomas J. Hamilton: Yes.
Gyi Tsakalakis: And I have to — you might not answer this one; we might have to talk about this in the future at some point, because when I asked Arruda this on an Ask Me Anything on Reddit, I think he was still dodging, so you can bring it up with him. I haven’t checked — maybe he answered it, but I honestly didn’t check today. But you have a lot of smart people over there at ROSS — it’s ROSS Intelligence after all.
Thomas J. Hamilton: Thank you, yes.
Gyi Tsakalakis: And you can ask this around the office too, in case you can’t respond to it right now. All right, are you familiar with the Fermi Paradox?
Thomas J. Hamilton: The Fermi Paradox I’m not going to –
Kelly Street: I’m, I am –
Thomas J. Hamilton: I am not going to cheat and buy time while I Google it.
Gyi Tsakalakis: Okay, well –
Thomas J. Hamilton: I am not familiar.
Gyi Tsakalakis: Perfect. So you’re not going to be able to answer now, because you have to go familiarize yourself — I’ll send you the link to the Wait But Why piece to read. The question is: are we rare, are we first, or are we f — I have to say it that way so we don’t have to put the explicit-lyrics warning on this podcast. And we’ll check back in with you at some point in the future to get a response.
Thomas J. Hamilton: I really like this. It’s almost like that scene in The Godfather: “some day, and this day may never come.”
Gyi Tsakalakis: Yes, this is exactly what this is.
Thomas J. Hamilton: At any moment I could be asked, and who knows who’ll ask me. It could be, I’m getting my Polish sausage in Toronto, I’m getting my tacos in San Francisco, and someone just looks up and asks me about the Fermi Paradox — and then I’ll know it was both of you.
Gyi Tsakalakis: Yes, you’ll be ready though.
Thomas J. Hamilton: I will be ready. That’s my pledge. If there’s one thing I can pledge in this hour, it’s that I will be ready when the Fermi Paradox person or people ask me the question again, wherever it may be — on earth, or I guess in space too, I should add.
Gyi Tsakalakis: Or in cyberspace?
Thomas J. Hamilton: Or yes, perhaps most likely in cyberspace come to think of it.
Gyi Tsakalakis: Thomas Hamilton, thank you so much for joining us today and indulging us and educating us.
Thomas J. Hamilton: Well, thank you so much for having me, it was a real pleasure and always, always really fun to talk about this stuff to folks that are in the space, that care about it, and it’s really just the beginning of what’s going to be possible with AI.
So anyone who is hearing about this stuff for the first time: stay tuned, stay interested. It’s really never been a more interesting or exciting time, especially for small and mid-sized firms, in terms of what AI is going to make possible for you. I wish I was starting my legal career right now, that I was in law school right now, because of how much this stuff is going to change our profession for the better.
Kelly Street: Awesome.
Gyi Tsakalakis: You’re the first person to say they wish they were in law school in a long time.
Thomas J. Hamilton: Don’t tell that to our law student intern, because she’s about to go into 2L.
Kelly Street: Yes, thank you Thomas.
Gyi Tsakalakis: Best friends forever I think.
Thomas J. Hamilton: Done.
Kelly Street: Awesome. And thank you so much to our listeners. Be sure to like and subscribe and rate us on Apple podcasts and thank you so much.
Outro: Thank you for listening to Lunch Hour Legal Marketing. If you would like more information about what you heard today, please visit legaltalknetwork.com. Subscribe via Apple Podcasts and RSS. Follow Legal Talk Network on Twitter, Facebook, LinkedIn and Instagram and/or download the free app from Legal Talk Network in Google Play and iTunes.
The views expressed by the participants of this program are their own, and do not represent the views of, nor are they endorsed by Legal Talk Network, its officers, directors, employees, agents, representatives, shareholders, or subsidiaries. None of the content should be considered legal advice. As always, consult a lawyer.