How can access to justice technologies be developed through a game? In this episode of Digital Edge, hosts Sharon Nelson and Jim Calloway talk to David Colarusso about the Learned Hands project. Learned Hands is a game in which lawyers spot possible legal issues in real people’s stories about their problems. This machine learning game identifies text classifiers for developing a new taxonomy that can be used to connect people with public legal help resources.
David Colarusso is the Director of Suffolk University Law School’s Legal Innovation and Technology Lab.
Special thanks to our sponsors, ServeNow, Scorpion, Answer1, and Clio.
The Digital Edge
Using Gamification of Access to Justice to Train Artificial Intelligence
Intro: Welcome to The Digital Edge with Sharon Nelson and Jim Calloway. Your hosts, both legal technologists, authors, and lecturers, invite industry professionals to discuss a new topic related to lawyers and technology. You are listening to Legal Talk Network.
Sharon D. Nelson: Welcome to the 133rd edition of The Digital Edge: Lawyers and Technology. We are glad to have you with us.
I am Sharon Nelson, President of Sensei Enterprises, an information technology, cybersecurity and digital forensics firm in Fairfax, Virginia.
Jim Calloway: And I am Jim Calloway, Director of The Oklahoma Bar Association’s Management Assistance Program. Today, our topic is Using Gamification of Access to Justice to Train Artificial Intelligence.
Sharon D. Nelson: Before we get started we would like to thank our sponsors.
Thanks to our sponsor Clio. Clio’s cloud-based practice management software makes it easy to manage your law firm from intake to invoice. Try it for free at clio.com. That’s clio.com.
Thank you to Answer 1, a leading virtual receptionist and answering services provider for lawyers. You can find out more by giving them a call at 800 Answer 1 or online at answer1.com.
Jim Calloway: Thanks to Scorpion. Scorpion sets the standard for law firm online marketing with proven campaign strategies to get attorneys better cases from the Internet. Partner with Scorpion to get an award-winning website and ROI positive marketing programs today. Visit scorpionlegal.com/podcast.
Thanks also to ServeNow, a nationwide network of trusted prescreened process servers. Work with the most professional process servers who have experience with high volume serves, embrace technology, and understand the litigation process. Visit serve-now.com to learn more.
We are very pleased to have as our guest David Colarusso, who is the Director of Suffolk University Law School’s Legal Innovation and Technology Lab. An attorney and educator by training, he has worked as a public defender, data scientist, software engineer, and high school physics teacher. He is the author of QnA Markup, a programming language for lawyers, and is an award-winning legal hacker, ABA Legal Rebel, and Fastcase 50 honoree.
So thanks for joining us today, David.
David Colarusso: Thanks Jim and Sharon for having me.
Sharon D. Nelson: Well David, start by telling us, if you would, a little bit about the Learned Hands project at Suffolk Law School (and I love that name) and how you came to be involved in the project?
David Colarusso: Yeah, well, the most important thing obviously is the name there, because you have to have some wordplay. This all came out of a hallway conversation at last year’s TIG Conference, the Legal Services Corporation’s Innovations in Technology Conference.
I was having a hallway conversation with Margaret Hagan from Stanford’s Legal Design Lab, and she was sharing some of the work she was doing with taxonomies of legal issues, trying to figure out a common way to represent legal issues to help people be able to find them online.
And someone from Microsoft came by, and they were talking about how they were having a hard time building text classifiers to take people’s questions and label them. It was just this combination of events. I said, well, you know what, you’re having a problem, and that’s probably because you need more training data. We had just seen a talk by one of the moderators of Reddit’s r/legaladvice forum about how they get all these questions from people.
Now, you can say what you want about the answers people get on Reddit, but these were real questions from real people with real issues. Reddit calls itself the front page of the internet, so these were things people were expecting to be out there for all the world to see, and the moderators, and actually the policies on the forum, make sure people scrub them of personally identifying information.
So I said, what we should do is go get those questions from the Reddit folks, and then we can use that as data we can label. We just make a game and have people label that data with the taxonomy that you’re putting together, Margaret, and then you can easily train a machine learning algorithm to spot those issues, and that solves the problem for the folks having trouble building a classifier. That was the seed, and we went from there and did just that.
Sharon D. Nelson: It sounds so simple but it’s pure genius.
David Colarusso: It really comes down to the fact that there just wasn’t a good data set out there that you could use to train machine learning to spot issues in lay people’s legal questions, and if there’s one thing machine learning needs, it’s a lot of data. So we just had to make that data.
Jim Calloway: Well that is extremely interesting and what exactly does this project hope to accomplish?
David Colarusso: Well, there are three main things it’s looking to do. The first is the creation of this taxonomy that Margaret’s lab was already working on. The second is the creation of a labeled data set that we’ll make publicly available for other people to use, either for benchmarking purposes or to train their own machine learning classifiers. To do that, we took this sample of text from Reddit and put together a game, the Learned Hands game, and the name there is of course a joke, which makes sense when you think about it as "many hands make light work," right?
So it’s Learned Hands. We ask people to come in, they’re presented with a question that someone has asked on Reddit, and they’re asked to classify it against a set of issues that have been identified in the taxonomy. We combine those answers together, and then you have a labeled data set.
So you have some bit of text, and the labels say these three or four issues are present in this bit of text, and then we can use that to train a machine learning classifier to spot whether or not those issues appear in new text.
To do that, you have to get hundreds or thousands of those labels together, so we’re having a lot of people come in and play this as a game, and we use that to create this labeled data set. That’s our second thing.
And then the third thing is that eventually you want to train these classifiers, so you actually make tools where you can feed in some text and they will spot these issues and say, oh, I think there’s a good chance that these three things are there.
And then of course, all of these together are really kind of a Trojan horse, if you will, to get people to adopt the taxonomy. The taxonomy we are working with is the second version of the NSMI, the National Subject Matter Index, which is a taxonomy that some folks in civil legal aid will recognize.
And the idea there is that it serves as a common language that people can use to talk about their issues. Whether or not they use the classifier itself, the classifier becomes the thing that gets people to want to use the taxonomy. By adopting it, they label all their content with that taxonomy, and now all of a sudden, if someone says, hey, I need issue 275, then someone else (or someone else’s computer) knows what that means and can act on it really quickly, and this helps get people to where they need to be.
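The "common language" idea David describes can be sketched in a few lines of Python: once everyone shares the same codebook, any system can resolve a code to its meaning. The codes and labels below are invented for illustration, not actual NSMI entries.

```python
# A minimal sketch of a shared taxonomy acting as a common language.
# These codes and labels are hypothetical, not real NSMI entries.
TAXONOMY = {
    "HO-01": "Housing: Eviction",
    "HO-02": "Housing: Repairs and Habitability",
    "FA-01": "Family: Child Custody",
}

def describe(code: str) -> str:
    """Any system that shares the taxonomy can resolve a code to its meaning."""
    return TAXONOMY.get(code, "Unknown issue")

print(describe("FA-01"))  # -> Family: Child Custody
```

Because both sides agree on the codebook, a request for "FA-01" means the same thing to a legal aid office, a court website, and a machine routing questions.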
Sharon D. Nelson: Well, I think you might have answered a part of this question already, but why is it so important that you create this new taxonomy?
David Colarusso: Well the example I would like to use is, we all know that attorney in the office who still uses WordPerfect, right.
Jim Calloway: Yes.
Sharon D. Nelson: Yes we do.
David Colarusso: They send over their brief and you try to open it up and you can’t, right. So the idea is that there need to be standards in the way people communicate information, and what the taxonomy does is provide a standard format for people to talk about the same issue. This becomes really important in a lot of different areas, but you can imagine it just in web search. If you’re searching for some issue, imagine a world where all of these issues were in the metadata of the webpage, that is, in the part that machines read, not the part people read. Then the search engine could recognize that, oh, these three pages are about custody issues, group them together, and more reliably get that information to people when they’re searching for it.
So it’s really about having a common language so that different people can be talking about the same thing in the same way which really allows them to cooperate better. And of course when I say different people, I mean different people and machines.
Sharon D. Nelson: Oh boy.
Jim Calloway: Oh this new world we are going into. So David, how do you see the fruits of this project being used ultimately?
David Colarusso: Well, obviously there’s this idea of people building tools that can talk to each other, which is central to the goal we’re going for, but the thing people always want to talk about is what these classifiers are and what they can do. Really, it’s about feeding in some text and doing some issue spotting, and there are two use cases I find really exciting. First, consider these online lawyer-for-the-day setups.
A lot of different Bar Associations will run these, where you go to a website, it says ask your question, and that question gets routed to an attorney on a limited representation basis who helps answer your legal question.
Now, the dirty little secret behind a lot of these projects is that a lot of questions go unclaimed by an attorney. And it’s easy to see some of the reasons why that might be, right. If I’ve been doing housing law for 30 years, I don’t want to pick up someone’s IP question. So when someone comes in with a question, it has to somehow be labeled appropriately so it can be routed to the right volunteer attorney.
One of the things these classifiers could do is just read over that text and say, oh, well, it has these three issues in it, that seems relevant for this volunteer attorney over here, let’s route it to their attention. So just being able to queue things up and direct them to the right people is one of the use cases I’m really excited about.
In a similar way, you can imagine it being used by folks looking for resources on, say, a court service center website, where they come in with a question but don’t quite know the right term of art to use to find an answer. If you know what you’re looking for, you can just search for it in the search bar, but if you’re coming in saying, oh well, I’m getting a divorce and there’s something to do with the kids, and you don’t know to call that a custody issue, there are still common ways that people talk about custody issues.
And so what this will do, because we’re training on people’s real questions, is learn over time how people actually talk about issues without using legalese, and that allows us to translate lay people’s questions into the taxonomy.
It says, hey, it’s this type of issue, and then you can match that up on the backend: they have this type of issue, what resources do we have that are relevant, and you just match people up.
So those are two of the use cases I’m really excited about, really just lessening the distance between people expressing their needs and getting the resources they need.
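The routing use case David describes can be sketched simply: a classifier tags a question with issue labels, and the question is queued to volunteer attorneys whose stated specialties overlap those labels. The names, labels, and data structure here are all invented for illustration; they are not part of the actual Learned Hands system.

```python
# Hypothetical sketch of issue-based routing: match a question's predicted
# issue labels against volunteer attorneys' declared specialties.
VOLUNTEERS = {
    "Alice": {"housing", "eviction"},   # 30 years of housing law
    "Bob": {"ip", "trademark"},
}

def route(predicted_issues: set[str]) -> list[str]:
    """Return volunteers whose specialties overlap the predicted issues."""
    return sorted(
        name for name, skills in VOLUNTEERS.items()
        if skills & predicted_issues  # any overlap qualifies
    )

print(route({"eviction"}))  # -> ['Alice']
```

A real system would also weigh workload and availability, but the core matching step is just this set intersection.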
Jim Calloway: Well, before we move on to our next segment, let’s take a quick commercial break.
Feel like your marketing efforts aren’t getting you the high-value cases your firm deserves? For over 15 years, Scorpion has helped thousands of law firms just like yours attract new cases and grow their practices. As a Google Premier Partner and winner of Google’s Platform Innovator Award, Scorpion has the right resources and technology to market your law firm aggressively and generate better cases from the Internet. For more information, visit scorpionlegal.com/podcast.
Sharon D. Nelson: Is your firm experiencing missed calls, empty voicemail boxes and potential clients you will never hear from again? Enter Answer 1 Virtual Receptionists. They are more than just an answering service. Answer 1 is available 24×7. They can even schedule appointments, respond to emails, integrate with Clio, and much more. Answer 1 helps make sure your clients have the experience they deserve. Give them a call at 1-800-answer 1 or visit them at answer1.com/podcast for a special offer.
Sharon D. Nelson: Welcome back to The Digital Edge on the Legal Talk Network. Today, our subject is ‘Using Gamification of Access to Justice to Train Artificial Intelligence’, and our guest is David Colarusso, the Director of Suffolk University Law School’s Legal Innovation and Technology Lab.
So David, how in the heck do you go about creating a taxonomy?
David Colarusso: Well, the person who would be better able to answer this question is my partner in crime over at Stanford’s Legal Design Lab, Margaret Hagan. Her lab has been taking point on creating this new taxonomy, and what they have done, basically, is take an existing taxonomy, the NSMI, the National Subject Matter Index, which is used by some people in the civil legal aid space.
They’ve asked, well, where are there holes in this and what can we add to it, and also where is there duplication, where can we combine things together, to try to bring it from a taxonomy that right now has some 2,000-odd entries into something more manageable.
The idea is to create a hierarchical structure of, say, about 20 parent categories with child categories under them, which is to say, the difference between, say, family law and housing law, and then within those a breakdown of issues: eviction, contract issues, et cetera.
So what Margaret has done is a lot of user interaction. She’s talking to subject matter experts and having them do card sorting exercises, where she’ll ask, what are the things you think are important inside this area, and then seeing whether they agree with what she’s hearing from other people. We’re also doing some machine learning work, looking at the text we have now and doing latent topic modeling, which is basically a fancy way of saying we find groups of questions where people are talking about roughly the same thing, which is to say they’re using the same words. Then we can say, well, there are something like five clusters in these housing questions people are asking; what do those correspond to? We use that to interrogate the questions we have from Reddit, which, I hadn’t mentioned, is 75,000 questions that the moderators over there have provided us with.
So that’s a pretty good data set, although, as I’m sure we’ll talk about later, it’s not the end-all be-all, because the people on Reddit are not representative of all of our clients. But that’s something we can talk about more later.
So yeah, so it’s just a bunch of talking to subject matter experts and testing things out and making sure we got all our bases covered.
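The intuition behind the topic modeling David mentions (questions that use the same words tend to be about the same thing) can be shown with a toy example. Real latent topic models such as LDA are far more sophisticated; this sketch just greedily groups questions whose vocabularies overlap, and all the question text is invented.

```python
# Toy illustration of topic clustering: group questions by word overlap.
# Real topic modeling (e.g., LDA) is probabilistic and much richer.
def words(text: str) -> set[str]:
    return set(text.lower().split())

def similar(a: str, b: str, threshold: float = 0.25) -> bool:
    """Jaccard similarity of the two questions' vocabularies."""
    wa, wb = words(a), words(b)
    return len(wa & wb) / len(wa | wb) >= threshold

questions = [
    "my landlord is evicting me from my apartment",
    "landlord won't fix the heat in my apartment",
    "how do i get custody of my kids",
]

# Greedily assign each question to the first cluster it resembles.
clusters: list[list[str]] = []
for q in questions:
    for cluster in clusters:
        if similar(q, cluster[0]):
            cluster.append(q)
            break
    else:
        clusters.append([q])

print(len(clusters))  # -> 2 (the two landlord questions group together)
```

Running this, the two landlord questions land in one cluster and the custody question in another, which is the kind of grouping an analyst would then inspect and name.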
Jim Calloway: David, I follow somebody on Twitter who likes to pull out the craziest question-and-answer sets from that particular forum; there have been some entertaining things posted. So how do you go about creating these text classifiers, and how do you know they’re accurate once created?
David Colarusso: Obviously the key is getting people to label these texts, and as you said, people go through and read r/legaladvice questions just for fun. So we’re hopeful that people will want to go play our game just so they can read these questions, and at the same time provide something back to the community.
Actually, if anyone listening is with a Bar Association or a firm that gives out pro bono hours, we think playing this game should count. So talk to us and we can help you out, and maybe help out a bunch of other folks.
But basically, what we do is take those labels and train on them, which is the term of art we use when we talk about machine learning. The example I tend to use is your email filters, right. I work at a university, so I have a filter that takes the emails that come in from mailing lists and puts them into a little folder called Free Food.
You might think about what things to look for in an email to decide whether to throw it into the Free Food folder. Basically, if the word pizza shows up, or the word cheese or wine shows up, it gets thrown into that Free Food folder.
Now, I was able to build those keyword searches based on my knowledge of the free food emails that come across the transom. What we do instead is have people label the text, saying this is housing law, this is family law, without telling the machine to look for specific words. But we build up enough examples that the machine can go in and find patterns.
Some of those patterns have to do with finding specific words; some of it is a little more complicated than just word search, and there are some semantic things that come up. But basically, it’s taking that text, turning it into numbers, looking for patterns in those numbers, and then trying to spot those patterns in new texts as they come in.
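The contrast David draws, hand-written keyword rules versus patterns learned from labeled examples, can be sketched with a bare-bones bag-of-words scorer. The training texts and labels below are invented, and this is nowhere near the project's actual model; it only shows how labels, rather than hand-picked keywords, drive the prediction.

```python
# Minimal sketch: learn word counts from labeled examples, then predict
# whichever label's training vocabulary best matches a new text.
from collections import Counter

training = [
    ("my landlord locked me out of my apartment", "housing"),
    ("the landlord never returned my deposit", "housing"),
    ("my ex wants to change our custody agreement", "family"),
    ("we are getting divorced and disagree about the kids", "family"),
]

# Learn per-label word counts from the labeled examples (no hand-coded rules).
counts: dict[str, Counter] = {}
for text, label in training:
    counts.setdefault(label, Counter()).update(text.lower().split())

def classify(text: str) -> str:
    """Pick the label whose training vocabulary best matches the text."""
    tokens = text.lower().split()
    return max(counts, key=lambda lab: sum(counts[lab][t] for t in tokens))

print(classify("my landlord is evicting me"))  # -> housing
```

Notice that nobody told the model "landlord means housing"; that association fell out of the labeled examples, which is exactly the benefit David describes.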
And so, the way we know we’re doing any good is that we take the labeled data that comes in, set about 20% of it off to the side, and then train our model on the remaining 80%. It comes up with some model that it thinks can take in a new text and make a guess as to what’s going on there.
Then we take that same model and test it on the 20% we held out, which it has never seen before, and we look at how well it does across a number of metrics. Accuracy is not the only metric we look at, because you can make a really accurate model in some cases quite easily, right. I can make a 98% accurate model of whether or not there’s going to be a snow day here at the university by just always guessing no.
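The evaluation David describes, and his snow-day caveat, can be sketched in a few lines: hold out 20% of the labeled data, then notice that on lopsided data a model which always guesses the majority class looks accurate while learning nothing. The data here are invented for illustration.

```python
# Sketch of an 80/20 holdout plus the "always guess no" accuracy trap.
labels = ["no"] * 98 + ["yes"] * 2  # 100 days; only the last 2 are snow days

split = int(len(labels) * 0.8)
train, holdout = labels[:split], labels[split:]  # a real model trains on `train`

def always_no(_day) -> str:
    """A degenerate 'model' that always predicts the majority class."""
    return "no"

# Score only on the held-out 20% the model has never seen.
accuracy = sum(always_no(d) == d for d in holdout) / len(holdout)
print(accuracy)  # -> 0.9, high accuracy, yet it never sees a snow day coming
```

This is why accuracy alone is not enough; metrics like precision and recall on the rare class expose the degenerate model immediately.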
Sharon D. Nelson: So why does machine learning need labeled data?
David Colarusso: Yeah, so that label is what’s providing the signal. The machine doesn’t have the knowledge that, say, when you’re looking for free food you should look for food words and the types of food people mention; it’s not reasoning from first principles. What it’s doing is looking for patterns, and that labeled data provides a lot of examples. It says, here’s a bunch of things of one type and here’s a bunch of things of another type, and then it looks for patterns. The benefit is you don’t have to explicitly code what it is you’re looking for.
So I don’t have to tell it to look for pizza, look for cheese, and enumerate every possible way someone talks about something. Hopefully those things have been enumerated in the examples I have, but I don’t have to think about that as the person programming it. And the important thing here is that we’re trying to catch people as they’re really talking about their issues, not using the legal terms of art that we’re used to.
Someone’s not going to come in and say they have an issue of collateral estoppel. People are going to talk about things in their own way, and so what we want to do is make sure we have the labels there, in our known terms, and train the machines to see that.
Jim Calloway: How does the gamification aspect help?
David Colarusso: Well, the gamification here really goes back to the title: many hands make light work. We have this game, and I suppose at some point I should share the URL. If you go to learnedhands.law.stanford.edu, you’ll find the game there. You’re presented with these questions and asked to label them, and you’ll see you can do all sorts of things; there’s a leaderboard, so you can compete against your friends, and all the gunners out there from law school can relive their glory days.
The idea is that hopefully that motivates people to come in and do the work of labeling, because this is the type of thing where you need hundreds or thousands of examples, so we need to get people involved. We don’t set any limits on who can get involved, and we use the wisdom of the crowd to figure out whether or not something gets a label.
We’re clever about this in that we don’t have a fixed number of times someone has to label something as either there or not there before we say it’s there. We make some statistical assumptions, and it basically works like this: the first time someone sees a question, if one person says an issue is there, okay, great; then we ask someone else, and if enough people agree, we can say, okay, we’re done with this, that thing is really there.
If there’s disagreement, it basically holds on to that question and shows it to more people until it can get confident, and the confidence we’re looking for is a 95% confidence level. So for everything that gets labeled, we can say with 95% confidence that the true percentage of people playing the game who would label it as being present (or not present) is more than 50%.
David Colarusso: So that’s really a way to group everyone together. It’s not quite a one-person, one-vote scenario; statistically, it’s as many votes as we need to be sure one way or the other about whether to apply the label.
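The adaptive voting David describes can be sketched as a stopping rule: keep showing a question to players until a 95% confidence interval around the observed vote share excludes 50%. This uses a simple normal-approximation interval; the project's actual statistics may well differ, so treat the function below as an assumption-laden illustration.

```python
# Sketch of confidence-based crowd labeling: stop collecting votes once the
# 95% interval around the vote share no longer straddles 50%.
import math

Z95 = 1.96  # z-score for a two-sided 95% confidence interval

def decided(yes: int, total: int) -> bool:
    """True once the 95% interval around the vote share excludes 50%."""
    if total == 0:
        return False
    p = yes / total
    half_width = Z95 * math.sqrt(p * (1 - p) / total)
    return (p - half_width > 0.5) or (p + half_width < 0.5)

print(decided(5, 5))  # -> True  (unanimous votes settle quickly)
print(decided(3, 6))  # -> False (a split vote keeps the question in play)
```

Unanimous agreement resolves a question after a handful of votes, while contested questions keep circulating, which is exactly the "as many votes as we need" behavior described above.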
So gamification is a way to spread out the work and hopefully get people doing something, and doing good, by having some fun.
Jim Calloway: Before we move on to our next segment, let’s take a quick commercial break.
Looking for a process server you can trust, Serve-Now.com is a nationwide network of local prescreened process servers. ServeNow works with the most professional process servers in the country. Connect your firm with process servers who embrace technology, have experience with high volume serves, and understand the litigation process and the rules of properly effectuating service. Find a prescreened process server today. Visit Serve-Now.com.
Sharon D. Nelson: Imagine what you could do with an extra 8 hours per week. That’s how much time legal professionals save with Clio, the world’s leading practice management software. With intuitive time tracking, billing and matter management, Clio streamlines everything you do to run your practice, from intake to invoice. Try Clio for free and get a 10% discount for your first six months when you sign up with the code TDE10. Of course, you can find Clio at clio.com. That’s clio.com.
Sharon D. Nelson: Welcome back to The Digital Edge on the Legal Talk Network. Today our subject is Using Gamification of Access to Justice to Train Artificial Intelligence. And our guest is David Colarusso, the Director of Suffolk University Law School’s Legal Innovation and Technology Lab.
We hear a lot, David, about algorithmic bias. Tell me how it applies here, and how you solve the problem if it presents one?
David Colarusso: Yeah, so normally when people are talking about algorithmic bias, they are talking about baking in the biases that already exist out in the real world.
So if we’re talking about feeding a bunch of data to a machine and using that to spot patterns, if there are some discriminatory patterns in that existing data, they’re going to be replicated by the machine.
In this case, what that would look like is maybe only being able to recognize the problems of the people whose questions we have. The main corpus, the main collection of data we have, is people’s questions on r/legaladvice.
Well, the folks using r/legaladvice tend to skew young, male, and white, and that’s not necessarily the population we’re seeing in all of our legal service agencies. So that could present a type of bias, in that the machine would only learn to recognize problems as that population asks about them.
So to address that, we’re actually interested in hearing from you if you’re out there and you have your own data sets. If you are a legal services agency or a legal aid office and you have questions that look like these, cold-call emails, questions from your website, and you think they could be useful, then we’d love to be able to take those in and, in a secure way, label them internally.
We can make it so that only you label them, or only some of our staff label them, so we won’t make them available in the big game for everyone to label. But the idea is that we can take that data, label it, and then use it as part of the labeled data set when we train our classifiers. That way our classifiers aren’t just recognizing the patterns in one population’s questions; they can see the patterns across multiple populations. That makes the classifiers better and more robust, and it avoids the bias that can come from selecting questions from only some folks.
Sharon D. Nelson: That’s such an interesting question, because I’ve always heard about bias in terms of, for instance, when they use algorithms to determine whether someone is likely to commit a crime again, where the bias is basically ethnic. So in this case, I guess you’re quite right: the people who are doing all this for you and playing the game would come largely from a particular kind of population, and they might miss a lot that relates to the same issues but wouldn’t necessarily recognize it.
David Colarusso: Yeah, and the issue isn’t so much with the labelers, because the labelers are hopefully identifying fact patterns in the data. It’s in the way people are asking their questions or communicating those facts.
Sharon D. Nelson: Right. Right.
David Colarusso: So some people might be using something that sounds like legalese and some people might be saying I was kicked out of my place.
Sharon D. Nelson: Yes. Yeah, I got it, but boy that was really interesting.
Jim Calloway: As somebody who was in private practice for several years, it’s sometimes amazing how people call in to your office and self-label the questions in very unusual ways.
Sharon D. Nelson: Yes. Indeed.
David Colarusso: It’s never sovereign citizen.
Jim Calloway: Well, obviously many legal questions are deeply personal, how do you address issues of privacy around these legal questions?
David Colarusso: Right, so there are a couple of layers to this answer. The first one, which I sort of talked about before, is that the questions being used in the game portion that is open to anyone to vote on are questions that were posted to a site that describes itself as the front page of the Internet, right. So there was no expectation that this wasn’t going to be shared with the world, and also, like I said, there are policies in place in that forum to remove PII, and moderators who work with authors to do that.
And actually, the questions we got came from the moderators over at r/legaladvice, as a way of engaging with that community. So that’s on labeling the existing data.
As far as those other questions go, where we said if you have data, talk to us, we might bring it in, the idea is that we would label it and then train classifiers on it, and what we share with people is not that labeled data. When we talk about sharing the labeled data in the end, we’re obviously only talking about sharing labeled data where the text is already public, so basically that Reddit data.
The other labeled data we can use to train classifiers, and we can share the classifiers, so the community can get the benefit of that more robust model without us actually sharing the underlying data we used to train it. In that way, we can take in and get the benefit of data from a bunch of different groups, but not undermine the privacy or the confidentiality of that information as we share out the classifiers.
And then as we go forward and these classifiers start getting used, we are going to be looking at opportunities for collecting data from the populations actually engaging with them, so we get a more diverse set. But there, we will also have very thoughtful constraints on how and when we store that data.
So let’s say it’s on a website where it’s clear the question is not going to be addressed to a lawyer, so no one expects there’s going to be an attorney-client privilege; then we might allow people to opt in: yes, you can use my data to help improve the algorithm, that sort of thing.
But if someone is asking legal questions online where, even though there might not be attorney-client privilege at the beginning because you haven’t actually talked to the attorney yet, there is an expectation that something like that might occur, then when we create these classifiers we might make it so that they only store the information long enough to make a classification and then purge it. They would never actually keep it; it would only be available to the attorneys it ends up with in the end.
So we’re going to be thoughtful about this. I guess the answer is being careful about what we do or don’t release as labeled data, and being thoughtful about how we collect data in the future as we actually implement the system.
Sharon D. Nelson: So how would you evaluate the success of Learned Hands thus far?
David Colarusso: I’m going to say it’s successful enough that everyone should play, but not so successful that you should think you don’t have to play. We’ve been up and running for a couple of months now, and checking the numbers here, just around 400 folks have actually signed in and done actual labels.
So we’ve had thousands of folks come visit the site, but as far as people who have actually engaged with the game and labeled things, we have about 400 or so users. As for monthly active users, people who come back and use it sometime within a month, the numbers have varied, but they’re somewhere between 70-some-odd and about 100 folks a month, which is pretty good.
So that’s a pretty good critical mass of folks who come by and keep playing the game, and right now that’s the main measure of success we’re looking at, because what we want is to get enough people labeling this stuff to build a good data set we can use to train our classifiers.
We’ve started to train some preliminary classifiers on the data we have, and those actually come out doing pretty well, which is to say they’re better than a coin flip at guessing whether or not something is there. There’s a little more nuance to that, which I get into in some of my writings (I don’t know if we can throw a link up, and I could talk forever), but the point is that they’re better than just guessing, and they will only get better the more data we get.
So really, what I’d encourage listeners to do is go to learnedhands.law.stanford.edu and play the game, because then they can actually help make it more successful. The more labels we get, the more quickly we’ll be able to create these classifiers, and the quicker we’ll be able to get a tool out that folks can use to help get people to the resources they need.
Sharon D. Nelson: Well, I think this really is a very creative idea. I love how gamification sometimes really makes a difference in how rapidly you can progress and in the quality of the work, and this is a very creative use of it. I love that it has such a noble purpose too.
So thank you for being our guest today David and talking to us about the project.
David Colarusso: And thank you for having me. I should mention that all of this work is being generously funded by the Pew Charitable Trusts, so they’re also someone to thank for this.
Sharon D. Nelson: Well indeed, we thank them as well.
And that does it for this edition of The Digital Edge Lawyers and Technology. And remember, you can subscribe to all of the editions of this podcast at legaltalknetwork.com or on Apple Podcasts. And if you enjoyed our podcast, please rate us in Apple Podcasts.
Jim Calloway: Thanks for joining us. Goodbye Ms. Sharon.
Sharon D. Nelson: Happy trails, cowboy.
Outro: Thanks for listening to The Digital Edge, produced by the broadcast professionals at Legal Talk Network. Join Sharon Nelson and Jim Calloway for their next podcast covering the latest topic related to Lawyers and Technology. Subscribe to the RSS feed on legaltalknetwork.com or in iTunes.
The views expressed by the participants of this program are their own and do not represent the views of nor are they endorsed by Legal Talk Network, its officers, directors, employees, agents, representatives, shareholders and subsidiaries. None of the content should be considered legal advice.