Featured Guest
Ed Walters

Ed Walters is the CEO and co-founder of Fastcase, an online legal research software company based in Washington, D.C....

Your Hosts
Sharon D. Nelson

Sharon D. Nelson, Esq. is president of the digital forensics, managed information technology and cybersecurity firm Sensei Enterprises. Ms....

John W. Simek

John W. Simek is vice president of the digital forensics, managed information technology and cybersecurity firm Sensei Enterprises. He...

Episode Notes

Lawyers are excited by the potential of new tech tools using AI, but how can they be sure they’re legal and ethical? This lack of clarity has led the ABA to call for a deeper look at these tools and their use in the profession. In this Digital Detectives, hosts Sharon Nelson and John Simek are joined by Ed Walters to discuss the recently passed resolution urging the study of issues surrounding artificial intelligence. They discuss the importance of moving forward with technology in the profession while still ensuring that new tech is held accountable through audits and product transparency. They also review some of the positive and negative things going on in the AI space and give their take on its future.

Ed Walters is CEO and co-founder of Fastcase.

Special thanks to our sponsor, PInow.


Digital Detectives

ABA Passes Resolution Urging Legal Profession to Study Legal and Ethical Issues of AI





Intro: Welcome to Digital Detectives, reports from the battlefront. We will discuss computer forensics, electronic discovery and information security issues and what’s really happening in the trenches; not theory, but practical information that you can use in your law practice, right here on the Legal Talk Network.




Sharon D. Nelson: Welcome to the 106th Edition of Digital Detectives. We are glad to have you with us. I am Sharon Nelson, President of Sensei Enterprises, a digital forensics, cybersecurity, and information technology firm in Fairfax, Virginia.


John W. Simek: And I am John Simek, Vice President of Sensei Enterprises. Today on Digital Detectives our topic is, ‘ABA Passes Resolution Urging Legal Profession to Study Legal and Ethical Issues of AI.’


Sharon D. Nelson: Before we get started, I would like to thank our sponsor, PInow. If you need a private investigator you can trust, visit PInow to learn more.


John W. Simek: Today our guest is Ed Walters, the CEO of Fastcase, a leading legal publishing company based in Washington, DC. He is also an adjunct professor at Georgetown University Law Center and at Cornell Tech, where he teaches the Law of Robots, a class about the frontiers of law and technology, and, this fall, the Law of Autonomous Vehicles. He is the author and editor of 'Data-Driven Law' (Taylor & Francis, 2018).


It’s great to have you with us today, Ed.


Ed Walters: Great to be here, John.


Sharon D. Nelson: Well, before we get to the ABA Resolution itself why don’t you tell us a little bit about what you do at Fastcase and how it involves Artificial Intelligence, Ed?


Ed Walters: Sure, Sharon. Our work at Fastcase over the years has been legal research software, and we've used algorithms and good software techniques to make law more findable for lawyers for about 20 years now; this November will be our 20th anniversary. Increasingly, those software techniques really use Artificial Intelligence. We never billed Fastcase as an AI legal research service, but we do use Artificial Intelligence in a lot of ways: to prepare judicial opinions, to find data within them, to prepare things for Docket Alarm. When we download hundreds of millions of docket sheets and the documents underneath them, we use a lot of Artificial Intelligence. And then over the last year there is AI Sandbox, which uses Artificial Intelligence to help law firms structure their own data and combine it with public data from Fastcase and from Docket Alarm to really create new products.


So law firms are working with Fastcase in AI Sandbox to use that next generation of AI tools, from really simple things like natural language processing to some of the tools in the deeper end of the pool, like BERT or ELMo or Amazon Comprehend, to create a new generation of legal services for their clients and to build differentiators for law firms.


I always say it's really hard for law firms to differentiate themselves when they're competing for business. What do you say? Our firm represents the highest standards of honesty and ethics; when it's bet-the-company litigation, we're the people you can trust; we have blue-chip lawyers. It's all kind of the same thing coming from law firms.


And I think in this next generation what's going to differentiate them is tools: law firms mining their expertise to try and create new products for clients. That's what AI Sandbox is about. We're working with Artificial Intelligence to mine Fastcase data and to mine law firms' data to create that next generation of tools that help law firms differentiate themselves.


John W. Simek: All right, let's get to the ABA Resolution, Ed. Can you tell us what the resolution says and why you think the ABA felt it was needed?


Ed Walters: Sure, John. The resolution is really short. It says, I think in full: resolved, that the American Bar Association urges courts and lawyers to address the emerging ethical and legal issues related to the usage of Artificial Intelligence (AI) in the practice of law, including: one, bias, explainability and transparency of automated decisions made by AI; two, ethical and beneficial usage of AI; and three, controls and oversight of AI and the vendors that provide AI.


I mean, that's the resolution. I actually wasn't at the Annual Meeting, but I have a lot of friends who were there and who followed this very closely, and from what I understand, the idea is that law firms are using AI tools increasingly.




Some of them are point solutions, software that has AI built into it, but we have also seen a kind of maker movement in law firms, where people are creating their own tools. I think the ABA wanted to make sure that the Association is out ahead of this, that we have ethical standards for using Artificial Intelligence in the practice of law, to make sure that the tools people are building are used ethically.


I also hope the idea is that the ABA is signaling in a strong way that AI is presumptively ethical, that it is okay to use Artificial Intelligence. If I can climb on my soapbox here for a second, John, you are used to me doing this: for a long time law firms have said, we can't adopt new technology because we are worried about the ethical implications of it, and until it has been deemed presumptively ethical we are not going to change.


And I think in some ways law firms have used ethics as an anchor to hold them back from moving forward. Anchors are good; anchors are important, especially in changing times. But sails are important too, and I think in a lot of ways ethics can be used as both. There are explicitly ethical provisions in the Model Rules of Professional Conduct that are designed to be used as sails.


So if you look at Rule 1.1, a lawyer's duty of competence, and specifically Comment 8 to that rule, it says that lawyers have a duty to be reasonably competent in the technologies that are important for practice, and Comment 8 says that includes software technology. It doesn't say you have to use any specific piece of software, but it does say that as times change, and as technology changes, especially the technology used by clients, lawyers have an ethical responsibility to keep up.


Rule 1.3 of the Model Rules is similar. It imposes a duty of diligence, which means you have to be reasonably diligent in the practice of law, and I think that means that if there is a better way, a more efficient way, a more accurate way to accomplish a legal task with software, you are required by the Model Rules to use it.


So what I imagine here is something like electronic discovery. If there's a discovery project in litigation that involves tens of thousands, even tens of millions, of pages of data, hiring paralegals to manually go through tens of millions of sheets of paper one at a time is probably a breach of the duty of diligence in a world where e-discovery can do that faster, cheaper and more accurately.


And I think Artificial Intelligence will be the same way. Not that we'll have to hire a robot lawyer or something, but there will be plenty of places where AI can garner insights that lawyers won't be able to, and where we can show that those insights are more accurate or more helpful for clients.


I think ethics will be a sail. The rules will require us to move forward. So I think it's important to see those ethical rules as both an anchor to hold the profession in place in times of great change and a sail to move law firms forward in times when the rest of the world is moving forward, and AI is right in the center of that.


So I hope the ABA has both things in mind: that we're using this resolution as an anchor to make sure that all of the honesty and integrity and transparency of legal services over time, all of the ethical responsibilities, are projected onto this new frontier; and I hope they're saying as well that Artificial Intelligence is presumptively ethical, and that where AI can help us serve clients better, more accurately and more efficiently, we should be using it. It's a sail to propel the practice forward.


John W. Simek: As a graduate of the U.S. Merchant Marine Academy I want to thank you for that maritime analogy.


Ed Walters: You see what I did there? I want to make sure that I'm meeting you where you live.


Sharon D. Nelson: Well, it's funny, Ed. I don't know if AI is presumptively ethical the way they're looking at it. I think there are conditions to be met to make sure that it complies with ethics, and we've seen issues where AI has not been very ethical because it has not been transparent. The vendors won't talk about how it works; they call that a black box, because you don't really know how it works, and I think that's part of what this committee is trying to address. In other words, there are issues of ethics in AI that I think both of us find troubling when it comes to the practice of law. So maybe you could talk a little bit about those.




Ed Walters: Sure. I think black box decision-making is one thing that really concerns me. A lot of Artificial Intelligence involves things like regression analysis; it's not super-sophisticated. And you can have all kinds of problems with AI, like reading past decisions as a guide to future ones when those past decisions are full of problems, full of bias.


We took a little look here at Fastcase at trying to really understand the law with Artificial Intelligence, and one of the challenges we came up against right away was that some large amount of American law, probably two-thirds of it, predates the Civil Rights era. And so a lot of that law will be not only wrong but misogynistic or racist. Bias in the data set is really important.


There are also all kinds of errors in statistical analysis and Artificial Intelligence, and it's important, especially in the early days, to be able to audit the decision path of an Artificial Intelligence to see if you can find those errors.


I'll give you one easy example of this, from the most famous man-versus-machine contest of our age, when IBM Watson competed against Ken Jennings and Brad Rutter on Jeopardy. In a Final Jeopardy segment, the category was U.S. Cities, and the clue was: its largest airport was named for a World War II hero, its second largest for a World War II battle. So what U.S. city is going to be the question for this answer?


Now, IBM Watson has these algorithms that decide when to buzz in or not; if it's not very confident in the answer, it won't buzz in and embarrass itself. But that's not how Final Jeopardy works. In Final Jeopardy, you have to answer the question.


So for the category U.S. Cities, with the clue, its largest airport was named for a World War II hero, its second largest for a World War II battle, Watson answered: what is Toronto. IBM Watson isn't like a database; it's not searching within a category of U.S. cities. There's a lot of relational logic used to try and answer these questions, and the IBM Watson team was able to go back afterwards and audit the decision logic. So they knew that Watson had a very low confidence level. They knew that Toronto has a team that plays in the American League in baseball.


So there were a lot of factors that went into that answer that IBM was able to audit because it could see inside Watson's decision-making process. But it was a mistake, and AI will make mistakes like the Toronto decision. The problem is we can't always see what's under the hood.


I'll give you another example that illustrates that problem in kind of practical terms for lawyers. There's a case in Wisconsin called State v. Loomis, where the court was using Artificial Intelligence to set bail. The defense lawyer said, I think this bail is high and I'd like to see the basis you used to set it. And the court said, well, it's AI, so I can't crack the thing open for you. But the lawyer pressed; he asked the court, go to the vendor who created this bail software and show me what the software used to make this decision, because my client isn't dangerous, he doesn't present a flight risk, so why is the bail so high?


So the court went to the vendor, and the vendor said, that's our proprietary trade secret, you can't audit it, we won't open it for the court, get lost. And this was the basis for an appeal that went all the way up to the Wisconsin Supreme Court, and the Wisconsin Supreme Court said, yes, this is a black box. But then came what I consider kind of a punt.


It said the court didn't strictly use the software to make the decision; the court took the advice of the software under advisement and made its own independent decision, which happened to be the exact bail the software suggested.


So to this day we can't see what factors were used to set the bail in that case. Now, the court was using software to set bail in order to avoid bias, to take factors like race or class out of the decision, and to use software in a kind of cold, unbiased way to set a bail that only considers permissible factors.




But the question is, what factors did it consider, and do those track with race and class? Is it based on the zip code of the defendant, or some other thing that tracks a protected class status? We just don't know, and that's really the explainability issue for AI. I hope that a lot of the solutions that use AI in law will give some sort of an audit trail.


There's a trend in computer science right now called XAI, or Explainable AI, that puts a kind of audit layer on top of the decisions and will explain, step by step, how the AI reached certain conclusions. But that black box concern really is important, especially because we know Artificial Intelligence won't be infallible. There will be places where it's wrong, and the ability to see inside the black box feels important.
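The audit-trail idea Ed describes can be pictured with a toy sketch. The factors and weights below are entirely hypothetical, not any real bail or risk model: a simple linear scorer that logs each input's contribution, so a reviewer can see exactly why the score came out the way it did.

```python
# Toy "explainable AI" sketch: a linear scorer that emits an audit trail
# showing each factor's contribution. Factors and weights are invented
# purely for illustration of the audit-layer idea.

WEIGHTS = {
    "prior_failures_to_appear": 2.0,
    "pending_charges": 1.5,
    "years_at_current_address": -0.5,  # stability lowers the score
}

def score_with_audit(factors):
    """Return (score, audit); audit lists each factor's contribution."""
    audit = []
    score = 0.0
    for name, weight in WEIGHTS.items():
        value = factors.get(name, 0)
        contribution = weight * value
        score += contribution
        audit.append((name, value, weight, contribution))
    return score, audit

score, audit = score_with_audit(
    {"prior_failures_to_appear": 1,
     "pending_charges": 2,
     "years_at_current_address": 4}
)
for name, value, weight, contribution in audit:
    print(f"{name}: {value} x {weight:+.1f} = {contribution:+.1f}")
print(f"total score: {score:.1f}")
```

A real XAI system is far more involved, but the principle is the same: every output ships with the trail of reasons behind it, which is exactly what the Loomis defense could not get.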


Sharon D. Nelson: Ed, I think there's a follow-up to that story. If I remember correctly, Loomis was in fact sentenced to six years in prison, and COMPAS played a part in that as well; that was appealed, but cert was denied. So I think that's an even greater issue, putting somebody in prison, and I would wager, though I don't know, that he was regarded as a greater risk because he was African-American. Isn't that correct? Am I wrong about that?


Ed Walters: I don’t have the follow-up. So I think presumptively you’re almost always right, let’s just assume you’re right here.


Sharon D. Nelson: Well, I do know he got six years in prison, and I know that a lot of these black boxes are fed with old decisions, and the old decisions were racially biased, which means that the new predictions are racially biased. And the crime he was charged with didn't require any prison sentence at all, in any event.


I'm sure it's clear to our listeners that there are major-league problems. It's garbage in, garbage out again, depending on what we're feeding these things. If we're feeding them 200 years of racially motivated judicial decisions, that's probably not going to result in a fair analysis under our way of thinking today.


Ed Walters: Can I share one other concern that I have about this?


Sharon D. Nelson: Yeah, sure.


Ed Walters: Smart machines that make us dumb, I think, is the topic header. I have an 11-year-old son, and I wonder if he's ever going to learn to drive. By the time he is driving there will be rideshare and maybe fully autonomous cars, so he may never learn to drive. But there may be circumstances in his life where he has to drive, and I confess that I worry about that a little bit.


And so in our world, when machines set bail, will courts remember how to do it? Will they even be able to audit the decisions made by software? Will judges understand how these systems work well enough to override the bad decisions? And the same goes for the practice of law. There are all kinds of services now that purport to use AI to assist lawyers in certain tasks, some better than others. Imagine, in my world of legal research, that you type a question into a legal research service and get out an answer. Do you know when to trust that answer, or when to question it, if that's all you've ever done?


I can use a legal research service today, run a keyword search, and then evaluate the search results and try to figure out which ones are right for my case. If we move to a more AI-based system, we will probably get some very interesting insights and some different answers that we wouldn't get with clunky keyword searches.


But if we lose the ability to independently verify them, have we lost something important? Will we be more subject to Toronto-like mistakes by AI if we're unable to independently verify the answers that come out?


So during this time when we use AI tools to derive newer and better insights and new products for our clients, which I think is really, really important, and I'm a huge booster for AI, I want to make sure that we don't lose that important human judgment aspect at the same time.


I saw a story recently about someone who was driving her car using Google Maps, and Google Maps said go straight. She was in a neighborhood she didn't know very well, and so she drove straight down a boat launch into Georgian Bay. The map said go straight, and so she kept going straight and never even really looked up; apparently she never hit the brakes at all, just drove the car straight in. She had to be pulled out of the car; she was fine.




But she over-relied on artificial intelligence and wasn’t able to make a very common human judgment that she would have under any other circumstance, but over-relying on artificial intelligence she made a bad mistake.


And so I think the best researchers in the field right now are saying that we shouldn't think about artificial intelligence like C-3PO, an autonomous individual agent; this is Dazza Greenwood's analogy. We should think of it like Iron Man: augmenting technology that sits on top of human judgment, that informs human judgment, that helps humans make better decisions, but with humans still making the decisions. That's what we want to preserve, and I think that may be what the ABA resolution is pointing at.


John W. Simek: Ed, do you think that's really the realistic solution to the things we're talking about? Is it augmentation, as opposed to letting the machines decide? We're into WarGames territory, right, where they can launch the thermonuclear missiles. Is that really the solution, AI sitting on top of the humans, or do you have other thoughts? Because at the end of the day, we live in a very capitalistic world, and money is going to drive something somewhere.


Ed Walters: Right. Well, I have to tell you, I am a bit of an optimist about this. What we have right now in many cases is legal services delivered by hunch. The hunch is the way that we deliver a lot of legal advice. When clients ask us questions like, how much is this going to cost, or should I accept this settlement offer, or is this contract provision market, lawyers will say, based on my experience, I think you should not take the settlement offer; I think we are going to win at trial. Based on what? Based on four past engagements? Sure, that's experience, but that's really a very small dataset, right?


And so one thing that is very promising about analytics and Artificial Intelligence is that it has the potential to augment these decisions, to put a data layer over them. So you can see: look, the firm has handled 34 of these matters in the past; the lowest cost was $14,000 and the highest was $54,000; the mean is $32,000 and the median is $29,000; and based on this distribution, we think there is an 80% chance that this matter is going to cost you between $27,000 and $34,000. That's a much more complete answer than, it's going to cost you some unknown number of hours times $290.
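The kind of answer Ed describes is just descriptive statistics over the firm's own history. A minimal sketch, with invented matter costs (and taking the "80% chance" band as the sample's 10th-to-90th percentile range):

```python
# Sketch of data-driven cost guidance: summary statistics over a firm's
# past matter costs. All figures are invented for illustration.
import statistics

past_costs = [14_000, 21_500, 27_000, 29_000, 31_000, 33_500, 36_000, 54_000]

lo, hi = min(past_costs), max(past_costs)
mean = statistics.mean(past_costs)
median = statistics.median(past_costs)

# An "80% chance" band, taken here as the 10th-90th percentile of the sample.
deciles = statistics.quantiles(past_costs, n=10)
p10, p90 = deciles[0], deciles[-1]

print(f"n={len(past_costs)}, range=${lo:,}-${hi:,}")
print(f"mean=${mean:,.0f}, median=${median:,.0f}")
print(f"80% band: ${p10:,.0f} to ${p90:,.0f}")
```

With a real sample of 34 matters rather than 8, the percentile band tightens and becomes the kind of concrete range a client can plan around.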


So there are data and data analytics that underlie so much of legal practice; we just can't see them. These are not things that are unknowable, they are just unknown. And so the hope is that some augmented reality, some tools that allow us to see more, will help us to give better advice.


If someone says, should I take the settlement offer, I would want to know that settlement offer, how it compares to other settlement offers in cases like this. Is it in the 91st percentile or is it in the 31st percentile? And I would also want to know, what are our chances of winning at trial? Is there a 92% chance of winning at trial or an 8% chance of winning at trial? Those all exist. All of those percentages exist today, we just don’t know what they are.


And so with a little bit of data-driven law thinking, with some maybe AI-based extraction, we can augment lawyer decision making and I think really deliver better legal services, and I think, the conjecture is, in a good market for legal services, clients will pay more for legal services that are based on data instead of hunches, which will mean that people will do them more.


John W. Simek: Well, before we move on to our next segment, let’s take a quick commercial break.




Advertiser: Does your law firm need an investigator for a background check, civil investigation or other type of investigation? PInow is a one-of-a-kind resource for locating investigators anywhere in the U.S. and worldwide. The professionals listed on PInow understand the legal constraints of an investigation, are up-to-date on the latest technology, and have extensive experience in many types of investigation, including workers' compensation and surveillance. Find a prescreened private investigator today. Visit PInow.






Sharon D. Nelson: Welcome back to Digital Detectives on the Legal Talk Network. Today our topic is ABA Passes Resolution Urging Legal Profession to Study Legal and Ethical Issues of AI. And our guest is Ed Walters, the CEO of Fastcase, a legal publishing company based in Washington DC.


Ed, we had a conversation about this subject today at lunch. We have a particular concern that I think we both share about Deepfake videos and audios and their potential to influence the 2020 election. What’s your take on that subject, which I already know, but our listeners don’t?


Ed Walters: You know, for the longest time we have had a crisis of confidence, dating back maybe to Watergate; people distrust government officials, and the media was really a check on that. The media was allied with people in their quest for the truth. And then for the last few years we have had an attack on that, an attack on the media as a source of truth; you see a lot of talk of fake news or media bias.


And I think there is a very strong confirmation bias now. People will only believe the news if it reports what they believe, or people will only watch the news that reports what they believe. So you see like kind of a fragmentation of American society, where people’s accounts of things happening in the world are very, very different and so it reinforces those confirmation biases and divides us even further.


When you add Deepfakes on top of that, I mean I really worry, people who now will say I only believe it if I see it, in a world of Deepfakes they might not even be able to believe that, which means that they won’t believe news, they won’t believe video, they won’t believe even verifiable facts, and so people will only believe maybe what they believe. In a world full of Deepfakes I think the potential is that everything could be seen as fake news. There is no way of independently verifying the truth. If you don’t believe in the media, if you don’t believe what you can see, how do you believe anything?


So I think the only thing we are left with is the bottomless pit of gut instinct and a kind of inherent bias, and that's not political bias either; I think that's true on the right and the left. There really won't be any way to believe what you see or read or hear. That's terrifying. The truth is very important. We have a lot of important decisions to make in the country and in the world, and we will have to make them based on facts. That kind of evidence-based decision making is really under siege, and I think Deepfakes threaten to make it even worse.


Sharon D. Nelson: Well, you know, we certainly agree with that. The only hope we have is that, as we work on detecting Deepfakes, detection becomes more reliable. But I think there are 100 people working on better Deepfake technology for every one person working on the defense, so I think we are probably in trouble. I think you are right.


Ed Walters: Can I put you in the guest seat? John, what's our best hope to detect Deepfakes? Do they leave a signature?


John W. Simek: Well, as Sharon said, you have basically got AI against AI, a machine against a machine, and the detection machines are woefully behind. There are so many fewer of them than there are machines generating fakes. It's not encouraging.


Ed Walters: Should we make them illegal?


Sharon D. Nelson: Well, to make them illegal, you first have to define them, and you have to know that something is a Deepfake, and that's the hardest thing. The detectors are actually being used in conjunction with the generators: there is the generator and then there is the detector, and the detector teaches the generator, in effect, I can detect this, therefore you need to fix it. So they are working in collaboration with one another, which is even scarier. And the people who are only interested in detection for good purposes, there just aren't enough of them and the technology is not advanced enough. But they are getting better, and I do still have hope, because I am Rebecca of Sunnybrook Farm personified; I do still have hope that they are finding ways to identify these things a greater percentage of the time.


John W. Simek: But I think, Ed, to answer your question, there is a camp that is actually considering doing something like digital fingerprinting, watermarking, or digital rights management, those kinds of things. So if you are looking at a video or an audio and it doesn't possess a signature mark, then you have to assume that it's not authentic. There is a move afoot to go in that direction as well.
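One way to picture the signature-mark idea John describes is a keyed tag attached to the media at creation time that any viewer can later verify. A minimal sketch using an HMAC; the key name and handling here are hypothetical, and real schemes involve certificates and standardized container formats:

```python
# Minimal sketch of "no signature mark, assume it's not authentic":
# the publisher signs the media bytes with a secret key at creation time,
# and a verifier recomputes the tag. Key distribution, certificates, and
# the real media container format are all glossed over here.
import hashlib
import hmac

PUBLISHER_KEY = b"hypothetical-publisher-key"

def sign(media_bytes: bytes) -> bytes:
    return hmac.new(PUBLISHER_KEY, media_bytes, hashlib.sha256).digest()

def verify(media_bytes: bytes, tag: bytes) -> bool:
    # compare_digest avoids timing side channels in the comparison
    return hmac.compare_digest(sign(media_bytes), tag)

video = b"...original footage bytes..."
tag = sign(video)

print(verify(video, tag))                    # untampered footage
print(verify(video + b"tampered", tag))      # altered footage
```

The policy John sketches then becomes simple: media that arrives without a verifiable tag is treated as unauthenticated by default.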


Ed Walters: What if we just required people to disclose whenever they were using Deepfake technology in the output? Maybe don't make it illegal, but just have a sunshine law.


John W. Simek: Yeah, the bad guys will do that every time, right?




Sharon D. Nelson: By the time we figure this out it will be too late, that’s what we are most worried about, by the time we get it figured out the election will be over.


John W. Simek: That's a good segue though, Ed. My next question is, what are we getting right with AI and what are we getting wrong?


Ed Walters: Well, one thing I really like in AI right now, especially AI in law, is that there is a movement afoot to bring unsexy back. What I love about this is that there are all kinds of AI projects that are decidedly not shiny; people are using AI to do some very mundane tasks, but tasks that are really important.


A good example of this is categorization. Law firms have these document management systems where every document in the firm is supposed to be tagged with certain attributes: who the author is, what the date is, what the client and matter are, what the thing is. Lawyers are terrible about this. They are terrible about tagging their own documents. And if they did tag them, these document management repositories would be super useful. You could drive analytics, you could use AI to analyze them, you could do some really wonderful things to expose that expertise for clients. But because the documents aren't tagged, you kind of can't do it.


So there are firms who are saying, look, and this is a couple of Fastcase AI Sandbox projects right now, we will go through our firm's document repository and use AI to tag things, like who the client is or who the author is. That is not a flying-car future project, but it's a great use of Artificial Intelligence. The documents need to be tagged to be useful, and the alternative is a group of very frustrated, bleary-eyed people going through document by document, hand-tagging them; people who get sleepy or come to work hungover, who get sick or have a sick kid, and who make all kinds of mistakes. Or AI, which is very consistent and very rules-based.
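The consistency point can be pictured with a tiny rules-based tagger. The field labels and patterns below are hypothetical, and real projects often use trained models rather than fixed regexes, but the argument is the same: the rules run identically on document one and document one million.

```python
# Toy rules-based document tagger: pull author/client/date fields out of
# a document's text with fixed patterns. Field labels and formats are
# hypothetical; the point is the tagger never gets tired or inconsistent.
import re

PATTERNS = {
    "author": re.compile(r"^Author:\s*(.+)$", re.MULTILINE),
    "client": re.compile(r"^Client:\s*(.+)$", re.MULTILINE),
    "date": re.compile(r"\b(\d{4}-\d{2}-\d{2})\b"),
}

def tag_document(text: str) -> dict:
    """Return whichever tags the fixed patterns can find in the text."""
    tags = {}
    for field, pattern in PATTERNS.items():
        match = pattern.search(text)
        if match:
            tags[field] = match.group(1).strip()
    return tags

doc = """Author: A. Associate
Client: Acme Corp.
Memorandum dated 2019-08-12 regarding indemnification."""

print(tag_document(doc))
```

Run over an entire repository, a tagger like this produces the structured metadata layer that the analytics and AI tools Ed mentions depend on.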


And the technology really isn't that different from electronic discovery; it's pretty well proven out. If you were trying to create a company around this, it's probably not sexy enough to attract the attention of VCs or anything, but inside of law firms, that's an awesome use of AI, and I think that unsexy AI project is something that we are really getting right.


Here is what I worry that we are getting wrong. I worry that we are overhyping AI, pumping too many expectations into it. There is this cognitive bias: we sort of say, because IBM Watson can beat Ken Jennings and Brad Rutter in Jeopardy, and because I can't beat them in Jeopardy, IBM Watson must be smarter than me at everything I can possibly do; it can do everything I can do better than I can, because I can't beat Ken Jennings or Brad Rutter in Jeopardy.


Well, that's not how it works, right? Jeopardy is a micro-world experiment. They actually had to build the Jeopardy set at IBM to play that game, and outside of that game IBM Watson isn't anything like artificial general intelligence, but we keep building it up that way.


If you read press accounts, they are breathless about robot lawyers. Every time someone builds a tool to analyze contracts, the story has to run with a picture of an animatronic hand shaking a well-suited lawyer's hand, and the leads are all, pretty soon artificial intelligence is going to replace lawyers in America. So I worry that the overhype from companies and the kind of breathless accounts in the media will give people unrealistic expectations about how we can use AI in practice, and when people see that the actual tools are useful but unsexy, that there isn't some robot lawyer that walks in and sits down at the desk, they will be disappointed.


And the real good that we can do with artificial intelligence, extending legal services to places we haven’t reached, making legal knowledge more broadly distributed like Wikipedia did, all of that promise could be flushed when you have a backlash against overhyped AI. You can see it already; people are actually rooting against artificial intelligence because they are so tired of the hype around it.


And so I think that’s something we are getting wrong. We need to be much more clear-eyed in press accounts of AI, and I think we as a profession need to appreciate what it does and what it doesn’t do. Please, no more talking about robot lawyers; it’s a scam.




Sharon D. Nelson: I think I have heard you say that in many lectures. Let me see if I can ask the question that I most wanted you to answer in this podcast. You know, of course, Ed, the series of stories written by Isaac Asimov and collected under the title ‘I, Robot’; the story of evolving AI did not end well for the humans in those stories. And for listeners who don’t know, the ‘I, Robot’ movie draws, and only loosely, on one of the stories; you really need to read the entire book to understand how AI brought humankind a dystopian future. It really is a remarkable book, not so much for the writing as for the amazing ideas the author had.


So do you think that dystopian future is possible, Ed, and what can we do to avoid even the possibility of such an ending? Because I do fear, not today, not tomorrow, but over time, what AI is capable of if in fact it becomes unleashed.


Ed Walters: Yeah, that’s the question, isn’t it, I mean is Isaac Asimov writing fiction or history?


Sharon D. Nelson: Well, from the point of view in the book it was history.


Ed Walters: Right. I mean so much of the storytelling, and this is like from the 50s and 60s, right, so much of the storytelling got so much right. One of my favorite stories in that anthology, in the ‘I, Robot’ anthology is the last story, which is called The Evitable Conflict, and I won’t include any spoilers because it’s a great read, but the thesis is that people have handed over so many decisions and so much of the operation of the economy to robots, to artificial intelligence that nobody understands how it works anymore, and when things start to go wrong nobody really can understand why or how.


And that might seem outlandish, but think for a second of how much of the Dow Jones Industrial Average on any day is traded by machines? How much of the NASDAQ is traded every day by software? If the market starts to climb out of control or fall out of control, can we really even understand how or why it did that?


There was a Flash Crash in 2010 that was caused by a guy in his pajamas in England. I don’t know if you remember this. He had this scam where he would place a bunch of sell offers for stock that were artificially high, and he knew they wouldn’t be met. He was offering to sell at a price higher than the market price for that day, so he knew no one was going to buy, and that was fine because he didn’t have the stock anyway. The market would see those offers and raise or lower the price algorithmically, and then he would arbitrage that and make a bunch of money.


Well, one day he does this and his algorithm hits a brand-new algorithm on Wall Street in the wrong way, and the stock market lost a trillion dollars in value in about 30 minutes. They had to suspend trading and no one understood why. So you have now probably 10,000 of these stock trading algorithms on any given day interacting completely unpredictably in a way that affects trillions of dollars of value. Your retirement rests on all of these stock trading algorithms playing nicely with each other, but no one in truth knows how they will interact.


If there is a rumor about a rise in interest rates, no one really knows what’s going to happen to the stock trading algorithms. And this is just one example; there are many. There are lots of systems built on software that people can’t even read anymore. Some of it is written in Fortran or COBOL, and large parts of our economy, like the Social Security system, depend on this software operating well, but if you had to audit it, there is literally almost nobody who can go through and tell you where the mistakes are.


And so this is the kind of dystopia that Asimov was worried about: that we trust and hand over so much of the decision making, and so much of the control of really important systems, to software we don’t fully understand, that inevitably, over time, things really will go wrong and we won’t know how to fix them. This is smart machines that make us stupid, run amok, and this is what I really worry about.


This came to a head 20 years ago with the big Y2K scare, where we were worried that all of these systems that run our economy, that run the Social Security system, that run the IRS, were built on mainframes that saved the year as a two-digit integer instead of a four-digit integer, and that they wouldn’t be able to handle the year 2000; they would read it as the year 1900.




And we had to do this gigantic audit of all of our software in 1999 to make sure that we understood how it still worked. But we really haven’t done it again since, and the software has gotten more complex. It is layered, layer upon layer, on old legacy code all over the place, and by the way, it’s not infallible; we make all kinds of mistakes.
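The two-digit-year bug behind the Y2K scare can be sketched in a few lines. This is a hypothetical illustration, not any specific system's code; the function names are invented for the example:

```python
# Old mainframe convention: the year is stored as two digits,
# and code assumes the century is the 1900s.
def parse_year_two_digit(yy):
    """Interpret a two-digit stored year as 19YY - the Y2K bug."""
    return 1900 + yy

# One common remediation was "windowing": pick a pivot so that
# small values map to the 2000s and large ones to the 1900s.
def parse_year_windowed(yy, pivot=50):
    """Interpret YY < pivot as 20YY, otherwise as 19YY."""
    return 2000 + yy if yy < pivot else 1900 + yy

print(parse_year_two_digit(99))  # 1999 - works fine
print(parse_year_two_digit(0))   # 1900 - the year 2000 misread as 1900
print(parse_year_windowed(0))    # 2000 - the windowed fix
```

The bug is trivial to see in ten lines; the 1999 audit Walters describes was hard because the same assumption was buried in millions of lines of legacy code.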


And so I think the best defense against that kind of world is explainable AI.


Sharon D. Nelson: Well, that’s what this resolution is all about, right?


Ed Walters: That’s right, explainability; it’s right up front in the very first section. And by the way, on the ethical usage of AI, I just wrote a law review article about this. It’s in the Georgia State University Law Review; if you want to link to it in the show notes, I will send you a link. It talks about the Model Rules of Professional Conduct and how they apply to lawyers using artificial intelligence.


Sharon D. Nelson: That would be great. Send us a link, if you will. And we just can’t thank you enough for being with us today Ed. Well, first of all, we enjoyed the tour of your office, we enjoyed lunch, and then to get to talk to you some more, that’s a trifecta, so thank you.


I know our audience really listened and enjoyed everything you had to say. It’s always thought-provoking and there is always an element of comedy in there as well. So thank you very much Ed.


Ed Walters: Well, thank you Sharon and John, it’s always fun to be on your show.


John W. Simek: Well, that does it for this edition of Digital Detectives. And remember, you can subscribe to all the editions of this podcast at or in Apple Podcasts. If you enjoyed our podcast, please rate us on Apple Podcasts.


Sharon D. Nelson: And you can find out more about Sensei’s digital forensics technology, and cybersecurity services at


We will see you next time on Digital Detectives.




Outro: Thanks for listening to Digital Detectives on the Legal Talk Network. Check out some of our other podcasts on and in iTunes.





Episode Details
Published: August 29, 2019
Podcast: Digital Detectives
Category: Legal Technology