Published: January 30, 2020
Podcast: Digital Detectives
Category: Legal Technology
Is ethical artificial intelligence even possible? In response to growing concerns about issues with transparency, privacy, and AI bias, the American Bar Association passed Resolution 112 on August 12, 2019, which urged courts and lawyers to address the emerging ethical and legal issues related to the use of AI in the practice of law. Digital Detectives hosts John Simek and Sharon Nelson talk with Michael Mills about the landscape of AI in the legal space and what is being done to ensure the ethical and accountable use of AI technologies in the profession.
Michael Mills is the co-founder and chief strategy officer of Neota Logic, Inc.
Special thanks to our sponsor, Logikcull.
Digital Detectives
The Intersection of Ethics and Artificial Intelligence
01/30/2020
[Music]
Intro: Welcome to Digital Detectives, reports from the battlefront. We will discuss computer forensics, electronic discovery and information security issues and what’s really happening in the trenches; not theory, but practical information that you can use in your law practice, right here on the Legal Talk Network.
[Music]
Sharon D. Nelson: Welcome to the 111th edition of Digital Detectives. We’re glad to have you with us. I’m Sharon Nelson, President of Sensei Enterprises, a digital forensics, cybersecurity and information technology firm in Fairfax, Virginia.
John W. Simek: And I’m John Simek, Vice-President of Sensei Enterprises. Today on Digital Detectives our topic is The Intersection of Ethics and Artificial Intelligence.
Sharon D. Nelson: Before we get started, I’d like to thank our sponsor. Thanks to our sponsor Logikcull, instant discovery software for modern legal teams. Logikcull offers perfectly predictable pricing at just $250 per matter per month. Create your free account at any time at logikcull.com/ltn, that’s logikcull.com/ltn.
John W. Simek: Today our guest is Michael Mills, the Co-Founder and Chief Strategy Officer at Neota Logic. Before that, he was the Innovation and Technology Leader at Davis Polk and a partner at Mayer Brown. Mr. Mills writes and speaks often on artificial intelligence and other technologies and strategies to improve the delivery of legal services. He is a Director of Pro Bono Net and President-Elect of the College of Law Practice Management.
It’s great to have you with us today Michael.
Michael Mills: Thank you Sharon. Thank you John. I’m delighted to join you for this conversation.
Sharon D. Nelson: Well Michael, why don’t we begin with you telling us a little bit about your company Neota Logic and how it uses artificial intelligence?
Michael Mills: Sure. Neota Logic is one of the pioneers in AI in the law. My partners and I, all of whom are lawyers, thought that the technologies of AI could help improve the practice of law, both in private practice and in the public domain. We build a no-code development platform that allows people to build applications that use AI, both traditional symbolic, logical, rules-based AI and machine learning through the incorporation of other tools. We work mostly with law firms and corporate legal departments, although there’s nothing legal-specific about our software.
John W. Simek: So Michael, tell our listeners a little bit about how AI, as you see it, is being used in the practice of law today.
Michael Mills: It’s come a long way. When I first got into this field, I think you could say that AI, particularly the most advanced and current forms of AI, was being used almost exclusively in electronic discovery, but since then it has spread to a number of areas, and I think I’ve gotten up to seven of them now.
In electronic discovery, of course, machine learning and technology-assisted review have been around for quite some time and are well-proven technologies. We’ll get to some of the implications of that when we talk about the ABA rules.
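To make that concrete, here is a minimal sketch of how technology-assisted review works under the hood, assuming scikit-learn is available; the documents, labels and responsiveness calls are all hypothetical:

```python
# Technology-assisted review (TAR) in miniature: train a text classifier
# on documents a human reviewer has already tagged, then rank the
# unreviewed documents by predicted relevance. All data is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

reviewed_docs = [
    "email re merger negotiations with Acme",
    "lunch order for the team offsite",
    "draft term sheet attached for review",
    "holiday party logistics and parking",
]
labels = [1, 0, 1, 0]  # 1 = responsive, 0 = not responsive (reviewer's calls)

unreviewed_docs = [
    "revised merger agreement redline",
    "fantasy football league reminder",
]

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(reviewed_docs)
model = LogisticRegression().fit(X_train, labels)

# Score the unreviewed set; high-probability documents go to reviewers first.
scores = model.predict_proba(vectorizer.transform(unreviewed_docs))[:, 1]
for doc, p in sorted(zip(unreviewed_docs, scores), key=lambda t: -t[1]):
    print(f"{p:.2f}  {doc}")
```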
Legal research, in the early days of the decade, wasn’t terribly visible in the AI world, and yet we all knew that the big companies like Thomson Reuters and Bloomberg and LexisNexis were investing heavily in AI research, and startup companies came along to bring the most advanced techniques into legal research. So there’s a great deal of AI work in legal research that users don’t see as AI; they see it simply as better, faster, more pointed, more useful search results.
There’s a lot of work being done in predictive analytics, taking data about court processes and outcomes of prior cases and trying to predict outcomes for future cases based on those past cases. There’s similar work in predictive analytics being done in the operation of the legal system, predicting pricing in matters, predicting budgets and so on, and indeed there’s some predictive work being done in government processes.
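A toy sketch of that kind of outcome prediction from structured case features, again assuming scikit-learn; the courts, claims, amounts and outcomes below are invented purely for illustration:

```python
# Predictive analytics in miniature: fit a model on past cases described
# by structured features, then estimate an outcome probability for a new
# case. Every feature and outcome here is hypothetical.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

past_cases = [
    {"court": "SDNY", "claim": "breach_of_contract", "amount_k": 250},
    {"court": "SDNY", "claim": "fraud", "amount_k": 900},
    {"court": "EDVA", "claim": "breach_of_contract", "amount_k": 120},
    {"court": "EDVA", "claim": "fraud", "amount_k": 640},
]
outcomes = [1, 0, 1, 0]  # 1 = plaintiff prevailed, 0 = defendant prevailed

vec = DictVectorizer()  # one-hot encodes strings, passes numbers through
model = LogisticRegression().fit(vec.fit_transform(past_cases), outcomes)

new_case = {"court": "SDNY", "claim": "breach_of_contract", "amount_k": 300}
p = model.predict_proba(vec.transform([new_case]))[0, 1]
print(f"Estimated probability plaintiff prevails: {p:.2f}")
```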
Sharon D. Nelson: Do you remember the day Michael, when you had all of the companies who worked in AI on a single screen?
Michael Mills: I do, and that map, which was pretty simple, has grown now to look a bit like the New York City subway system. There are new branches and there are many, many companies in the space. There’s been lots of investment in advanced technologies, and particularly AI, in the profession over the last decade, not only by the big players whom I mentioned, who were kind of invisible for a number of years and then burst forth with products like Westlaw’s analytics, but also by others, Fastcase and Casetext for example; many of them suddenly appeared.
But there are now people doing highly speculative work in AI and the law. A company in Toronto, for example, called Blue J Legal is predicting case outcomes based on analysis of prior cases.
I think in addition to the predictive analytics, there’s a tremendous amount of attention being given to the analysis of contracts. In a way you can think of it as applying the techniques of electronic discovery to the review not of correspondence and other documents relevant to a litigation, but of contracts. If you have a portfolio of real estate leases and you want to understand what’s in them, you’re doing a due diligence exercise; you want to know where the risks are.
You can use techniques that are, under the hood, very similar to the techniques of e-discovery to do that work of reviewing documents in the corporate space effectively and with tremendous efficiency gains. Lots of investment, lots of new players, lots of now, I think, well-proven approaches to doing that.
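As a concrete illustration, here is a minimal rule-based sketch of that kind of lease review; the clause patterns and lease text are hypothetical, and production tools pair rules like these with trained classifiers, much as e-discovery does:

```python
# Rule-based contract review in miniature: scan a portfolio of leases for
# clauses that commonly carry risk in a due diligence exercise.
import re

RISK_PATTERNS = {
    "change_of_control": re.compile(r"change\s+of\s+control", re.I),
    "assignment_restriction": re.compile(r"may\s+not\s+assign", re.I),
    "landlord_termination": re.compile(r"landlord\s+may\s+terminate", re.I),
}

def review_lease(text: str) -> list[str]:
    """Return the risk categories whose patterns appear in the lease."""
    return [label for label, pat in RISK_PATTERNS.items() if pat.search(text)]

portfolio = {
    "100 Main St": "Tenant may not assign this Lease without Landlord consent.",
    "55 Market St": "Upon a change of control of Tenant, Landlord may terminate.",
}
for name, text in portfolio.items():
    print(name, "->", review_lease(text) or ["no flags"])
```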
Natural language generation, a technology that is at the leading edge of AI, has quite recently come to be used in the legal profession. There are some folks doing interesting work actually drafting answers to complaints based on the complaints, in specific areas of the law, and there will be more of that over time.
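A toy sketch of the basic shape of that answer-drafting task; real natural language generation systems are far more sophisticated, and everything here, from the response templates to the paragraph positions, is hypothetical:

```python
# Pleading generation in miniature: given the numbered allegations of a
# complaint and a reviewer's position on each, emit the corresponding
# paragraphs of an answer.
RESPONSES = {
    "admit": "Defendant admits the allegations of paragraph {n}.",
    "deny": "Defendant denies the allegations of paragraph {n}.",
    "dkt": ("Defendant lacks knowledge or information sufficient to form a "
            "belief as to the truth of the allegations of paragraph {n}."),
}

def draft_answer(positions: dict[int, str]) -> str:
    """positions maps complaint paragraph number -> admit / deny / dkt."""
    return "\n".join(
        RESPONSES[pos].format(n=n) for n, pos in sorted(positions.items())
    )

print(draft_answer({1: "admit", 2: "deny", 3: "dkt"}))
```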
And then there’s the area where my own company, Neota Logic, operates. We and Oracle were, I think, the only two players in the space at the beginning of the decade, and now there are six or eight or more, in a category that we call expertise automation, which helps people work their way through relatively routine legal problems and processes in an efficient way.
And finally, quite recently, some of the core suppliers of document management technology to the legal profession have begun to embed AI into their tools, in the same way that the legal research folks have embedded AI to make their research products better. You can see the document management vendors beginning to improve their products, particularly their search capabilities, with AI techniques.
Sharon D. Nelson: Well, let’s talk about the American Bar Association and its Resolution 112 involving AI and ethics, which was passed on August 12, 2019. It’s short, so why don’t you read it aloud, Michael, and then tell us why you think the ABA felt the need to address this issue.
Michael Mills: Sure. In the formal language of the ABA, it starts with “Resolved,” which is the way the individual sections present things to the ABA as a whole for decision. So: Resolved, that the American Bar Association urges courts and lawyers to address the emerging ethical and legal issues related to the use of artificial intelligence in the practice of law, including: (1) bias, explainability and transparency of automated decisions made by AI; (2) ethical and beneficial usage of AI; and (3) controls and oversight of AI and the vendors that provide AI.
That is, in the usual quasi-constitutional language of Bar Association resolutions, quite general, but it does set a framework and a priority that is new to the formal bar in addressing these issues, and in that respect it is a tremendous step forward, because these tools are being used quite broadly across the profession. They’re sometimes being introduced without our knowing it, in the background of a legal research system, and they’re being introduced in a very explicit and intentional way when you pick up a tool to do something in contracts, as you can in electronic discovery.
So for the Bar Association to recognize this new technology is an important step forward. One can in a sense say this is already covered by the requirements of the disciplinary rules that say lawyers must be familiar with current technology, but this is a sufficiently distinctive, important and rapidly advancing technology that a resolution dedicated solely to that topic seems very valuable.
(00:10:05)
Sharon D. Nelson: Yeah. I thought the same thing, and I was glad to see that they’re undertaking that, and I’ve actually had the honor of being appointed to the working group on this subject. So it’s been very interesting and it’s very animated.
Michael Mills: I think that working group is important because it’s fine to adopt the resolution and urge the profession to pay attention to AI, but there is rapid change in the field and there’s lots to learn, lots to think about many issues to cover. So the commitment of the bar to the working group and to the continued study of the technology and its role is I think even more important in a way than the resolution itself.
John W. Simek: Well Michael, as you pointed out, the resolution itself is kind of short and general, but there was a report that accompanied the resolution. Can you talk a little bit about what you think some of the most important points in that report were?
Michael Mills: Yeah. The report is a thoughtful 15-page overview of the set of problems that should be considered. I think there is, in all of the discussion in that ABA report and in the profession generally, some difficulty in distinguishing between the issues of AI and ethics that affect society as a whole and those that are particular to the law. You can see that in the works of two members of the Susskind family.
Richard Susskind’s most recent book, ‘Online Courts’, talks a good deal, in the sections on online judging, about the impact of AI. His son Daniel has just published a book, ‘A World Without Work’, which deals with the larger social questions that AI presents.
And those are occasionally talked about in the law. A few years ago, as AI first began to be visible in the legal profession, there were lots of kind of silly articles about robot lawyers and jobs disappearing and whether there would be any associates in the future and so on.
Those are, at the societal level, large and significant questions, and Daniel Susskind’s book is certainly the newest and one of the first-rate books on that topic. But I think for the ABA, we need to focus on the narrower issues of how this affects the profession, and within that we need to look at three different sets of people who interact with AI.
First you have government actors: courts, parole officers, sentencing judges, agencies issuing benefits, all kinds of governmental actors who are increasingly using AI tools, and there is a set of ethical and legal issues about those uses of AI.
There’s a set of uses for practitioners in the law, lawyers who are representing people both in a corporate context and in an individual client context. And finally there’s a set of issues about end users, call them clients or citizens or members of the public, who are interacting with AI-based systems. Each of those presents a different set of issues, and I think the ABA report accompanying the resolution highlights each of them in useful ways.
I think the great value of that report is that it sets, not the boundaries I should say, but a framework, a beginning agenda for the working group. Over time that agenda is going to evolve as the technology moves along and as understanding of its many intersections with the profession moves along, but I think its principal value is in setting that framework for discussion.
There is also, and perhaps this is a point we will get to, the question of the particular ethical rules of the profession that are impacted by or intersect with AI.
Sharon D. Nelson: Well, let’s talk about the ethical rules which are cited most often in connection with AI, and these include confidentiality, competence, communication and supervision of other lawyers and non-lawyers. So, do you want to take a stab at telling us why those are so relevant to AI?
Michael Mills: Sure. Let’s just take them in that order. Confidentiality is, of course, always an issue in our profession. We have an obligation greater than most businesses; health care, I guess, is heavily regulated by HIPAA.
(00:15:00)
But our profession values confidentiality for a whole variety of historical and very, very good reasons, and indeed you can see, in our profession’s adoption of other technologies, our extra sensitivity to the issues of privacy.
I think it’s apocryphal, because I’ve never been able to find a good citation, but it is a good story: when telephones were first available commercially in the city of Philadelphia, the local lawyers raised a question about whether it was appropriate for lawyers to use the telephone to communicate with clients or with each other, on the ground that those communications might not be confidential. After all, in the early days those were operator-assisted phone calls, and operators listening in might vitiate the privilege. So the profession, at least the story goes, was quite resistant to using telephones.
In the era that I grew up in, the arrival of electronic mail in the profession was, I think, dealt with in the same way. There were early, somewhat ill-informed opinions by some state bars that there were significant ethical risks for lawyers in using email, and there was certainly reluctance to use it. One concern was confidentiality: these messages were going out into someplace, who knew where, passing through servers and computers that the lawyers didn’t control, and that might raise a concern about breaching the privilege. And I’m not talking about the infamous reply-all that goes to everybody, or mistaken addresses.
But the profession did respond to that in a serious way, and I think it’s illustrative of the extra attention the profession needs to give to confidentiality in AI. Once the profession decided that it was all right to use email, you found every law firm in the world putting a long disclaimer at the footer of its email saying this thing is privileged and if you received it and you shouldn’t have, you should return it and you shouldn’t read it, and so on. Whether those have either operational or legal significance is uncertain, but it does reflect the profession’s legitimate and rightful concern about confidentiality.
And then, in many law firms that I know of, once they discovered that lawyers didn’t always understand the difference between reply and reply-all, they modified their email systems to make it harder to reply-all, and many put little add-ins into their email systems so that when a lawyer was writing to someone outside the firm and attaching a document, a dialog would pop up saying: do you realize you are writing to so-and-so, who is not in this law firm, and you are sending this document; is that what you really want to do? And yes, I do.
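The logic of that kind of outbound-mail guard is simple enough to sketch; the firm domain, thresholds and message fields below are hypothetical, and a real add-in would hook into the mail client’s send event rather than run standalone:

```python
# Outbound-mail guard in miniature: flag external recipients, attachments
# leaving the firm, and suspiciously large recipient lists before sending.
FIRM_DOMAIN = "examplefirm.com"  # hypothetical firm domain

def outbound_warnings(recipients: list[str], attachments: list[str]) -> list[str]:
    warnings = []
    external = [r for r in recipients
                if not r.lower().endswith("@" + FIRM_DOMAIN)]
    if external:
        warnings.append(f"Recipients outside the firm: {', '.join(external)}")
    if external and attachments:
        warnings.append(f"Attachments leaving the firm: {', '.join(attachments)}")
    if len(recipients) > 10:
        warnings.append(f"Reply-all check: {len(recipients)} recipients")
    return warnings

for w in outbound_warnings(
    ["partner@examplefirm.com", "opposing.counsel@otherfirm.com"],
    ["draft_settlement_agreement.docx"],
):
    print("CONFIRM:", w)  # a real add-in would pop a dialog and await a click
```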
That extra confidentiality attention is, I think, what needs to be brought into the world of AI, because most AI systems, not all but most, operate in the cloud, where the computing horsepower is readily and cost-effectively available to do interesting things. If lawyers are not as attentive to confidentiality as they were in the case of email, they may get into trouble.
The second of the ethical requirements, competence, is in my view the most important and the most difficult. If we are going to use very complex, advanced tools to serve our clients or to serve the cause of justice, we need to understand them. We don’t need to understand them at the level engineers do; we don’t need to be able to build any of these tools.
But as a senior lawyer taught me when I was a very young lawyer, I need to be able to cross-examine an engineer at a level that elicits what that engineer knows and doesn’t know, and what he can defend and not defend. That seems to me a pretty good way of describing the level of competence that lawyers need to have about these technologies.
You need to be able to ask intelligent questions, evaluate the answers, press for answers when they’re fuzzy, and then be able to make a judgment about whether a particular technology or a particular vendor is suitable.
Communication, always important. Indeed, if you look at disciplinary complaints to bar regulators across the country, lack of communication is the single most common source of disciplinary proceedings. You need to talk to your clients about what you’re doing. We have progressed to the point that you don’t need to get your client’s consent to use email, but you do need to get your client’s consent to use an AI-based system.
(00:19:59)
And then finally supervision. I think that’s really a version of competence: you can be competent in making the evaluation, but you then need to pay attention as the system is actually being used. That too is seen in disciplinary complaints; failure to be diligent is, again, one of the most common complaints, and in the case of AI that often means you turn it on and you don’t pay attention.
We need to do that.
John W. Simek: Well, before we move on to our next segment, let’s take a quick commercial break.
[Music]
Sharon D. Nelson: 10 years ago eDiscovery meant lawyers packed into a basement, fumbling with complex slow software, wondering where their lives had gone wrong. Today not much has changed. That’s why Logikcull is putting an end to eDiscovery. Logikcull is simple, powerful, instant discovery software, designed to make you hate document review less. Create a free account today by yourself, with no human interaction at logikcull.com/ltn. That’s logikcull.com/ltn.
[Music]
Sharon D. Nelson: Welcome back to Digital Detectives on the Legal Talk Network. Today our topic is “The Intersection of Ethics and Artificial Intelligence”. Our guest is Michael Mills, the Co-Founder and Chief Strategy Officer of Neota Logic.
John W. Simek: Well, one of the huge ethical issues that we are faced with is bias and transparency, which really involves Model Rule 8.4(g) that says, it is professional misconduct to engage in conduct that a lawyer knows or reasonably should know is harassment or discrimination on the basis of race, sex, religion, national origin, ethnicity, disability, age, sexual orientation, gender identity, marital status or socioeconomic status in conduct related to the practice of law.
And I’m sure you’re familiar, Michael, with one of the most famous examples, a piece of software called COMPAS, which is actually used by quite a few courts in sentencing to predict the likelihood of recidivism in criminal defendants, and there have been several reports that essentially said this is black-box stuff and it really doesn’t work. We don’t know what’s going on inside, and they don’t get it right the majority of the time.
So, can you comment a little bit on that?
Michael Mills: Yes, I think this is an example of the use of AI by government actors where it is particularly important, because it’s being imposed on those who are the subjects of that government. My own view, maybe a somewhat idiosyncratic one, is that for these kinds of tools, which are being used by government agencies and government actors to make decisions that have significant impact on us as citizens and residents of a jurisdiction, those tools need to be open source.
And I’ll use an analogy to the COMPAS software: voting software. Voting software is mostly proprietary, and that means that none of us is in a position to evaluate whether voting software and election security are in fact adequate. I think the great service that ProPublica did in its very detailed analysis of COMPAS was to demonstrate that without transparency, these systems are dangerous.
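The heart of ProPublica’s COMPAS analysis can be sketched in a few lines: compare false positive rates across groups, that is, how often people who did not reoffend were nonetheless scored high risk. The records below are invented for illustration:

```python
# Disparity check in miniature: false positive rate by group, the metric
# at the center of ProPublica's COMPAS analysis. Records are hypothetical.
from collections import defaultdict

# (group, scored_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", False, False), ("B", True, True), ("B", False, False),
]

false_pos = defaultdict(int)   # scored high risk but did not reoffend
negatives = defaultdict(int)   # everyone who did not reoffend
for group, high_risk, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if high_risk:
            false_pos[group] += 1

for group in sorted(negatives):
    rate = false_pos[group] / negatives[group]
    print(f"Group {group}: false positive rate {rate:.0%}")
# A large gap between groups is exactly the kind of evidence that, absent
# access to the model itself, only transparency about outcomes can surface.
```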
And that’s perhaps an extreme view, but it is mine, and I think the profession, not as users of these systems but as upholders of the values of law, the larger values of law, has an obligation to address these things even if the profession is not itself using them.
Sharon D. Nelson: Well, I know John and I couldn’t agree more. We do not like black-box AI. Let’s move on to the independent High-Level Expert Group on Artificial Intelligence, which the European Commission set up. It published Ethics Guidelines for Trustworthy AI. What do you think of their guidelines? Tell us a little bit about them.
Michael Mills: I think they’re a very valuable step. There is work going on at the European Commission, which is the one you just referred to, at the OECD, and at an organization in California; there are high-level ethics considerations and discussions and examinations going on across the industry.
There’s a company called OpenAI, which was founded by some people in the industry in part with the idea that these things should be not only discussed but implemented in open ways. The European Commission guidelines and the report that accompanies them are, I think, very well done.
As is always true with the Commission, they start at the fundamental level: what are the rights, the human rights, we’re trying to protect here?
(00:25:00)
The most useful part of that report, to me, is in Section 3, an early draft they call a pilot version of the assessment list for Trustworthy AI, which proposes 20 or 30 questions that one wants to look at when evaluating a system. Some of them are at a very high level, a level of social justice in a sense, and it’s unlikely that a practicing lawyer making a decision about which contract analysis tool to buy is going to need to answer some of those social justice questions.
But it is still a very valuable checklist for looking at a particular system.
John W. Simek: Michael, I’m a little skeptical, because a lot of folks out there love to use the term AI because it sounds sexy; it’s really kind of a marketing thing. But do you really think that ethical AI is even possible?
Michael Mills: I suppose you could start with a more fundamental question: is ethical behavior possible? Let’s make the optimistic assumption that it is. I think ethical AI is possible if you recognize the limitations. Because AI in this context is largely defined as doing things that humans would otherwise do, we need to be realistic about how we can evaluate and measure what the systems do.
But that’s also true of human processes. There was a computer scientist who said, early on in the discussions of AI: look, I’m interested in artificial intelligence, but what I’m really concerned about is natural stupidity. Humans make lots of mistakes, and just because I have written an algorithm doesn’t mean that I’m doing away with human mistakes.
We all not only make mistakes, but we also all make probabilistic judgments; we make uncertain judgments. But I think that if you look at the three principal sources of problems in AI, you can advance toward an ethical approach by looking first at the data you’re using to train the system. All of the systems we’re talking about here in the law are supervised learning systems, in which a large data set has been analyzed, people have made judgments about which bucket things fall into, and then the algorithms have been trained to replicate that process.
If you look at the data, you may well find real evidence of bias or incompleteness; you need to look very hard at the data. You need to look at the algorithms. I’m not suggesting that lawyers are going to be able to evaluate gradient descent decisions that engineers have made, but you need to at least pay attention to those algorithms.
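Here is a minimal sketch of that “look hard at the data” step: before training, audit label base rates and missing fields across a sensitive attribute. The field names and records are hypothetical:

```python
# Training-data audit in miniature: compare label distributions and record
# completeness across a sensitive attribute before training anything.
from collections import Counter

training_rows = [
    {"gender": "F", "label": "denied", "income": 52000},
    {"gender": "F", "label": "denied", "income": None},
    {"gender": "M", "label": "granted", "income": 51000},
    {"gender": "M", "label": "granted", "income": 49000},
]

by_group = Counter((r["gender"], r["label"]) for r in training_rows)
missing = Counter(r["gender"] for r in training_rows if r["income"] is None)

print("Label counts by (group, label):", dict(by_group))
print("Rows with missing income by group:", dict(missing))
# If one group's labels skew heavily one way, or its records are less
# complete, a model trained to replicate those judgments will inherit that.
```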
And I think it’s fair to say you also need to pay attention to the engineers, the folks who are designing these systems. If you’re building a system that is designed to answer questions about, say, gender or gender discrimination, and all of the engineers on that team are men, then I would ask some serious questions about whether the algorithms are going to be evaluated fairly and sensibly, even if any bias is unconscious.
So I think ethical AI is possible, but it requires real rigor; after all, ethics in the law requires real rigor and hard work. That’s especially true in the case of AI, because we’re talking about things that all but the most advanced mathematicians don’t truly understand. Even the fanciest scientists at some of the AI companies will tell you: I don’t actually understand how this algorithm works.
I built it, it’s running on its own now, and I can’t backtrack to tell you how it made that decision. So it’s hard, but it’s valuable, and in many contexts in the law the decisions made by these systems will be an alternative not to a human decision, much less a good human decision, a fair, fully factually founded human decision, but to no decision at all.
People who do not have the resources to hire lawyers, people who have no access to courts will benefit from these systems. We as those who build them and design them need to be careful and rigorous and honest and transparent in what we do.
John W. Simek: Well, in Isaac Asimov’s ‘I, Robot’, which is really a progression of AI stories, our future in the AI world is a dystopian one. Michael, when you look into your crystal ball, what do you see in a world where AI is everywhere?
Michael Mills: I don’t see a dystopian future for reasons that are attributable to AI. There are plenty of concerns about the long-term future, but it is not driven by AI. We have significant privacy issues, and I think those are really at the foundation of the concerns about AI, not so much the technology itself.
(00:30:07)
Enormous quantities of data are now available for these machines to chew on, and we should be paying attention to that data: where it comes from, how it’s used, who owns it, what benefits there are from it and who benefits from its exploitation.
The technology itself, I don’t think, is the issue. AI is increasingly everywhere. There was an article recently from a lawyer somewhere saying we really need to plan for an era in which the directors of corporations are robots rather than human beings. I don’t think that’s terribly realistic, but AI is showing up everywhere.
I bought a new refrigerator recently, and the manufacturers of the refrigerator said they were using AI. I have no idea how or why, and I’m not sure that I want them to, any more than I want my refrigerator to tell me that it’s empty. But AI is going to be everywhere in the sense that smart algorithms are being embedded in smaller and smaller devices, in more and more devices, and I think that’s largely beneficial. It is mostly about productivity, and productivity is a good thing; productivity is the only thing that advances human well-being.
John W. Simek: I think, Michael, they probably slapped that AI label on your refrigerator to get that sticker price higher.
Michael Mills: They probably did.
Sharon D. Nelson: Well Michael, thank you for being our guest today. I always enjoy our conversations, and happily we occasionally get to see you in person and have those conversations. I know you said that AI and ethics was a great and interesting topic, and I hope you enjoyed boning up on some of the topics we suggested within the podcast. You did just a great job with it, and thank you so much for taking the time.
Michael Mills: Happy to have done it.
John W. Simek: Well that does it for this edition of Digital Detectives. And remember you can subscribe to all the editions of this podcast at legaltalknetwork.com or in Apple Podcasts. And if you enjoyed our podcast, please rate us on Apple Podcasts.
Sharon D. Nelson: And you can find out more about Sensei’s digital forensics technology and cybersecurity services at senseient.com.
We will see you next time on Digital Detectives.
[Music]
Outro: Thanks for listening to Digital Detectives on the Legal Talk Network. Check out some of our other podcasts on legaltalknetwork.com and in iTunes.
[Music]