Northwestern professors John McGinnis and Larry Birnbaum discuss computer intelligence, emergent technology, and its effects on lawyers and the law.
Larry Birnbaum is a professor of Electrical Engineering and Computer Science at Northwestern University’s McCormick School of Engineering. His...
John O. McGinnis is the George C. Dix Professor in Constitutional Law at Northwestern Pritzker School of Law. He...
Daniel B. Rodriguez was appointed Dean and Harold Washington Professor at Northwestern Pritzker School of Law in January 2012. Before...
In this episode of Planet Lex, host Daniel Rodriguez speaks with Northwestern Pritzker School of Law George C. Dix Professor in Constitutional Law John McGinnis and Northwestern University’s McCormick School of Engineering Professor of Electrical Engineering and Computer Science Larry Birnbaum about emergent technology and its effects on the law. Dan opens the interview by reminding everyone that it has been 10 years since the publication of Raymond Kurzweil’s book, “The Singularity Is Near: When Humans Transcend Biology,” and poses the question of exactly how close we are to the day when computer intelligence surpasses human intelligence. Larry shares his belief that the singularity is coming, though the timetable is unknown. John agrees, noting that the victory of IBM’s computer system Watson over its human competition on Jeopardy shows the technology accelerating, and that software and connectivity are improving in addition to hardware computation. Both guests analyze how this technology might significantly impact intermediary positions within the workforce and consider the implications for the practice of law. Larry concludes that there are many aspects to lawyering and that you have to break down those segments to figure out which will or will not be adversely affected. John does not anticipate emergent tech putting lawyers out of business in the near future, but does think we are at the beginning of how society integrates some of that tech into the work that humans will still do, and expects a long evolution of progress in this area. Both guests evaluate the ways in which tech innovation might improve overall equality in society. They close the interview with an analysis of how difficult it is for our regulatory structure to keep up with advancements in technology, and of the challenge of making the risks associated with these advancements understandable to people.
Planet Lex: The Northwestern Pritzker School of Law Podcast
Technological Advancements and The Law
Intro: Welcome to Planet Lex, the podcast of Northwestern Pritzker School of Law, with your host Dean Daniel B. Rodriguez, bringing it to you from Chicago, Illinois. Take it away Dan.
Daniel B. Rodriguez: Hello and welcome to this episode of Planet Lex, podcasting from Northwestern Pritzker School of Law in Chicago, Illinois. My name is Dan Rodriguez, your host.
Delighted to be joined by two special guests today to talk with us about technology and law, law and technology, an impossibly broad topic, but one that I know they will be able to shed some very interesting light on.
So from New York, my colleague, friend, John McGinnis joins us. He is the George C. Dix Professor in Constitutional Law, here at Northwestern Pritzker School of Law, and he is the author of, among many other works, a fascinating book from 2013, ‘Accelerating Democracy: Transforming Governance Through Technology’.
I am also joined here at the Law School in person by Larry Birnbaum. Larry is a Professor of Electrical Engineering and Computer Science at Northwestern’s McCormick School of Engineering and Weinberg College of Arts and Sciences. He is also the Chief Scientific Advisor of Narrative Science, a leader in advanced natural language generation software.
So let me begin by noting an anniversary, if you can believe it: it’s coming on 10 years since Ray Kurzweil, a Director of Engineering at Google, I guess that’s his title among others, wrote ‘The Singularity Is Near: When Humans Transcend Biology’. And so now, 10 years on, I guess the question I want to ask is, how near are we?
Kurzweil of course brings together a number of themes; I don’t have the temerity to try to summarize them, but he notes some of these laws (not real laws, but principles): Moore’s law, the capacity of integrated circuits to grow exponentially, doubling in power every two years, and what he calls the law of accelerating returns. And from all of that, from this depiction of developments in science and particularly in technology, he predicts the singularity, where machine intelligence will surpass human intelligence.
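As a rough numerical aside, the doubling Dan describes compounds dramatically over a few decades; here is a back-of-the-envelope sketch (purely illustrative, not Kurzweil's own model):

```python
def capacity_multiplier(years, doubling_period=2.0):
    """Relative capacity after `years`, assuming one doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Doubling every two years compounds quickly:
for y in (2, 10, 20, 40):
    print(f"after {y:2d} years: {capacity_multiplier(y):,.0f}x")
# after  2 years: 2x
# after 10 years: 32x
# after 20 years: 1,024x
# after 40 years: 1,048,576x
```

Forty years of two-year doublings yields a millionfold gain, which is the arithmetic behind talk of "accelerating returns."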
So whether focusing on that principle or the general themes that he proposed now 10 years into it, let me first ask this question to Larry and get John’s view, how near are we?
Larry Birnbaum: Well, I used to think that Kurzweil was full of it, but after a while I realized he was probably right. The timing may be a little bit off. I think Herb Simon predicted in 1960 that a computer would be world chess champion by 1970, and it was not until the mid-90s that Deep Blue defeated Kasparov, but I don’t think being off by 25 years is that big a deal in the grand scheme of things. So I at least think it’s coming.
Daniel B. Rodriguez: So, if I understand your answer, it’s a question of when rather than whether.
Larry Birnbaum: I think so, yeah.
Daniel B. Rodriguez: John, would you venture a view? I know, like me, it’s maybe a bit out of our comfort zone, but you have read a lot and write a lot about accelerating technology in democracy, do you have a view about 10 year anniversary of Kurzweil’s prediction?
John McGinnis: I agree very much with Larry that we are not quite sure of the dates, but I do think we see acceleration. One aspect of that, I think, was that Deep Blue beat Kasparov in 1997, while Watson beat the best players at Jeopardy in 2011, and that strikes me as a really big change, because while Deep Blue could beat Kasparov at chess, it didn’t have many other uses.
Watson, on the other hand, has become a whole division of IBM and shows the expanding scope of the accelerating technology. It also I think shows importantly that it’s not only hardware computation, which is exponentially increasing, but that our software is also improving and our connectivity is improving. All those aspects were used in Watson.
And so, at least to a layman again, it’s this combinatorial aspect, software and connectivity as well as computer hardware all moving very rapidly together, that seems to promise some very dramatic changes even before we get to the singularity. We don’t ever need to reach that point before we are going to feel the acceleration in all aspects of life, including law.
Daniel B. Rodriguez: Well, here is where your work comes in. Let’s be clear about what Kurzweil and others writing in that genre did not say or did not purport to say, which is very much at all about our public institutions, about the role of law and legal rules and all of those issues. That is not to say they were either agnostic or ignorant about those issues, but as you have presented in your book ‘Accelerating Democracy’ and written about, and others as well, the rate of technological change may exist as a given, but what we do with that technology, the consequences intended or unintended of that technology, the human software, as it were, is a big piece of that.
So with that, let’s turn — we are going to get to the impact on law part, I promise, but before we get there I want to talk about the larger question of the impact of this rapid technological change which we all observe on the workforce. And as sort of a way into that and as a bit of an infomercial for Larry Birnbaum’s work in connection with Narrative Science I would like to ask him to take us through a little bit about that company, because of course it directly deals with this fascinating question of the impact of technology on at least one very important aspect of work.
Larry Birnbaum: Thank you, Dan. So the company’s business really is to make data meaningful by rendering stories that people can understand. We started out doing sports stories and business stories in media, but it became very clear soon that the value proposition of the technology was this: with all the data being gathered and analyzed around us, the last mile of decision making is, for the time being at least, going to be held by human beings. And so, in order to make it worthwhile to gather and analyze all these data, you have to actually take the results and put them in a form that people can understand.
And this is one of the reasons why visualization became such a hot topic in computer science, but we do what you might call verbalization or narratization. In other words, we take these data and transform them into meaningful stories that people can understand.
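That data-to-story step can be sketched in miniature. The following is a toy, template-based renderer (an illustration of the general idea only, not Narrative Science's actual Quill technology) that turns a structured box score into a sentence:

```python
def game_recap(home, away, home_score, away_score):
    """Render a one-sentence recap from structured game data (toy example)."""
    if home_score == away_score:
        return f"{home} and {away} played to a {home_score}-{away_score} tie."
    winner, loser = (home, away) if home_score > away_score else (away, home)
    hi, lo = max(home_score, away_score), min(home_score, away_score)
    margin = hi - lo
    # Word choice keyed to the data is what makes the output read like a story:
    verb = "edged" if margin <= 2 else "beat" if margin <= 7 else "routed"
    return f"{winner} {verb} {loser} {hi}-{lo}."

print(game_recap("Cubs", "Sox", 9, 1))   # Cubs routed Sox 9-1.
print(game_recap("Cubs", "Sox", 3, 2))   # Cubs edged Sox 3-2.
```

The real systems Larry describes are of course far richer, but the core move is the same: structured numbers in, a meaningful sentence out, and it runs at a scale no human writing staff could match.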
The result, I think that’s — I mean, obviously I think that’s a tremendously valuable thing. It’s not the kind of thing that people originally thought computers would be doing. It seems to be the kind of thing that you often hire human beings to do.
That said, I think in a lot of ways the purpose of the company is really to personalize this, to do it at scale, and this is really what computers always enable, doing things at a scale that was never before possible.
So we are not talking only about media stories, although we do do some, for example, quarterly earnings previews for Forbes.com and Little League stories with a company called GameChanger, an application which I love. But really, I think the latter points toward the personalization of writing: a story for you, right now, about the data that are relevant to you. That’s not something that would ever be feasible using human means, because you would have to have sort of a personal analyst and writer walking around with every single human being in the world.
Daniel B. Rodriguez: Well, you use — so play a little bit of the devil’s advocate here, you use words like personal and personalized and kind of bringing in the human into that. The other side of it is, if and insofar as this effort, so for example, your Quill Program and others achieve some success, doesn’t that run the risk of driving humans out of the business?
So Stephen Norton tees up this question in a recent op-ed. He says that as AI advances and more machines start making unsupervised decisions, companies will face tough questions about exactly when humans do or don’t need to be involved in decision making.
Larry Birnbaum: Look, I think accountability is going to be a gigantic issue, and not just — obviously, everywhere you are seeing AI applications, whether it’s self-driving cars or machines like the ones we build, software machines that write stories to make data meaningful. I think what we are going to be hoping for from these machines is a level of scrutability, that we can look into them and understand what they are doing and why they are doing it; interrogatability, that we can ask them, as we would ask a person, why did you say that or why did you do that, and get an answer that we can understand and then agree or disagree with; and ultimately perhaps even have a discussion with the machine about what we would have preferred to have seen, or whether we thought that led to the right outcome.
Daniel B. Rodriguez: So just to come back to Kurzweil once again, when he gives the punch line, as it were, predicting that machine intelligence will trump human intelligence, when I listen to Larry I get the sense that it’s not either/or, and that the marriage, as it were, for a company like Narrative Science and others is a marriage of humans and technology. So it’s not about machine intelligence outpacing human intelligence, not about, as it were, Watson defeating humans at Jeopardy; it’s about bringing technology together with the human. I don’t want to put words in your mouth, but it sounds like we needn’t worry or be alarmed that the humans and the humanity drop out of the equation.
Larry Birnbaum: Yeah, I will leave that to the genetic engineers, but yeah, I think right now we are talking about in a lot of ways complementary strengths and a kind of symbiotic relationship. Of course what results from that might be a kind of — and I think this is one of the things that Kurzweil thinks about or has speculated about as a sort of a joint entity, which may be neither fish nor fowl.
Daniel B. Rodriguez: So let me ask you this question, John, though certainly either of you can answer it. The literature, as it were, talks about the possibility of changing the nature of work; I think it was Rifkin who used the title ‘The End of Work’ to describe this, and these writers speak in sort of apocalyptic terms about the replacement of humans with robots.
They use this phrase which really strikes me, intermediaries, and the prediction is: look, artificial intelligence and these developments are not going to wipe out work entirely, but there are going to be certain intermediaries, folks in the workforce, who will be significantly threatened in terms of the kind of work they do. So bank tellers, hotel staff, a variety of kinds of folks in the workforce will, if not be entirely wiped out, find those kinds of jobs adequately replaced by machines.
So that tees up, of course, the question you have written a lot about, which is whether lawyers, in this depiction or description, are intermediaries. Are they more like the proverbial bank tellers and hotel staff, or are they acting in a way that is far from intermediary, impenetrable, as it were, to technological change?
John McGinnis: Well, I think it depends. One of the important things to understand is that law is a very heterogeneous profession. It’s very broad, lawyers do a lot of things, and so I think you have to break down what lawyers do. The generalization I would make is that insofar as it’s just a matter of giving information, giving basic information, I think lawyers, like other experts, will be under assault, because that information will not only be made more transparent by computation, it will be well categorized. That, I think, is a real problem for a lot of lawyers. For instance, writing a simple will.
On the other hand, if I may just say it, there’s prediction, where I think machines are not quite there yet and humans can still add value. And then there’s an emotional bonding aspect to lawyering, and I don’t think machines are close to that.
So I think you have to break down the many different aspects of lawyering to see what will be adversely affected and what actually will not be at all.
Daniel B. Rodriguez: So in an essay that you wrote in City Journal recently, entitled ‘Machines v. Lawyers’, you provide a very helpful categorization of at least five, and as I listen to you maybe the list is even longer, of the various tasks that lawyers perform, and you sort of deconstruct lawyering in a way that helps us understand where machines might be in the advance and where they might not be. So I certainly want to recommend that essay.
But beyond that, I want to press you on one aspect on which Larry’s perspective may be valuable, given his work at Narrative Science, and that’s briefs and memos. So let me see if I have this right. The writing of legal briefs, the writing of advocacy statements, arguing on behalf of clients, making a written impassioned plea in favor of a client in a particular case, is distinct from a will or a legal form. That has long been seen as the lawyer at their best, using legal principles, doctrine, the fruits of research; but it takes the great human lawyer to write that brief and that memo to help convince a judge to rule in their favor.
So along comes a program like Quill or something else that moves as an advance beyond writing sports stories and writing other media statements, to writing a brief or a memo and uses all the algorithms and all of the technology available to write a legal brief.
Can you see a near term future in which technology and the development of technology may put lawyers, not out of business, but reduce the need and the efficacy of these pieces, very important pieces of legal advocacy?
John McGinnis: Well, if I could say, I certainly don’t see technology putting all lawyers out of business in the near future. One way, again, to focus on it is that disruptive change often comes in at the low end. So I think we could see it in very simple briefs, things that would not be that different from what Narrative Science might do today (I will let Larry speak about that), very clear categories, or a very simple tax dispute.
And I would also say that at the beginning it wouldn’t mean there wouldn’t be humans involved. I mean, humans could use this as an input, lowering the cost, because you might see a first draft that way. I believe, for instance, there’s already a program in the development stages where, as you write your brief, it will suggest various case law and other things to cite. So that’s a beginning of showing how you might integrate some of this technology into the work that humans will still do.
So I see it as a progression, rather than expecting that we will see a very high-level brief done by a computer anytime soon. I don’t think that’s how it will happen. We will see a long evolution of progress in this area, beginning with very simple kinds of briefs, with first drafts of briefs, or with inputs to briefs that machines can give to improve and lower the cost of the human product.
Daniel B. Rodriguez: So let me ask that same question to Larry. Are we going to see some version of Quill 2.0 as Narrative Science moves into law?
Larry Birnbaum: Well, I agree with John. I think this is going to happen incrementally, as we come to understand the information that’s relevant to writing or decision making, and the kinds of reasoning that are necessary in a given domain, like the law.
I want to draw an important distinction. At Narrative Science, our technology starts from structured numerical and quasi-numerical data and renders that into a story. It doesn’t start with unstructured data such as natural language texts or images, and there’s a big distinction in the kinds of technologies that are used in these two cases.
Most of the work on unstructured data right now ends up using statistical methods, and they have a tremendous amount of strength, but one of their weaknesses, I would say, is an inability to do long chains of inferential reasoning of the kind that you see in argument, in math, in writing programs, or, for that matter, in the law.
And so kind of making sure that there is a very clear line of logic is something that’s very challenging for these statistical machines. They can bring together pieces, snippets that are like, hey, this looks relevant, this looks relevant, this looks relevant, but putting the whole thing together and saying globally it all fits together and makes sense, that’s challenging for them.
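The local "this looks relevant" signal Larry describes can be caricatured in a few lines. The sketch below (deliberately crude, not any real legal-search system) ranks snippets by bag-of-words overlap with a query; it surfaces the relevant pieces, but notice that nothing in it checks whether they chain together into one coherent argument:

```python
def relevance(query, snippet):
    """Bag-of-words overlap: a purely local relevance signal."""
    q, s = set(query.lower().split()), set(snippet.lower().split())
    return len(q & s) / len(q)

snippets = [
    "the contract requires written notice of termination",
    "notice must be delivered within thirty days",
    "the weather in chicago is cold in january",
]
query = "when must termination notice be delivered"

# Rank snippets by local relevance; no step verifies global coherence.
ranked = sorted(snippets, key=lambda s: relevance(query, s), reverse=True)
print(ranked)  # the two contract snippets outrank the weather snippet
```

Assembling the top-ranked snippets into a single valid chain of inference is exactly the part this kind of scoring leaves undone, which is the weakness Larry points to.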
So I think that the way this is going to progress, and I know this is a long answer, is that underlying pieces of the law are going to have to become more formalized incrementally in order for these kinds of technologies to get a foothold.
Now, speaking as an outsider, I think the law should welcome that. I would say it’s a way of increasing the precision, as it were, of underlying legal principles and legal practice: to start thinking about how we directly represent the abstract structure underlying a particular kind of contract, for example.
Daniel B. Rodriguez: So John, you make the argument in ‘Machines v. Lawyers’ that maybe this precision needs to be traded off, or will in fact be traded off, with prestige. And you note de Tocqueville’s observation about lawyers, nearly 200 years ago, being the aristocrats of America’s democratic regime, forming, as he put it, the highest political class and the most cultivated circle of society. But then you go on somewhat ominously to say that to some degree engineers, the descendants really of blacksmiths, are destined to replace the wordsmiths in society’s commanding heights.
So do we face the prospect of lawyers being less prestigious as a class in society because of this technological change?
John McGinnis: I think that’s true. In fact, I think we are already seeing it. You see it if you look at where the excitement in the world is today: even in applications to law school, one big competitor is Silicon Valley and the software world, at least for those who are somewhat numerate. So I am already seeing a slight reorientation in that direction.
And my own sense is that’s likely to continue, because of the power and the sense of progress that we see there. Law is essentially always a sort of backward-looking discipline, and I think that in a futurist world, to generalize, it’s going to have trouble competing for the hearts and minds of people coming up, in terms of prestige.
So that would be a long-term prediction of mine.
Daniel B. Rodriguez: So you say, and here I quote from the introductory chapter of your ‘Accelerating Democracy’ book, that fast-moving technology can be socially disruptive. But throughout that book and other work you have written, you give, I won’t say a rosier picture, but a very positive and optimistic picture of what the fruits of that social disruption will be. To take one example from the many conclusions you draw: you say that innovation, particularly technological innovation, is likely to improve overall equality, the social condition and equality in society. Can you say a couple of sentences about the basis of that prediction?
John McGinnis: Yes. I know it’s quite a contrarian view, because most people talk about growing inequality. I think it comes from one central fact, which is that most of what we are producing today is really intellectual property. We are coming up with ideas, and ideas much more rapidly become the common property of mankind than physical things.
I mean, only one person can own Blackacre, but everyone can rapidly consume ideas. And one can actually look at this, I have been looking a bit at it: it took the watch more than a century to go from the very heights of society, the very richest people, to being generally owned; the smartphone, less than five years.
So I think one aspect of our inequality debate is mistaken: we should really focus on the equality of consumption. The goods that are becoming free and low-priced and rapidly available to everyone in society are actually a big equalizing influence, and I think that comes from the basic idea that the world is dematerializing. It’s not material things that are really central to our lives, and once the world is dematerialized, people much more rapidly enjoy the immaterial in common than the material. So that’s my optimistic vision, even for equality.
Daniel B. Rodriguez: Just to drill down a little bit on that. The empirical basis of that vision, I suppose, derives from predictions about certain technologies. I read something recently, a recollection of an essay collection published in Fortune Magazine in 1955 entitled ‘The Fabulous Future: America in 1980’, in which leading American intellectuals were called upon to make predictions, and the great game theorist John von Neumann predicted that by 1980 energy may be free, much like unmetered air.
So I suppose you can always find these fanciful predictions, but certainly, and I would like to get Larry’s reaction to this, some of the predictions about the revolutionary potential of technology have not panned out.
Larry Birnbaum: Well, energy will not be free, but computation will be free, and connectivity will be free. I do want to say something about Moore’s Law, by the way. I am not really in the chip architecture business, but there is some concern that Moore’s Law is slowing down and coming to an end, that we have reached diminishing returns with the current physical technology we use to build the underlying chips.
So there has not been the same level of growth and drop in costs over the last few years that there was historically, when Moore drew that famous line. And there is a concern that we have been living easy on the fact that everything doubles in less than two years and costs drop by half in less than two years. That has provided a tremendous luxury for us, and if it’s going away, we are going to have to start getting smarter to keep those gains going. So that’s just a heads-up about that.
Look, I am tremendously optimistic about the future as well. I think the level of wealth that automation and AI and computer science generally will be able to produce is bound to be good. I mean, in some sense that wealth has got to be diffused somehow.
I do sometimes worry that the transitional period may not be quite so easy as maybe John indicates, that as we go through the change to where this wealth is really easily available to everybody, there may be some bumps along the way.
Daniel B. Rodriguez: Well, just to underscore that, and John, I expect you would agree with this: even if your intermediate and long-term prediction is greater social equality, go back to the example of lawyers we were talking about. Some lawyers will be driven out of business, as will some bank tellers, as will some hotel staff. I mean, you can go down a list, and it’s not a trivial list, of jobs that will be eradicated, to the point where there will be a fundamental question of what we do about the social dislocation of individuals driven out of the workforce.
Now, I know that’s sort of hyperbolic, but I can’t help asking the question: what do we do about those folks?
John McGinnis: Well, I think that is a serious issue, the transitional issue. I tend to think the solution is, at least in part, technological. For one, I do not subscribe to the idea that there is a set number of jobs; technology always leads to other new kinds of jobs. I do not think computers will have the connection to humans that other humans have, at least anytime soon, and so relational jobs I think are going to become important, but that requires training. And there, I think, education is often high-priced and ineffective, and we are seeing some disruption in that area.
And I think we have to encourage that through regulatory change to make that competition more effective, because people being able to reeducate themselves inexpensively, to do new things, is going to be very important. Technology can be an aid to that, if we have the right regulations. I think one of the problems with Kurzweil and some others is that they are not too worried about the regulatory environment, but the regulatory environment can prevent the fruits of technology from being broadly distributed, or prevent education from being improved technologically. So that, I think, should be a particular focus of policymakers and indeed lawyers.
Daniel B. Rodriguez: Well, you have written about this, as have others, the nexus between the regulatory environment and structure and technological change, and I think it is absolutely essential for just the reasons you said. Let me throw out an example of this that struck me as interesting and important.
So take, for example, the unintended consequences of technological change. I am thinking, in the medical field, of the use of antibiotics, where one important and difficult result is the increase in the number of drug-resistant germs, infections that present technology finds much more difficult to treat. So we all know that.
The puzzle is that our regulatory structure, and I am thinking particularly of the way the Food and Drug Administration and society regulate pharmaceuticals, makes it, as we all know, very, very difficult to keep up with the technology and with drug-resistant germs; that’s just one among many examples. But I wonder if you would comment on what we do about that. I mean, that’s an example of our regulation being far, far behind the actual rate of technological change.
John McGinnis: Well, Peter Huber, whose book on the FDA and these issues I recommend, thinks there really needs to be a paradigm shift. The FDA essentially requires these very large studies to validate drugs, rather than using general scientific knowledge to make predictions. And he says it’s essential to make that change rather than to continue with these large-scale trials, because many of the drugs are going to be personalized; moreover, we have more information about their mechanisms.
So that kind of complete paradigm shift, which I think is coming from a more personalized medicine, isn’t yet reflected in our institutions, and I think that’s a general problem. We have an institutional lag, and I think that comes again from law being, quite understandably, a backward-looking institution. We are very focused on precedent and tradition, which may be less effective organizers of a world that is rapidly changing. And so, at an abstract level, that is I think a huge problem for policymakers and lawyers in a world where change happens much faster than it did in 1500 or even 1700, when tradition and precedent may have been a much better guide.
Daniel B. Rodriguez: I want to come back to something that Larry said and press him a bit on this, if I may, and that is this debate about whether technological change will slow down; you used the example of Moore’s Law. You may know that there’s a raging debate within Northwestern University involving two prominent economists, Joel Mokyr and Robert Gordon, who have been doing what you might call an academic road show for quite some time on the question of whether the rate of economic growth is truly slowing down.
And of course related is — and Larry is here nodding, so I am sure he knows all about this debate; I would like to get your comments on it — Gordon’s claim that growth will slow down quite a bit, to a small percentage of GDP. It harkens back to a comment John Horgan made in his 1996 book ‘The End of Science’, where he said, and I quote, “the modern era of rapid scientific and technological progress appears to be not a permanent feature of reality, but an aberration, a fluke.” There will be, he says, no revelations in the future comparable to those bestowed upon us by Darwin or Einstein or Watson and Crick.
This is a big and important debate and I would be interested Larry in your reaction to that as a description of our economic future.
Larry Birnbaum: Well, I think it’s false on both counts. I mean, I have seen the Gordon and Mokyr duo in action, and obviously my sympathies are with Joel Mokyr in terms of his optimism. I think there are, and I think John alluded to this, issues in the measurement of wealth or wellbeing that are a little problematic right now, in trying to understand what standard of living people currently experience when so many of the goods they are consuming actually look like they are free.
But without getting into those kinds of technical questions, which are really beyond my expertise, I think for anybody in the computer business to look at this and say, well, if it’s not showing up in the productivity statistics, then honestly there must be something wrong with those statistics — because, I mean, from our own everyday experience, take something relatively trivial like word processing and the level of increased productivity, or email or —
Daniel B. Rodriguez: Or smartphone.
Larry Birnbaum: Yeah, or GPS. I mean, it just seems beyond understanding that these haven’t actually improved the efficiency and productivity of people; of course they have, and if we can’t measure it, then we need to work harder on figuring out how to measure it.
But that having been said, I don’t think things are slowing down. In fact, computer science is actually at an interesting phase in academics right now because of the incredible flood of students coming to major in CS; the numbers of students in our classes are going up exponentially, although they will level off at some point.
Daniel B. Rodriguez: You are taking them away from law schools.
Larry Birnbaum: Yeah. No, really, it’s phenomenal, and we are gratified by it. Of course, we are also sort of overwhelmed by it, and we are trying to learn how to scale. I mean, teaching is a retail business, and trying to turn it into a more scalable business has become something we have thought about.
But these students are not going into this solely out of careerist impulses. I mean, I think they see what we also see: there are times in the development of a field when it has expanded to the point where, all of a sudden, it’s actually touching so many things that it’s going to expand faster, not slow down. I think computer science has finally reached the stage where its ubiquitous applicability is becoming apparent.
So I am incredibly optimistic about all the wonderful things that are going to happen in computer science and the law, and computer science and medicine — that will be great for law and medicine, but also great for computer science and great for humanity, I think. So I disagree completely; things are not slowing down.
Daniel B. Rodriguez: You know, a question for both of you that I guess falls under the rubric of the connection between technology and societal opinion. When I google self-driving cars or Tesla now, in the last several weeks, I don’t just get a description of Musk and the revolution in self-driving cars; most of the articles are about the tragedy of the gentleman who was killed while riding in a self-driving car. And of course then there’s the onslaught of op-eds and others saying, see, we told you so, or this technology has hit a very significant glitch.
And I am sort of curious about your reaction: does something like that — obviously a tragedy, and momentous as judged by the media attention — create an enormous timeout or an enormous slowdown because of the public impact on the technology’s reputation? Because for every tragedy like that, something else will happen; there will be a disaster with 3D printing, there will be a disaster with medical technology, and all of that. So I wonder if both of you would briefly reflect on, not necessarily that episode, but its impact.
John McGinnis: Yes, this actually goes back to something that shows there is importance in studying the past. It really goes back to something the famous French economist Frédéric Bastiat said: he suggested that much of our policy calculation is influenced by the seen rather than the unseen. And of course the seen — the accidents in this case — are visible; it is a tragedy. But already people have noted the unseen lives saved by autopilot, not only statistically in what’s happening today, but through the further development of autopilot.
So I think this is a human problem in our policy calculus, quite apart from technology, and I think we really need to think about technology in certain ways to make the unseen more visible, so we can make our policy calculations better.
And so one aspect of that which I would like to see is prediction markets, which could predict, for instance, what the number of driving deaths in the United States would be under different kinds of regulations — in other words, banning autopilot or letting it through — because otherwise I do fear that some of these disasters, by getting a lot of attention, may retard progress.
So I think that’s one area of structure for policy: how do we make what is not visible — all these lives saved, in this case — more relevant, not only to policymakers, but to the public?
Daniel B. Rodriguez: Thank you. Larry.
Larry Birnbaum: Well, I agree with that as well. I mean, I think a big issue here is going to be making risk understandable to people, and maybe quantifying it better, so that you understand a little bit what the parameters of failure of these machines are when you get into them.
But — and there will be glitches in self-driving cars and everything along the way, and they are going to lead to tragedies — I think one of the great things about software and automation is that these errors, when they occur, won’t be random or ultimately inexplicable anymore; they will be systematic errors. There’s going to be a pinpointable error in the software that can be rectified, and that particular mistake will not get made again. I mean, other mistakes will get made, but that one won’t.
And so I think there’s really a possibility for the system as a whole to learn, much more rapidly than any individual driver or any individual mechanism could, and that will lead to pretty rapid increases in safety. How to make that visible to people, or how the law has to work so that this can actually happen, is really beyond my understanding, but it would be great if it could be.
Daniel B. Rodriguez: Well, both of you have contributed to our understanding, and I am very, very grateful for that.
So to Larry Birnbaum and John McGinnis, thank you for joining us, much more to say and much more to come! So that’s our show for today. Thank you again and thank you for listening. I am Dan Rodriguez signing off from the Northwestern Pritzker School of Law.
Outro: If you would like more information about today’s show, please visit law.northwestern.edu/planetlex or legaltalknetwork.com. Subscribe via iTunes and RSS. Find both Northwestern Pritzker School of Law and Legal Talk Network on Twitter, Facebook and LinkedIn, or download the free app from Legal Talk Network in Google Play and iTunes.
The views expressed by the participants of this program are their own and do not represent the views of, nor are they endorsed by Northwestern University, Legal Talk Network or their respective officers, directors, employees, agents, representatives, shareholders and subsidiaries. None of the content should be considered legal advice. As always, consult a lawyer.
Published: October 19, 2016
Podcast: Planet Lex: The Northwestern Pritzker School of Law Podcast
Planet Lex is a series of conversations about the law, law and society, law and technology, and the future of legal education and practice. In other words, a bunch of interesting stuff about the law.