Over the past 20 years or so, lawyers have been simultaneously intrigued and frustrated by speech recognition technology. While different experiences yielded different results, the technology was never efficient enough to be practical. Over time, speech recognition fell off the radar for most lawyers. Recently, with Siri and Google Now, smartwatches, and improved cloud computing, dictation technology has improved dramatically and is more efficient and accurate than ever. Has it made an unexpected return?
In this episode of The Kennedy-Mighell Report, Dennis Kennedy and Tom Mighell revisit speech recognition technology, look at the ways people are dictating text today, and discuss arguments for and against using it in your law practice. Mighell discusses how faster speeds and improved broadband for Siri, Google Now, Cortana, and other tools have made speech recognition a hot topic. He personally uses dictation most often when sending texts from his smartwatch, but he runs into connectivity issues. Kennedy has recently tried dictation again with relatively successful results, but he would not use it to draft full documents. He discusses lawyers' interest in full-time dictation using an application like Dragon Dictation, and how younger lawyers come into the workplace able to type faster and more efficiently than any speech recognition program. Both Kennedy and Mighell agree that the success of speech recognition technology depends mostly on social and cultural acceptability.
In the second part of the podcast, Kennedy and Mighell discuss the apparent failure of the Google Glass launch and the emergence of Microsoft HoloLens, a virtual reality advanced holographic computing platform. Where is the future of wearable technology headed and what will be the biggest influences? As always, stay tuned for Parting Shots, that one tip, website, or observation that you can use the second the podcast ends.
Special thanks to our sponsor, ServeNow.
Kennedy-Mighell Report: Revisiting Technology: Speech Recognition – 2/3/2015
Advertiser: Got the world turning as fast as it can? Hear how technology can help – legally speaking. With two of the top legal technology experts, authors, and lawyers: Dennis Kennedy and Tom Mighell. Welcome to the Kennedy-Mighell report, here on the Legal Talk Network.
Dennis Kennedy: And welcome to episode 145 of the Kennedy-Mighell Report. I’m Dennis Kennedy in St. Louis.
Tom Mighell: And I’m Tom Mighell in Dallas.
Dennis Kennedy: In our last episode, we talked about the current state of collaboration tools and technologies. In this episode we want to kick off a new and occasional series of fresh looks at old technologies that many might have forgotten – or they might have just slipped your mind – but now, these technologies might actually be ready for prime time; and we’re going to start with speech recognition. Tom, what’s on our agenda for this episode?
Tom Mighell: Well, Dennis, in this edition of the Kennedy-Mighell Report, we will be discussing the current state of speech recognition technology, how we use it, and some arguments both for and against using it in your law practice. In our second segment, we're going to look at the apparent demise of Google Glass and the launch of the Microsoft HoloLens – and we'll be saying both "holo-lens" and "hollow-lens," since we don't know which it is, so we'll just use both of them. And as usual, we'll finish up with our parting shots, that one tip, website, or observation that you can use the second this podcast is over. But first, let's get started on our first topic, and that's speech recognition. Back when we recorded our resolutions episode a couple of weeks ago, one of our recommendations was to revisit something old and see if it's new again – see if it makes sense to use something now that may have been before its time or just not ready for prime time. I think it's fair to put speech recognition in that category, although we've been talking about speech recognition for years. There have been products on the market for years that actually do a pretty good job of turning our words into text. But I think the rise of tools like Siri, Google Now, and Cortana on Windows, and on mobile devices in particular, has really made speech recognition an interesting topic again – at least for me, anyway. Dennis, what brought speech recognition back onto your radar?
Dennis Kennedy: It's actually a much simpler use than even what you're referring to. I was doing some email and some texting, and with longer emails I always have trouble with the touchscreen keyboards, especially on my Android phone. And I saw this little icon that looked like a microphone on the keyboard, and I hit it, and I dictated a sentence and it got it perfectly; then I dictated another sentence and it got it perfectly. So I found over a couple of months that, on a fairly regular basis, I'm using speech – on both my iPhone and my Android phone – in email and in instant messaging, and it works really well. It made me start to say, "Hey, what is happening in the world of speech recognition that it seems to be working so well?" In the small context of emails and texts, it's actually quite useful, and I can do things a lot quicker without it taking a lot of time. It's a lot easier and quicker than using keyboards, so that's actually what got me thinking about this topic, Tom.
Tom Mighell: Well, I take a different approach to it. I like speech recognition; it's been hit and miss for me – and we'll get more into some of the specifics a little bit later – but I've found that I can type faster than I'm willing to wait, and I'm thinking more specifically of using speech recognition tools to dictate documents, to dictate longer things. I find that I really prefer to use a keyboard, because the time it takes me to type and then correct things using a keyboard is usually a lot less than it would take to correct something if I make a mistake while I'm talking. There will be some times when I prefer to talk text messages, or talk emails if I have something long to say and I'm on my phone. But I will say, as an Android user – and frankly, now, iPhone users have this too – using something like SwiftKey to just swipe my messages makes it so much easier to type things out. So I think it really depends on how you plan to use it and what makes you most comfortable. I don't want to be ageist here, but I would think that generations that have been less comfortable on a keyboard, less comfortable working on things with keys – especially on a virtual screen – are probably more likely to take to speech recognition, whereas I've seen younger generations just fly with their fingers on a virtual keyboard. I think that's because they're natives to that whole environment, where we're more natives to the physical keyboard, and it's not always a comfort issue. But I'll come back, Dennis, and ask you this question, because we kind of put this in the category of something that has been around for a while, but something changed to make you want to give it a fresh look. So is it just the fact that you now find yourself texting with your voice? Is that what made you want to take a look at it, or is there something more than that?
Dennis Kennedy: Well, I think that's part of it, because it brought it back to my attention, but I think it's an example of how technology changed to make something that only sort of worked in the past – maybe 90-95% of the time, as people say. It seems like it works a whole lot better all of a sudden when you look at it, and I think it's an illustration of a different way to think about the Cloud and computing in the Cloud. It's also an interesting approach, as you were saying, Tom: you've got the different swipe tools, some people can type really fast on the virtual keyboards, some people might like to dictate. It shows how we can personalize our devices, so whatever works best for you is now available. To me, what's interesting is how all of this can be done on the backend in the Cloud. Because we have broadband, we have fast connections, we have all this computing power up in the Cloud, the delay doesn't seem that much, and the increased accuracy I'm seeing is a reasonable exchange for a little bit of delay while what we're speaking gets put up onto the internet, into a data center where it's processed and sent back. There's also a lot going on in the way of learning in connection with speech recognition that can be done in these big server farms that you couldn't do on your computer back in the old days. So I think it's a good illustration of something really basic that's been around – it seems like lawyers have been experimenting with speech recognition for 20 years – but that Cloud approach of getting the processing of the speech off of the device, or off the computer, is really starting to make a difference.
Tom Mighell: Well, that Cloud approach certainly isn't anything new; that's how it has always worked. I think what has made it better is, like you said, the faster speeds and the wider availability of broadband. I remember when Siri first came out, talking to Siri on my iPhone and just sitting and watching that thing spin round and round and round, waiting for Siri to get the answer because we were waiting for the Cloud to deliver it. And the other problem was not just a matter of broadband connectivity; it was a matter of demand on the servers, and I think that has improved. Siri's ability to recognize things fast has improved tremendously. I tend to use Google Now for dictating and things like that, and it's very fast. I will say, though, connectivity is not always as perfect as you'd like it to be. If I'm going to dictate something, I've actually found myself dictating using my smartwatch more than anything else. It happens a lot while I'm cooking at night: if I'm getting dinner ready and a text message comes in, it is so much easier to just look at my watch – it'll show me the message and give me an option to send a response, and I say yes, and then I can dictate the response to the watch, send it off, and never even touch my phone – which is tremendously helpful. But what's interesting about that is that sending a text by watch depends not only on connectivity to the servers, but on connectivity to your phone. So if my phone is somewhere other than right next to my watch, I've gotten messages in the past that say, sorry, we cannot send your text because the watch is disconnected from your phone. So there still are some issues I find with the speech recognition that I use, whether it's Siri or Google Now; I still notice that proper names are not recognized as well as I would like them to be.
Saying things in exactly the right way to get what you want is a whole lot better, but I think there are still some problems with that. What's interesting is that we really are talking more now in terms of tools like Siri and Google Now, but we haven't talked about the 800-pound gorilla of speech recognition, and that's Dragon Dictation. Dragon Dictation is from the company that actually partnered with Apple to develop Siri, and I don't hear about lawyers using it that often. I don't know if that means it's present everywhere and everyone's just used to using it, or if it has fallen off as a tool that lawyers are using. I just don't read about lawyers using it that often – although I know some of us do; our friend Jim Calloway uses it religiously – but I just don't hear that the adoption rate is as high as I thought it would be at this point in time. Is that your sense, or am I just out of touch?
Dennis Kennedy: My sense is that the interest in what I would call full-time dictation – that Dragon Dictation approach where you're trying to dictate letters, documents, that sort of thing – and probably the usage of it, has diminished over the years. Part of that is a generational thing, because I've talked to younger lawyers; in fact, someone today gave an example of a firm he was at that's known for an experiment where they didn't want to give associates a keyboard. They looked for ways not to give associates computer keyboards, because they thought that dictation was the most efficient way to do things, and they didn't want associates even to have a keyboard to type on. So a lot of younger lawyers just don't get the whole dictation thing when it comes to documents. And that, to me, is an interesting aspect of why speech recognition is now coming back to the forefront. I see it as a learning technology, in a way. All of this fine-tuning and predicting that's going on is part of the reason it's becoming more accurate – that look-ahead, guess-the-next-word thing you see on your iPhone and your Android is getting better, and I think those same principles are being applied to speech recognition. For example, I dictated a text message to my daughter, who wanted to make sure I was on Viber, which is a communications platform – and this is an example where I think there's learning within the system; we were talking about names that don't do so well. Viber got it exactly right both times I used it in a text message, and I was impressed, because I could see that there was learning happening on the backend. I've had a great experience, Tom, as you have, with Google Now. I haven't done that much with Siri, but it's been good lately.
When I think about speech recognition in the old days, you sort of had this holy grail that you could dictate a whole legal document. But even at 95% or better accuracy, there were going to be a lot of typos and things you had to correct – just do the math on what a 95% success rate means – and I think that's an unrealistic approach. What's happened over time is you start to look at things and say: I'm probably not going to dictate a whole legal document word by word, but if I can use voice commands to insert paragraphs and to trigger certain things happening in my forms, that could be a great way to do things. I think the way people have started to use voice commands in cars has also done a lot for this. In my car, I can say a name, and it finds the name in my contact list and calls that number. So it's that smallness of the task, and the accuracy within it, that speech recognition has become really good at. I don't expect it to work well for everything, although there are some examples out there if people do some looking. There's an article a journalist wrote – basically a magazine article that they dictated on their iPhone as they were walking down the street – and it's not that bad; it's amazingly good, but there are a lot of mistakes in it. So you look at the reasonable expectations of what voice can do well in those discrete tasks. I don't know, Tom. For me, the holy grail of speech recognition would be that I'm giving a presentation, I record it, and a speech recognition program produces a perfect transcript at the same time – that would be one thing. The other is that I'm riding my bike and I have a great idea and I yell it out to my watch, or whatever, and I capture those things. I suspect, given what we know about speech recognition, that neither of those things is going to be perfect at this time.
But I think in other places – and this is why speech becomes so interesting – I think it really can work well.
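Dennis's "do the math" point about a 95% success rate is worth making concrete. Here is a quick back-of-the-envelope sketch; the word counts below are illustrative assumptions, not figures from the episode:

```python
# Back-of-the-envelope error math for dictation accuracy.
# The word counts and accuracy rate below are illustrative assumptions,
# not figures from the episode.

def expected_errors(word_count: int, accuracy: float) -> float:
    """Expected number of misrecognized words at a given word accuracy,
    treating each word as an independent chance of being wrong."""
    return word_count * (1 - accuracy)

# A short text or email is tolerable to proofread...
print(expected_errors(100, 0.95))    # ~5 words to fix in a 100-word email

# ...but a full legal document is a different story.
print(expected_errors(3000, 0.95))   # ~150 words to fix in a 3,000-word document
```

At text-message scale, a handful of corrections is no big deal; at document scale, the same accuracy rate produces more cleanup than most lawyers will tolerate, which is why the discussion keeps coming back to right-sizing the task.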
Tom Mighell: Well, I'll tell you, based on my experience, of those two examples you just gave, the second one – yelling out an idea to your watch – is actually a pretty easy thing to do; I could do that on my phone if I want to. I can wake it up and have it save a message or a note or anything I want, and it's using Google's speech recognition, which is really pretty good. As to dictating long presentations and transcribing them, that's something I'm not seeing a lot of people doing at this point, and I think the technology is probably not where it needs to be for that. But I'm going to come back and say: alright, these tools are good; we tend to be using tools like Siri and Google Now and voice commands more often. I'm just going to throw it out there and ask you, and ask the audience: how often do you actually see people using them? How often out in public – or is this a private thing that we do? I very rarely see people say, "Okay, Siri." I feel very awkward sitting in my car saying, "Okay, Google." One of the nice things about having a Moto X phone – and frankly, right now I just said "Okay, Google," and my phone woke up, because the Moto X actually can do that, and it's recording my voice as we speak, which is something I didn't expect to be happening. But that's a nice feature: you can actually record your own wake phrase; you don't have to say "Okay, Google," you can say something entirely different. If you're a real nerd and you like Iron Man, you can say "Jarvis," or if you're a Star Trek fan, you can say "computer." You can make it whatever you want, which I think is a nice way to look at it. One way that speech recognition is really triumphing now – and this is a recent story, a recent thing that I am just dying to get overseas so I can try it – is with Google Translate.
They are coming much closer to the idea of a universal translator, because they have updated the app so that you can actually have a conversation with someone in another language and just hand the phone back and forth without having to switch between the languages. You can set it in advance and say, I'm getting ready to speak to a German person; you speak in English and the English is translated to German, the German person speaks in German and it's translated to English, and you can go back and forth and have an actual conversation. I think that's a huge leap forward in terms of the ability to transcribe and translate that information. And they weren't satisfied with just the spoken word; they've also finally incorporated the Word Lens app that they bought a while back, so you can point your phone at any text and it will automatically translate that text into English or whatever language you want – I don't know if there's a limit on the languages. But just being able to point it at a sign and know what that sign says is pretty cool. Word Lens, when I used to use it on my iPad, was a great app. Dennis, where do we think we're headed with speech recognition? Is it more ready for prime time? Do we like it? What do we think about its future?
Dennis Kennedy: I think that when you go to this notion of what I would call "right-sizing," and also the jobs-to-be-done notion – what are you hiring speech recognition to do, and being reasonable about that – I'm really intrigued by the potential. I know there's a lot of buzz around something called the Amazon Echo, which, as best as I can understand, seems to be a voice command device that will come out later this year. And then there's the question of social context – but remember that we thought nobody would talk on their phones in public, that that was too weird. We also thought that people wouldn't wear earphones and headphones all over the place, and they do. I sort of think the car is what helps people with this, because although you say it's weird to talk to a computer, or to say, "Okay, Google," in your car, the fact is we see in zillions of movies and TV commercials – and I see on the road more often than I can count – people singing along at the top of their lungs in their cars. So I think the car is where you'll get it, and there will be some other things. Translation was another example of a very specific job where I think this can work; whereas if you say, I'm going to dictate my next novel, that's a harder thing. And we also know there are some huge issues, as we've learned even from the early days: if you're in a noisy room, there's a lot else going on, and things obviously aren't going to work as well, because there's a lot that has to be overcome. But for certain things, especially where you can teach it, or the technology starts to learn, I think there's a ton of potential. I'm finding myself surprisingly thinking that this could be a really hot area over the next 2-3 years. There are some social and cultural hurdles to talking in public to a device, but that may not be the way people really use it.
I've also looked at some of the apps, Tom – like what we do when we order up an Uber. I think it might be easier if you could just use speech to order up that Uber car.
Tom Mighell: Well, I think we're getting there. My take is a little more reserved than yours: I think speech recognition is a lot better than it used to be, but I still think there's room for improvement. I don't know that I'm quite there with you that it's going to be hot over the next 2-3 years, but I'm willing to wait and see, and I hope that's the case, because I think it's really pretty interesting. But I really do agree that speech recognition is only useful if you have a job to be done. If there's something you use it for – I tend to use speech recognition for basic commands, for text messages, for dialing the phone in the car. My phone will let me start another app with my voice, but why would I want to start an app with my voice when in two seconds I could open it with my finger? It's a tradeoff, and again, it's going to be a very personal decision, which I suppose makes it an interesting technology, because it's something different to everyone.
Dennis Kennedy: I think that to the extent we continue these types of episodes – and I think we will – the advice is always going to be for people to just go back and try some things, and try some simple things. Even try dictating one paragraph in an email, or just try a couple of things along those lines. So that would be my wrap-up: hey, this would be something to go back to and try.
Tom Mighell: Or even just press that microphone button on your iPhone and do a text message by voice. It’s pretty easy and it’s a cool thing to do. Before we move on to our next segment, let’s take a quick break for a message from our sponsor.
Advertiser: Looking for a process server you can trust? ServeNow.com is a nationwide network of local, prescreened process servers. ServeNow works with the most professional process servers in the industry, connecting your firm with process servers who embrace technology, have experience with high-volume serves, and understand the litigation process and rules of properly effectuating service. Find a prescreened process server today. Visit www.ServeNow.com. We're glad you're listening to Legal Talk Network. Check us out on Twitter, Facebook and LinkedIn too.
Tom Mighell: And now let’s get back to the Kennedy-Mighell Report. I’m Tom Mighell.
Dennis Kennedy: And I'm Dennis Kennedy. Google Glass was launched with much fanfare a few years ago, with mixed success at best, and Google pulled the plug on it recently. At almost the same time, Microsoft was announcing its new HoloLens, described as a virtual reality, advanced holographic computing platform. What it really is, to go back to the old days of virtual reality, is basically a wrap-around device that covers your eyes and lets you see things in 3D. It's grabbed a lot of early attention, Tom. So what are your thoughts about the end, if it is the end, of Google Glass, and the appearance of this new HoloLens?
Tom Mighell: Well, you added an important disclaimer right there at the end when you said "if it is the end" of Google Glass, because I don't know that we can really be sure of that. And maybe for our audience's sake, let's make sure everybody knows what Google Glass is. Google Glass came out two years ago; Google wanted to introduce a wearable that fits like a pair of glasses, but with a small monitor around the right eye where you can see a small screen and display. You can see a map if you need one, you can take pictures with it, you can take video with it, you can receive text messages or read emails. It was designed to go where you go and see and do what you're doing. And I think the words coming out of Google recently didn't really say that they're putting an end to it. They said they're pulling the product they have right now, but I'm not totally convinced that they are done with Google Glass. Many people believe – and I tend to agree – that Google Glass was a wrong-device-at-the-wrong-time kind of thing. I don't think people were ready to accept something that goes on your head until they got used to the idea of wearable technology in general, and that's what's happened with smartwatches. You had people mocking Google Glass; they would call people who wore Google Glass "Glassholes." Theatres and other venues were prohibiting the use of Google Glass because of its ability to record people or movies. People were getting traffic citations because they were using Google Glass while driving. There were all sorts of privacy concerns about whether people would be recording you with their Google Glass and you wouldn't even know about it. Now I think things are finally happening in the right order; I think smartwatches were a better, more logical next step in terms of wearables. The idea of having that Dick Tracy watch is something we've been thinking about for a long time.
I think it's intriguing to more people than having something around your head, and I think it makes a lot more sense to be accessing technology on your wrist – whether it's a watch or a fitness tracker – than on something that's sitting on your head. As a result, I think smartwatch sales are pretty hot right now; they're selling pretty well, and that's even before the Apple Watch comes out, at which point I think they're going to go crazy. I'm going to step back now and say that that's all I have to say about Google Glass. I'll let you start on the HoloLens, because I think you may have a different take on it than I do.
Dennis Kennedy: Well, as I read about the HoloLens, what became interesting to me was the comparison with Google Glass. Google Glass is this idea that you're wearing it all the time and it's feeding you information, and maybe it's doing these other things. That all-the-timeness about it was maybe a drawback for such an early-stage product, whereas the HoloLens is really designed to be an immersion experience where you're wearing it for a much shorter period of time to actually do something; it's almost as if you're focused on something. As I started to think about that and the 3D augmented-reality notion of it, it became really interesting to me. I could see this type of approach working if it was connected to what somebody was seeing in a microscope, or in connection with a telescope. Some people talked about plumbing repairs or other things like that, where you would be able to see what was going on, with instructions, and still maybe have some way to actually do those things. So that immersion aspect of it, the shorter time frame, and the fact that you can see this hologram notion in 3D – to me, the prospect is that it's put together with information, directions, enhancements, background, that sort of thing. So it really becomes both a virtual reality, in the early-days sense of seeing a 3D world, and an augmented reality, where what you're seeing is potentially overlaid with other information. That's what I think is really exciting about this. Again, with all first-generation stuff, you never know exactly how it works; but boy, there was a lot of excitement around that Microsoft announcement.
Tom Mighell: Well, I think in some ways it's unfair to compare these two devices, because they're intended to do completely different things. I'm not sure I completely agree with you that Google Glass is designed to be worn all the time – when I compare it to something like a smartwatch, which is something that could be worn all the time, that seems more natural to people: they would wear a watch all the time and have that information available to them. I think wearing something like Google Glass all the time would be a little awkward, but I view it differently. I view it as being more of a practical device. It can be part of your daily activity, because you can navigate with Google Maps, you can see text messages or emails, you can take pictures or video, you can search Google for information. It's a lot less obtrusive. You're never going to leave the house or the office with a HoloLens on your head; it has a specific place and it's going to stay in one location. I will agree that this is a giant step forward from what we've seen in other types of virtual reality tools, such as the Oculus Rift. When I see other virtual reality immersion technologies, people mostly use them to visit worlds where they can see things, or to play games – purely for entertainment purposes. What I'm really intrigued by here is the ability to actually use it as a practical device to talk to people. Although, frankly, I was watching the video for it, and they were showing a person on a Skype call with someone giving a demonstration, and my first thought was: was that just a one-way Skype video call? Because if you're wearing this mask while it's happening, is the other person watching you wearing this big set of goggles? I don't really understand how that particular thing would work if you're going to use Skype.
For me, watching a Skype video call within it would be pretty cool, but I'm just not sure how that would work. I think it's very cool. I see acceptance for this traveling along the same lines as other virtual reality tools, with slightly more acceptance because it does have the potential for more practical use and applicability, for business or for getting things done, frankly.
Dennis Kennedy: Now it’s time for those parting shots, that one tip, website or observation that you could use the second this podcast ends. Tom, take it away.
Tom Mighell: So I'm going to recommend two apps for the Mac; if you're a Mac user, I think they're both great. One of them I've used forever, but it only recently got a Mac app, and the other one I've just started using. The first is called Call Recorder for Skype. I wanted to find a very simple-to-use app that took good recordings of Skype conversations, and Call Recorder is about as simple to use as it gets. It's a simple download, and it's very unobtrusive. The minute you start an audio call, you just press the record button; it automatically saves the recording to a file on your computer and gives it to you in a bunch of different formats if you want them. Very simple, easy to use – it worked flawlessly the first time I tried it. The second app is LastPass. I've been using the browser extension forever, and I've been using it on my phone forever. I know that for a long time, 1Password has had the monopoly on the Mac desktop and on Mac devices, and I do like 1Password a lot, but I prefer LastPass, and LastPass as a Mac app does everything the other apps do. It allows you to open sites automatically in a browser just by clicking on entries in LastPass, and it lets you change and manage your passwords, your secure forms, your logins, and other things easily. It's a free download, unless you're a premium subscriber, in which case it's $12 a year to get the mobile apps. But LastPass is a great app to use on both Mac and PC.
Dennis Kennedy: Well, I’ve always been interested in usability, and that’s been enhanced by the fact that I’ve worked with people, or at least sit by people, who are in the usability area; it’s amazing how carefully they think through how people use things and how you can make it easier and more natural for people to do things. So I had the occasion to actually hear one of the great usability experts, a guy named Luke Wroblewski, speak about an idea that’s really simple but changes the experience you have, especially on mobile devices, in such a positive way. He’s since written about it on his blog, and we’ll have the URL in the show notes; the post is called Showing Passwords on Log-In Screens. We’ve always typed in passwords and seen only asterisks, so you can’t tell what you’ve typed. If the login fails, you don’t know whether you used the wrong password or just mistyped it, because all you see are the asterisks. So Luke says that in most cases, especially when we’re working from home, we know that nobody’s going to be looking at what we’re typing on our screen, or we know how to protect the screen in case somebody is around us. His notion is that it’s better to either give you the choice or default to actually showing what you’re typing, and then give you the option to turn the asterisks back on if you want to make sure nobody sees it. And as we use longer and longer passwords, typing them in, especially on your phone, is difficult, and if you type one in wrong, it’s a real pain. It’s really amazing when you see this, and login screens where you can elect to show the password as you type it on a mobile device are popping up more and more. It just makes your life so much easier, so it’s an example of one of the simple little changes the usability people come up with that really makes a difference.
Tom Mighell: I’ll tell you the one area where it doesn’t work. I agree with you, and I like the option to be able to see it, but I think Microsoft has adopted the wrong way of doing this. On my Surface tablet and in other Windows interfaces, there’s a little closed eye next to the password, and you press the closed eye to open it and see the password. But on Windows devices, you have to press down on the eye and hold it, and when you lift your finger, the password goes back to asterisks again, which I think is completely useless. It doesn’t help at all in being able to fix the password or do anything with it. So Microsoft, take Luke’s advice and do something about showing passwords on login screens.
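[Editor’s note: the show/hide-password pattern the hosts describe comes down to flipping an input field between masked and visible states. The sketch below is a minimal, hypothetical illustration of that toggle behavior; the function name is illustrative and not from Luke Wroblewski’s post or any specific library.]

```javascript
// An <input type="password"> masks characters as asterisks or dots;
// switching its type attribute to "text" reveals them. A show/hide
// control (like the eye icon Tom mentions) just flips between the two.
function togglePasswordVisibility(currentType) {
  return currentType === "password" ? "text" : "password";
}

// In a browser, a persistent toggle (rather than Microsoft's
// press-and-hold behavior) would be wired to a click handler, e.g.:
//   eyeIcon.onclick = () => { input.type = togglePasswordVisibility(input.type); };
```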
So that wraps it up for this edition of the Kennedy-Mighell Report; thanks for joining us on the podcast. Information on how to get in touch with us, as well as links to all the topics we discussed today, is available on our show notes blog at TKMReport.com. If you like what you hear, please subscribe to our podcast in iTunes or on the Legal Talk Network site. You can get to the archives of all of our previous podcasts in both places as well. If you have a question you want answered, or a topic for an upcoming podcast, please email us at [email protected] or send us a tweet. I’m @TomMighell, and Dennis is @DennisKennedy. So until the next podcast, I’m Tom Mighell.
Dennis Kennedy: And I’m Dennis Kennedy and you’ve been listening to The Kennedy-Mighell Report, a podcast on legal technology with an internet focus. Help us out by telling a couple of your friends and colleagues about the podcast. And I’m hoping to see some of you at Legaltech New York this year, which you may be attending shortly after this podcast comes out. Look me up.
Advertiser: Thanks for listening to the Kennedy-Mighell Report. Check out Dennis and Tom’s book, The Lawyer’s Guide to Collaboration Tools and Technologies: Smart Ways to Work Together, available from ABA Books or Amazon. And join us every other week for another edition of the Kennedy-Mighell Report, only on the Legal Talk Network.