Mitch had the opportunity to plant a few seeds at the 2017 Clio Cloud Conference in New Orleans. Here’s the video of his presentation.
In the past we’ve written articles about using Google Glass to pick a jury and how wearable mobile technology will some day help us be better lawyers. Technology is changing everything and when you add artificial intelligence (“AI”) and telepresence robots into the mix, things really start getting interesting.
For example, in the not-too-distant future, we envision a free or inexpensive app providing an easy-to-use voice link connected to a searchable AI database loaded with every single state and federal case, statute, opinion and legislative interpretation ever written, used or discussed in a court of law. Adding context to this database will be the fact patterns of all actual reported cases and statistically generated hypothetical cases. All of this will then be supplemented with economic, social and political data, allowing for additional context and trend analysis.
Users of this app would have access to legal solutions and expected outcomes or resolutions unlike anything available before. Legal information and analysis would be pulled from millions of resources and then shared in proper context based upon current state and federal law, local and national politics, economics, and social trends. Answers to legal questions or needs would be instant. Advice and recommendations would be premised on real-time, present-day needs. This cloud-based service will fundamentally change the practice of law.
Think this is a bit far-fetched? Think again. Remember IBM’s supercomputer Watson? You know, the computer that beat humans on Jeopardy. Well, that computing power is now available to consumers and businesses. IBM shares Watson via the cloud, and the development platform and API are available to almost anyone. Here’s how the Watson cloud service works:
We believe the same thing will eventually happen with law. AI will monitor all of a user’s (whether a person or a company of any size) business transactions, calls, emails and other communications. Image-based items will be recognized using deep learning, the approach pioneered by the University of Toronto’s Geoffrey Hinton. In essence, all real-world interactions will be “observed” and placed into the AI database. The end result is that before a person or company is even aware of a legal issue or potential legal matter, AI will alert the user to possible issues and solutions.
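To make the idea concrete, here is a toy sketch of that kind of proactive issue spotting. A simple keyword matcher stands in for the AI described above; the practice areas, phrases, and function names are all hypothetical illustrations, not a real product.

```python
# Toy illustration (not a real AI system): a hypothetical issue spotter
# that scans a user's communications for phrases suggesting legal
# exposure and flags the matter before the user spots it themselves.

RISK_PATTERNS = {
    "employment": ["terminate without notice", "unpaid overtime"],
    "contract": ["breach", "miss the delivery deadline"],
    "ip": ["copied their design", "use their logo"],
}

def spot_issues(message: str) -> list[str]:
    """Return the practice areas a message may implicate."""
    text = message.lower()
    return [area for area, phrases in RISK_PATTERNS.items()
            if any(phrase in text for phrase in phrases)]

# An email like this would trigger a contract alert:
alerts = spot_issues("We may miss the delivery deadline on the Acme order.")
print(alerts)
```

A real system would of course use trained models rather than keyword lists, but the workflow is the same: communications flow in, and alerts flow out before anyone has framed the problem as "legal."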
If that isn’t enough to knock your socks off, in the future lawyers, clients, witnesses, experts and even judges will be using telepresence robots to assist with managing and appearing in litigation and trials around the world.
The fact of the matter is that robotics is the fastest growing industry in the world (as reported by the Littler Workplace Policy Institute), and telepresence technology using robots will save everyone associated with a legal matter money and time. Double Robotics and several other companies already have the technology (see the video below), and with exponential improvement, we see this being a reality within 10 years.
Imagine that for your next court hearing, you log into your app and using your keyboard, webcam and mic you activate a robot at the courthouse that’s been docked in its charging station. You click and digitally lease the robot to travel to your courtroom to appear in law and motion. Your judge is doing the same thing from the bench via her own court issued “judicial” telepresence robot. This technology has unlimited uses and will allow all participants to appear “live” anywhere in the world.
Want to see this telepresence technology in use today? Here you go:
So what do you think? Do you agree with what we believe the future has in store for all of us? Do you see this technology being used in your business or industry?
Talking artificial intelligence, technology and the law with Peter H. Diamandis (founder and chairman of the X PRIZE Foundation) and Gabby Stern (Deputy Managing Editor for the Wall Street Journal Digital Network). Turn up the volume and click the picture to watch the short 2 minute video!
Peter H. Diamandis is an American engineer, physician, and entrepreneur best known as the founder and chairman of the X PRIZE Foundation and the co-founder and chairman of Singularity University. Gabby Stern is the deputy managing editor for The Wall Street Journal Digital Network. Mitch Jackson is a Senior Partner and trial lawyer at the firm of Jackson & Wilson, Inc.
How do you think technology and artificial intelligence will change the practice of law? How about other professional services? Please share your thoughts and comments below. Thanks!
It’s Friday afternoon and my Google Glass has just been delivered. I hope I’m not disappointed.
Glass is simple to set up. No instruction manual needed. I simply download the Glass app, enter my wifi information and power Glass up. I’m instantly connected to the internet. Next, I connect Glass to my smartphone via Bluetooth.
That was too easy.
Following the instructions on a little card (there is no instruction manual), I log into my special Glass portal using my laptop. You can also do this from the Glass app downloaded to your phone. Using my Google account, I add several select contacts to Glass. By selecting a couple of boxes, I also activate and connect Twitter, Facebook, Google+, The New York Times and a few other apps.
At this point Glass allows me to make and receive phone calls, emails, and text messages. It also allows me to interact on social media and even search Google and browse the web (this is a new feature). I call my wife in the next room. She smiles and rolls her eyes.
I spend the rest of the afternoon and evening watching a few short videos provided by Glass to learn the controls. After my first 15 minutes or so I have the “swipe” and “tap” commands down and have become used to the floating screen.
For all you tech guys and gals out there, the screen is a high-resolution display, the equivalent of looking at a 25-inch high-definition screen from about eight feet away. The camera is 5 MP, and videos are shot at 720p. The right-side speaker is a bone conduction transducer and works great. This new generation of Glass comes with an optional right-side ear speaker you can plug into the micro USB port (I haven’t tried or needed it yet). Glass connects via wifi (802.11 b/g) and Bluetooth and has 12 GB of usable memory (16 GB of flash total). It syncs with your Google+ cloud storage via auto backup.
It’s now Saturday morning and time to head out the door to take my son to club soccer. I touch the side of Glass, which turns on the main screen. I say “OK Glass” and “Directions to Starbucks”. I know which Starbucks I like to stop at on my way to the field, but I was curious to see how well Glass gave directions (click the picture to the right to see some of the pics, and click here to watch a short soccer video).
Well, the directions instantly popped up and using voice commands, I had Glass show me a map (Google Maps/Directions) and then read off directions. Perfect.
My Glass came with sunshades, so I used them as sunglasses as we drove over to the field (we’ll save the discussion on driving with Glass for another post). Upon arrival, my friends all wanted to give them a try, so I switched over to “Guest mode” and let them all give Glass a test drive without accessing my personal information. Without exception, everyone smiled and gave Glass two thumbs up.
During the game I used voice commands to take pictures and shoot videos. I also used the small button on Glass to manually take pictures. As with most smart technologies, there’s more than one way to accomplish a task (tap, voice command, tilt your head up 30 degrees…), so I wanted to do some testing.
We lost. That wasn’t fun. But what was pretty cool is walking back into the house and, while watching the second half of the USC football game, noticing that Glass had automatically synced over my home wifi to my Google+ auto backup account. I didn’t need to do a thing. While watching the Trojans dominate the game, I logged into Google+ from my laptop and shared a couple of pics and one video on my social platforms. I could have done this directly from Glass, but I first wanted to see the quality of each image before sharing with the world. I’m happy to report all were pretty good. I spent the rest of the afternoon playing around with Glass and responding to friends’ emails and texts using the Glass voice commands.
Sunday, my son, nephew, father-in-law and I all did a guys’ trip up to Arcadia. For my birthday last August, my family arranged for me to drive a Lamborghini around an autocross course, and so it was finally time to put the pedal to the metal. Of course, Glass would be doing the laps with me.
Driving the Lamborghini around the autocross course was a kick in the pants and Glass worked perfectly. The video image was great and the audio picked up the Lambo’s deep sounding engine as I accelerated down the first straightaway.
Last night I was playing around with Glass and trying to figure out how to have notes or an outline appear. For example, maybe notes to use during an interview or an outline to use during a deposition or jury selection.
What I ended up doing is typing or copying an outline into Evernote using my laptop or desktop and then syncing with Glass. There’s a new special button in the Glass Evernote app that lets you do this. Almost instantly the outline appeared as a “Card” in Glass that I could easily read. I then copied part of a bio from Wikipedia of someone I’ll be meeting later this week and shared it via Evernote to my Glass. I’ll be referring to it just before my meeting. I think I’ll be using this function quite often.
One thing I’ve noticed is that because there is no physical camera between you and a subject, people seem to be more comfortable during photos or videos. They’re looking right at your eyes and not at the camera you’d otherwise be holding slightly to your left or right. It may not sound like a big deal, but the experience is different than when using a camera.
Another function I’m using a great deal is voice recording to take, save and share notes. Very handy indeed.
Upcoming Jury Trial
I may be picking a jury and starting trial in 2 weeks using Glass. The judge is OK with me using Glass in court, so I’ll share an update if we actually find a courtroom and go forward with trial. I’ll be using the outline functions I mentioned above and recording my voir dire for later review and use.
Well, that’s a general overview of my first 48 hours of Glass. I hope I was able to give you a taste of what Glass can do and look forward to sharing more updates as I learn more about this new wearable technology.
Just in case you’re wondering what I think about Glass, I’m giving it two thumbs up and a 12 out of 10. It’s pretty damn cool.
By the way, if you want to take a look at some of the pics and videos I mentioned, I’ve shared them on my Google + page here- http://plus.google.com/+mitchjackson
[note- this article appeared in “The Droid Lawyer” on Monday, November 11, 2013]
Several related articles…
You stand up, thank the court, and walk over to the lectern. On your way, you say the command, “jury on” and your Google Glass comes to life. Instantly in the top section of your screen, a diagram with each juror’s name is displayed. On the bottom half is your voir dire outline. Only you can see what’s being shown.
While you make direct eye contact with your potential jurors, the video function of Google Glass goes live, and your entire voir dire is captured for later review and analysis. At the same time, the live password protected video stream is sent via Wi-Fi back to your office for young associates to watch and learn how to pick a jury. At a separate location on the other side of the country, jury consultants are also watching and listening to each question and answer.
As you look at juror number 1 sitting in front of you to your left, you start the dialog by asking, “Ms. Jones, is anyone in your family a doctor?”
Ms. Jones’ profile pops up on the small screen above your eye. In this case, you’ve activated a third-party app that uses facial recognition to identify people in your field of vision. It connects you to a unique trial lawyer database that, depending on your subscription, accesses a system similar to Lexis or Westlaw.
Using your touchpad placed on top of the lectern, you select “social” to switch from the current database to “Glass Social Summary”. An easy-to-read display appears, showing Ms. Jones’s latest social media and blog posts.
As you ask your questions, you are immediately made aware that this juror’s cousin is a cardiologist at UCLA Medical Center. When Ms. Jones answers your original question with a “no”, you follow up with “How about extended family members?” Ms. Jones smiles and happily reports that her cousin is a doctor at UCLA. The questions and dialog continue.
A text notice pops up on your private Google Glass screen while you’re talking to Ms. Jones. The jury consultant who is also watching the live video feed doesn’t like what she’s seeing or hearing. She sends you the term “bump”, indicating you’ll eventually want to get rid of this juror with a challenge.
Rather than looking down and tapping your touchpad, you say “switch” and your Google Glass accesses the new court authorized jury person database (CAJPD). As with everything else at the courthouse, the paper juror information lists are a thing of the past. Today, counsel has limited access to CAJPD. Home, work, and prior jury service details instantly fill your confidential screen. Because of the information made available to you through this technology, you spend less time than you normally would with this particular juror and move on to the next.
During the process, your designated legal associate back at the office shares thoughts, feedback and even a few questions via your Google Glass screen. You use several of the suggestions to complete your voir dire.
Before you sit down, you give the “final step” command. Google Glass runs a final program that automatically initiates an almost instant “accuracy” search for each person you questioned during jury selection. During your voir dire, everything that’s been discussed has been automatically recorded and indexed by the related Google Glass “index” service. In this case, you’re looking for inaccuracies between what’s been represented to you by a prospective juror and what your databases show.
Several blue and green hits come back. One hit is marked bright red. Earlier, Mr. Green, juror #11, indicated he had never been in trouble with the law. The red mark tells you his memory is, shall we say, a bit lacking today. When asked by the judge at the beginning of voir dire, this gentleman neglected to acknowledge any prior criminal convictions.
Your automatic Google Glass “index” conflict search reveals that in truth, Mr. Green has been arrested and convicted for driving under the influence more than once: once in California and once in Nevada. Another “bump” message pops up on your Google Glass screen from your consultant. You think to yourself, “Really? Didn’t think I’d figure that one out on my own, did you?”
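At its core, the “accuracy” search described above is a cross-check: compare what each prospective juror represented in court against what the databases report, and color-code any conflicts. Here is a minimal sketch of that idea; the juror data, field names and color codes are all hypothetical, since none of these databases exists yet.

```python
# Hypothetical sketch of the "accuracy" cross-check: compare a juror's
# in-court answers with database records and color-code each result.

def accuracy_check(stated: dict, records: dict) -> dict:
    """Mark each answer 'green' if it matches the record, 'red' if not."""
    report = {}
    for question, answer in stated.items():
        on_file = records.get(question)
        report[question] = "green" if on_file == answer else "red"
    return report

# Mr. Green's representation vs. what the (hypothetical) database shows:
juror_answers = {"prior_convictions": "none"}
database_records = {"prior_convictions": "two DUI convictions (CA, NV)"}

print(accuracy_check(juror_answers, database_records))
```

A “red” entry in the report is exactly the bright-red hit in the scenario: a discrepancy worth a follow-up question or a challenge.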
You eventually sit back down at the counsel table with a much more thorough understanding of exactly who your potential jurors are. Back at the office, your new young associates now have a better idea of what real jury selection looks like. Your jury consultant on the other side of the United States is also sending “Glass Updates” (similar to email or text) to your Google Glass screen to help you make some final decisions after opposing counsel finishes his voir dire.
You did all this while maintaining eye contact with your jurors and never once looking down at your notes. Because of what has become a standard pre-trial motion in limine to exclude the mention that either counsel is using Google Glass, your potential jurors were not aware that counsel was taking advantage of this technology.
The entire jury selection process took less time than usual but provided a more meaningful dialog. All counsel, parties, and the court now have a much better understanding of who is sitting in the jury box. And that’s a good thing. After all, isn’t that what jury selection is really all about?
Jon Mitchell “Mitch” Jackson enjoys combining law, technology and social media to hack and improve our legal system. He has been a trial lawyer for 28 years and was a 2013 California Litigation Lawyer of the Year (CLAY Award) and 2009 Orange County Trial Lawyer of the Year. When he’s not trying cases, Mitch uses social media and technology to help good attorneys become great trial lawyers and to show everyone (not just lawyers) how to communicate better. His law firm website is JacksonandWilson.com and his communication tips blog is MitchJackson.com. Outside of law and the courtroom, Mitch enjoys interviewing people from around the world who are disrupting industries and influencing change at Human.Social.
Mitch can’t wait to use Google Glass during trial. Also please note that several of the tools and databases referenced in the post have not yet been developed. But they will :-)