Google I/O 2018 Keynote Reaction

See some of the major announcements from the first day of Google I/O 2018, and a detailed summary of the three keynotes at the beginning of the conference, in case you missed them. By Joe Howard.


Google I/O 2018 began this week at the Shoreline Amphitheater in Mountain View, California. The I/O conference is Google’s annual opportunity to set a direction for the developer community as well as share with us the technologies and development tools they’ve been working on in the past year. The conference features presentations on Google products and services such as Google Assistant, Google apps like Google Maps and Google News, Chrome and ChromeOS, Augmented Reality, and of course, lots of Android. :]

The conference starts each year with two keynote presentations, the first a feature-focused presentation led by company CEO Sundar Pichai, and the second a developer-focused keynote. One of the first sessions after the two keynotes is What’s New in Android, often called “the Android Keynote”.

  • The opening keynote focused primarily on Google’s Artificial Intelligence (AI) and Machine Learning (ML) advancements, and had recurring themes of responsibility and saving time. The Google Assistant was one of the main technologies discussed, and another recurring theme was using the Assistant to improve your Digital Wellbeing.
  • The Developer Keynote started with a review of new Android features such as App Bundles and Android Jetpack. It then moved on to developer-oriented discussions of Google Assistant, Web apps and running Linux on ChromeOS, an expansion of Material Design called Material Theming, and new Firebase and AR advancements.
  • The What’s New in Android session gave a brief introduction to each of the Android topics announced or covered at the conference, and along the way pointed you to the sessions to watch to learn more.

The most exciting announcements from the keynotes were:

  • Google Duplex: Google demoed the Google Assistant literally making a phone call for you. Google said that they’re “still working” to perfect this capability, but the sample calls they played were jaw-dropping in their naturalness and possibilities. Google is planning on simple use cases in the near future. A use case I could imagine: the Assistant calls a number, navigates the automated system, stays on hold for you, and then notifies you when the person on the other end is ready while telling them you’ll be right back.
  • Computer Vision and Google Lens: A pretty sweet AR demo in Google Maps was shown. The demo overlaid digital content on the real world over your camera feed from within the Maps app, while still showing you directions at the bottom of the screen, making it much easier to find your way in unfamiliar places.
  • Android Jetpack: Jetpack incorporates a number of Google libraries for Android into one package, including the Support Library and Android Architecture Components. Having them all under one name should make the features easier to discover and encourage more developers to use them in their apps (see the first sketch after this list).
  • ML Kit: ML Kit is a Firebase-hosted library that makes it easier to incorporate Google’s advanced ML into your apps, including text recognition and image labeling. There was a pretty sweet demo of grabbing the name of an item off a menu, which you could then search for a description of (see the text recognition sketch after this list). And it’s available for both iOS and Android. ML Kit, Core ML, ARCore, ARKit: hey, what’s in a name? :]
  • App Actions and Slices: These will increase engagement with your app by helping you embed pieces of the app into other parts of Android like Search and Google Assistant results. The options go far beyond a simple icon for your app on the system share sheet.
  • ARCore and Sceneform: The original ARCore API required either using a framework like Unity or working with lower-level OpenGL code. Sceneform promises to make it easier to code AR interactions into your apps (see the Sceneform sketch after this list).
  • New Voices for Google Assistant: ML training has advanced to the point that less work is required to incorporate new voices, and Google’s working with John Legend to create a voice for him. In the future, you may be able to use your own voice or select from popular celebrity voices. Would love to have a Google Assistant voice for James Earl Jones! :]
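
To make the Jetpack item above a bit more concrete, here’s a minimal sketch of the Architecture Components that ship under the Jetpack umbrella: a ViewModel exposing LiveData so that UI state survives configuration changes. The class and property names are illustrative, not from the keynote, and the package names assume the AndroidX artifacts announced alongside Jetpack.

```kotlin
import androidx.lifecycle.LiveData
import androidx.lifecycle.MutableLiveData
import androidx.lifecycle.ViewModel

// Hypothetical ViewModel: holds a counter that survives rotation.
class CounterViewModel : ViewModel() {
  private val _count = MutableLiveData<Int>().apply { value = 0 }
  val count: LiveData<Int> = _count

  fun increment() {
    _count.value = (_count.value ?: 0) + 1
  }
}
```

An Activity or Fragment would observe count and call increment() from a click listener; the same “one umbrella, one name” idea extends to the rest of Jetpack, such as Room, WorkManager, and Navigation.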
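
Here’s a hedged sketch of what the ML Kit menu demo might look like in code: on-device text recognition using the Firebase ML Kit Vision APIs. recognizeMenuText is a made-up helper name, and the exact FirebaseVision calls are assumptions to double-check against the ML Kit documentation.

```kotlin
import android.graphics.Bitmap
import android.util.Log
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Hypothetical helper: run on-device text recognition on a photo of a menu
// and log each block of text found, which could then feed a search query.
fun recognizeMenuText(menuPhoto: Bitmap) {
  val image = FirebaseVisionImage.fromBitmap(menuPhoto)
  val detector = FirebaseVision.getInstance().onDeviceTextRecognizer

  detector.processImage(image)
    .addOnSuccessListener { visionText ->
      for (block in visionText.textBlocks) {
        Log.d("MLKit", "Found text: ${block.text}")
      }
    }
    .addOnFailureListener { e ->
      Log.e("MLKit", "Text recognition failed", e)
    }
}
```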
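
And finally, a rough sketch of the kind of code Sceneform promises: loading a 3D model and placing it wherever the user taps a detected plane, from inside an ArFragment. The names mirror the Sceneform samples but treat them as assumptions until you check the official docs; R.raw.model is a placeholder for a model asset imported into your project.

```kotlin
import com.google.ar.sceneform.AnchorNode
import com.google.ar.sceneform.rendering.ModelRenderable
import com.google.ar.sceneform.ux.ArFragment
import com.google.ar.sceneform.ux.TransformableNode

// Hypothetical setup: place a model wherever the user taps a detected plane.
fun setUpTapToPlace(arFragment: ArFragment) {
  ModelRenderable.builder()
    .setSource(arFragment.requireContext(), R.raw.model)
    .build()
    .thenAccept { renderable ->
      arFragment.setOnTapArPlaneListener { hitResult, _, _ ->
        // Anchor a node at the tapped location and attach the model to it.
        val anchorNode = AnchorNode(hitResult.createAnchor())
        anchorNode.setParent(arFragment.arSceneView.scene)

        val modelNode = TransformableNode(arFragment.transformationSystem)
        modelNode.renderable = renderable
        modelNode.setParent(anchorNode)
        modelNode.select()
      }
    }
}
```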

The rest of this post summarizes the three keynotes, in case you haven’t had a chance to watch them. At the bottom of the post are links to the actual keynote videos on the Google Developers YouTube channel, and I encourage you to watch them for yourself. And then also dive into the session videos on YouTube, once they’re available.

Opening Keynote

The keynote began with a video of little multi-colored cube creatures with some type of glow inside them. Kind of like intelligent building blocks. The video ended with the banner “Make good things together”.

Google CEO Sundar Pichai then took the stage and announced that there were over 7,000 attendees plus a live stream, and that there was a lot to cover. He joked about a “major bug” in a key product: getting the cheese wrong in the cheeseburger emoji and the foam wrong in the beer emoji. :]

He then discussed the recurring Google theme of AI being an important inflection point in computing. He said that the conference would discuss the impact of AI advances, and that these advances would have to be navigated “carefully and deliberately”.

AI

The AI portion of the keynote started by reviewing some key fields in which Google has made advancements:

  • In healthcare, not only can retina images be used to diagnose diabetic retinopathy in developing countries, but the same eye images can also non-invasively predict cardiovascular risk. And AI can now predict medical events like a patient’s chance of readmission. These possibilities seem to be just scratching the surface of how AI and big data can improve the medical industry.
  • Sundar showed two impressive demos of using AI to improve accessibility. In the first, AI disambiguates overlapping voices for closed captioning, helping those with hearing impairments follow situations where people talk over each other. The second added new input methods like Morse code to Gboard, the Google keyboard, helping those who need alternative ways to communicate.
  • Gmail has been redesigned with an AI-based feature called Smart Compose, which uses ML to suggest phrases as you write; you hit tab to accept a suggestion and keep going. The short demo in the presentation was pretty impressive, with Gmail figuring out what you want to write next as you type.
  • Google Photos was built from the ground up with AI, and over 5 billion photos are viewed by users every day. It has a new feature called Suggested Actions: smart, in-context actions for a photo, such as “Share with Lauren”, “Fix brightness”, “Fix document” (converting a photo of a document to a PDF), “Color pop”, and “Colorize” for black-and-white photos. All in all, a very practical example of combining computer vision and AI.

Google has also been investing in scale and hardware for AI and ML, introducing TPU 3.0: liquid cooling in its data centers and giant pods that achieve 100 petaflops, 8x last year’s performance, allowing for larger and more accurate models.

These AI advancements, especially in healthcare and accessibility, clearly demonstrate Google taking its responsibility around AI seriously. And features like those added to Gmail and Google Photos are just two simple examples of using AI to save time.

Google Assistant

Google wants the Assistant to be natural and comfortable to talk to. Using the DeepMind WaveNet technology, they’re adding 6 new voices to Google Assistant. WaveNet shortens studio time needed for voice recording and the new models still capture the richness of a voice.

Scott Huffman came on stage to discuss Assistant being on 500M devices, with 40 auto brands and 5000 device manufacturers. Soon it will be in 30 languages and 80 countries. Scott discussed needing the Assistant to be naturally conversational and visually assistive and that it needs to understand social dynamics. He introduced Continued Conversation and Multiple Actions (called coordination reduction in linguistics) as features for the voice Assistant. He also discussed family improvements, introducing Pretty Please, which helps keep kids from being rude in their requests to the Assistant. Assistant responds to positive conversation with polite reinforcement.

Lillian Rincon then came on to discuss Smart Displays. She showed watching YouTube by voice, and cooking from recipes by voice, on the smart display devices. They’ll also have video calling, connect to smart home devices, and give access to Google Maps. Lillian then reviewed a reimagined Assistant experience on phones, which can now give a rich and immersive response to requests. These include smart home device requests with controls like adjusting temperature, and things like “order my usual from Starbucks”. There are many partners for food pick-up and delivery via Google Assistant. The Assistant can also be swiped up to get a visual representation of your day, including reminders, notes, and lists. And in Google Maps, you can use voice to send your ETA to a recipient.

Google Duplex

Sundar came back on stage to discuss using Google Assistant to connect users to businesses “in a good way”. He noted that 60% of small businesses in the US do not have an online booking system. He then gave a pretty amazing demo of Google Assistant making a call for you in the background for an appointment such as a haircut. On a successful call, you get a notification that the appointment was successfully scheduled. Other examples are restaurant reservations and making a doctor appointment while caring for a sick child. Incredible!

The calls don’t always go as expected, and Google is still developing the technology. They want to “handle the interaction gracefully.” One thing they will do in the coming weeks is make such calls on their own from Google to do things like update holiday hours for a business, which will help all customers immediately with improved information.

Digital Wellbeing

At this point the keynote introduced the idea of Digital Wellbeing, which is Google turning their attention to keeping your digital life from making too negative an impact on your physical life. The principles are:

  • Understand your habits
  • Focus on what matters
  • Switch off and wind down
  • Find balance for your family

A good example is getting a reminder on your devices to do things like taking a break from YouTube. Another is an Android P feature called Android Dashboard, which gives you full visibility into how you’re spending your time on your device.

Google News

Trystan Upstill came on stage to announce a number of new features for the Google News platform, and the focus was on:

  • Keep up with the news you care about
  • Understand the full story
  • Enjoy and support the news sources you love

Reinforcement learning is used throughout the News app. Newscasts in the app are kind of like a preview of a story. There’s a Full Coverage button, an invitation to learn more from multiple sources and formats. Publishers are front and center throughout the app, and there’s a Subscribe with Google feature, a collaboration with over 60 publishers that lets you subscribe to their news across platforms all through Google. Pretty cool!