What I learned from Google I/O 2017

A few days ago I attended Google I/O 2017 in Mountain View and I would like to share what I learned.

Day 0 Parties

As always, there were many parties the night before I/O. I went to two: the Women Techmakers dinner with over 1,000 other women at Garfield Park near the Googleplex, and the Android Study Group (ASG) mixer at LinkedIn.

The best part of Google I/O, besides the incredible learning, was hanging out with friends.

Day 1 Keynote

On the first day of I/O, Sundar Pichai opened the Google keynote, which can be summed up in one message: from mobile first to AI first.

We heard a lot about AI and machine learning, from TensorFlow and Cloud TPUs to AutoML, and how Google is democratizing AI. Check out http://google.ai to see how Google brings the benefits of AI to everyone. The site also has a sign-up link if you are interested in trying out Cloud TPUs.

There were quite a few new product announcements around computer vision. Google Lens uses machine learning to understand what the camera sees, and it will first ship with Google Assistant and Google Photos.

Google Photos added three new features, all of which leverage the power of machine learning and computer vision:

1. Suggested sharing — identifies who is in your photos and reminds you to share them.

2. Shared libraries — lets you share photo libraries with your family and friends.

3. Photo books — helps you create photo books and order physical prints.

Google Home will be able to provide visual responses by connecting to our screens: phones, tablets and TVs.

YouTube — 60% of watch time comes from mobile devices, and living-room watch time is growing. 360° video and live streaming are coming to the living room.

VR & AR — there will be standalone VR headsets that need no cables, phone or PC, built in partnership with HTC and Lenovo and available later this year. Google Expeditions is adding AR for education.

Visual Positioning Service (VPS) — one of the core capabilities of Google Lens, integrating machine learning, maps and computer vision. GPS gets you to the door; VPS can get you to the exact location of an item in a store while shopping.

Sundar wrapped up the Google keynote with this message before talking about Google for Jobs: TensorFlow + Everyone.

Watch the video of the Google keynote to get the details of all the new announcements.


Android

The biggest news in Android was Kotlin and the new Architecture Components. There were also significant improvements in development tools with the Android Studio 3.0 canary release, along with talks on Android O. Watch What’s New in Android to hear about all the Android news announced at I/O.

New Programming Language Kotlin

The Android community is cheering now that Google has officially adopted Kotlin as a supported programming language for Android. Here are a few recent resources for getting started with Kotlin:

I’m looking forward to learning more about Kotlin!
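To give a flavor of why developers are excited, here is a small, hypothetical Kotlin snippet (not from any I/O talk) showing a few of the language features that make Android code more concise: data classes, null safety and string templates.

```kotlin
// A hypothetical example of Kotlin features Android developers like:
// data classes, null safety, and string templates.

data class User(val name: String, val email: String? = null)

fun greeting(user: User): String {
    // The ?: (elvis) operator supplies a default when email is null,
    // so there is no risk of a NullPointerException here.
    val contact = user.email ?: "no email on file"
    return "Hello ${user.name} ($contact)"
}

fun main() {
    println(greeting(User("Ada")))
    println(greeting(User("Grace", "grace@example.com")))
}
```

The `data class` alone replaces the constructor, getters, `equals`, `hashCode` and `toString` boilerplate that the equivalent Java class would need.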

Android Architecture Components

For the first time in Android’s history, Google has provided an official guide to what an Android app architecture should look like, along with framework components that support an MVVM architecture.
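The real ViewModel and LiveData classes live in the new Architecture Components library, but the core idea — observable UI state held outside the view — can be sketched in plain Kotlin. The class names below are hypothetical stand-ins for illustration only, not the actual Android APIs.

```kotlin
// A minimal sketch of the LiveData + ViewModel idea, with no Android
// dependencies. The real classes add lifecycle awareness on top of this.

class SimpleLiveData<T>(initial: T) {
    private val observers = mutableListOf<(T) -> Unit>()
    var value: T = initial
        set(newValue) {
            field = newValue
            observers.forEach { it(newValue) } // notify on every update
        }

    fun observe(observer: (T) -> Unit) {
        observers.add(observer)
        observer(value) // deliver the current state immediately
    }
}

// A ViewModel holds UI state so it can survive configuration changes;
// here it simply exposes observable state the "view" can subscribe to.
class CounterViewModel {
    val count = SimpleLiveData(0)
    fun increment() { count.value = count.value + 1 }
}

fun main() {
    val vm = CounterViewModel()
    vm.count.observe { println("count = $it") }
    vm.increment()
}
```

In the real library, the observer is additionally tied to a lifecycle owner so updates stop automatically when the activity or fragment is destroyed.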

AI & Machine Learning

Watch this short clip by our I/O guide Timothy Jordan to get a tour of the AI/ML booth at I/O 2017.

Effective TensorFlow for Non-Experts gave us an intro to TensorFlow, and then François Chollet introduced Keras, which helps make deep learning accessible to everyone.

Josh Gordon shared his favorite open-source machine learning models in Open Source TensorFlow Models. He also helped us understand how style transfer works.

TensorFlow Frontiers also gave an intro to TensorFlow, including stats on commits and how it compares with other machine learning frameworks such as Caffe, Torch and Theano. The talk then dove into TensorFlow 1.2, Cloud TPUs and the TensorFlow Research Cloud.

Past, Present and Future of AI / Machine Learning was presented by Alphabet’s top AI experts Daphne Koller, Diane Greene, Fei-Fei Li, Fernanda Viégas and Françoise Beaufays. I highly recommend watching this talk to hear about current use cases and future opportunities for AI.

AI/ML on Mobile

We already heard a lot about machine learning at I/O last year. What is different this year:

  • AI and machine learning are taking center stage.
  • There is much better integration of AI/ML on mobile for developers.

The talk Getting Started with Machine Perception Using the Mobile Vision API shows how to get started with the Mobile Vision API.

From the talk Android Meets TensorFlow you can learn how to optimize TensorFlow on Android, and how TensorFlow Lite works with the Android Neural Networks API.

In the past, building the sample TensorFlow Android app required the Bazel build tool. Now you can use TensorFlow on Android directly by including the JCenter dependency in your build.gradle. TensorFlow Lite will soon be part of the open-source project and available later this year.
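For example, a 2017-era module-level build.gradle could pull the prebuilt TensorFlow Android inference library from JCenter like this (the version number is illustrative — check for the latest release):

```groovy
// Module-level build.gradle (2017-era syntax).
// Pulls the prebuilt TensorFlow Android inference library from JCenter,
// replacing the old Bazel-based build of the sample app.
repositories {
    jcenter()
}

dependencies {
    // Version shown here is illustrative; pin the release you need.
    compile 'org.tensorflow:tensorflow-android:1.2.0'
}
```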

Google Assistant

There were quite a few talks on Google Assistant at I/O this year. I managed to attend one of the talks and will watch the others on this topic.

Bringing the Google Assistant to Any Device gave us an overview of the Google Assistant SDK. You can use the “Ok Google” hotword library to connect a device, or use the gRPC API. You can also join the Actions on Google Developer Challenge to win a trip to Google I/O 2018 and other prizes: g.co/actionschallenge.

This year’s swag included a Google Home, so I’m also looking forward to trying out Actions on Google.

I summarized my learnings in sketchnotes —

With 150 talks and 85 codelabs, there is still so much more to learn! I’ve included links to the talks and codelabs below for you to explore:

ML GDE (Google Developer Expert) | AI, Art & Design | margaretmz.art