Learnings from TensorFlow Dev Summit 2018

Margaret Maynard-Reid
4 min readApr 1, 2018

I attended the TensorFlow Dev Summit at the Computer History Museum in Mountain View on March 30, 2018. It was a day packed with incredible learnings and exciting new announcements about TensorFlow, an open source machine learning framework by Google. I was one of the few hundred people lucky enough to attend in person, so I’m sharing the new announcements, some notes I took, and my overall experience at the Summit.

At the Summit I ran into Kubra Zengin and Martin Omander, with whom I have been working closely as part of the GDG (Google Developer Groups) community.

During a session break I got a chance to hang out with Lawrence Moroney, the host of the famous YouTube show “Coffee with a Googler”, and Trish Whetzel, the organizer of GDG Silicon Valley.

At the end of the Summit, a few of us women attendees took a group photo together:

In the evening there was an after party with food, drinks and music. It was beautifully decorated with an orange color matching the TensorFlow logo. We had a chance to chat with other attendees, and members of the TensorFlow team.

TensorFlow Dev Summit After Party

Overview

There were quite a few announcements and new tools at the Summit and I’m listing some of them here:

  • TensorFlow 1.7 (released the day before the Summit)
  • TensorFlow.js: support for JavaScript.
  • TensorFlow for Swift: support for Swift.
  • TensorBoard debugger GUI plugin
  • An official TensorFlow blog on Medium: blog.tensorflow.org
  • An official TensorFlow YouTube channel: youtube.com/tensorflow
  • TensorFlow Hub: a library for sharing reusable pieces of ML code called modules.

I summarized the announcements and learnings from the Summit in sketchnotes:

Next I will go over a few talks with the notes I took at the Summit.

Eager Execution

Documentation | Talk on YouTube

Alex Passos gave a talk on eager execution and how it came about. With eager execution, you no longer need to create a session and call session.run() in order to run an operation. This allows developers to iterate more quickly and debug more easily.

To get started, first make sure you install TensorFlow, or upgrade to TensorFlow 1.7, in order to get the latest updates to eager execution:

# Use the following at command line to upgrade to TF 1.7
$ pip install --upgrade tensorflow

With the recent release of TensorFlow 1.7, eager execution has moved out of contrib, so you can simply call tf.enable_eager_execution() to enable it:

import tensorflow as tf
tf.enable_eager_execution()  # must be called at program startup, before running any ops
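
Once eager execution is enabled, operations run immediately and return concrete values. The snippet below (continuing from the code above) is just a minimal sketch of what that looks like, not code from the talk:

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.matmul(x, x)
print(y)  # the result prints right away, with no graph building or session.run()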

You can try the live demo in an interactive notebook at goo.gl/eRpP8j, where you will see more features of eager execution.

The Practitioner’s Guide with TensorFlow High Level APIs

Mustafa Ispir gave a talk on TensorFlow Estimators, a library that lets you focus on your experiment. It is used by thousands of engineers to power hundreds of products within Google. Mustafa gave an example of using the DNNClassifier estimator to make personalized hike recommendations with just a few lines of code. He also went over experimenting with the pre-made Wide & Deep and Gradient Boosted Trees estimators. At the end he mentioned that tf.contrib.learn is being deprecated.
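
To give a sense of how compact the Estimator API is, here is a minimal sketch of tf.estimator.DNNClassifier; the feature columns and input functions are hypothetical, not the ones from the talk:

import tensorflow as tf

# Hypothetical features for a hike recommender
difficulty = tf.feature_column.numeric_column("difficulty")
distance = tf.feature_column.numeric_column("distance_km")

classifier = tf.estimator.DNNClassifier(
    feature_columns=[difficulty, distance],
    hidden_units=[32, 16],
    n_classes=2)  # recommend / don't recommend

# classifier.train(input_fn=train_input_fn, steps=1000)
# classifier.evaluate(input_fn=eval_input_fn)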

Debugging TensorFlow with TensorBoard plugins

The TensorBoard debugger now has a GUI plugin, and Shanqing Cai showed us a live demo of how it works. Watch the demo video here: https://youtu.be/XcHWLsVmHvk.

You just need one line to launch the plugin:

$ tensorboard --logdir /tmp/logdir --debugger_port 7000
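
On the Python side, your program connects to the debugger by wrapping its session with the TensorFlow debug library. A minimal sketch, assuming the same port as above:

import tensorflow as tf
from tensorflow.python import debug as tf_debug

sess = tf.Session()
# Route this session's runs through the TensorBoard debugger plugin
sess = tf_debug.TensorBoardDebugWrapperSession(sess, "localhost:7000")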

TensorFlow Hub

Jeremiah Harmsen & Andrew Gasparovic gave us an introduction to TensorFlow Hub. They talked about how ML tools are lagging behind software engineering tools, which let us easily share code. TensorFlow Hub aims to solve that problem by allowing us to build and share reusable packages of ML code called modules.

A module is a reusable package. A whole model is too big to share because it is tied to its exact inputs and outputs; a module is a smaller piece, more like a library. A module is

  • a saved model
  • easy to instantiate
  • retrainable
  • usable with little training data

They gave an example of using a NASNet module to identify rabbit breeds, then another example of restaurant review classification using sentence embeddings. Currently there are image modules, text modules and Progressive GAN modules. Upcoming: audio and video modules.
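
Using a module looks roughly like the sketch below. The module URL is just an example of a text-embedding module on tfhub.dev, not necessarily the one used in the talk:

import tensorflow as tf
import tensorflow_hub as hub

# Load a sentence-embedding module by URL and apply it to raw text
embed = hub.Module("https://tfhub.dev/google/nnlm-en-dim128/1")
embeddings = embed(["The food was great", "The service was slow"])

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    print(sess.run(embeddings))  # one 128-dim vector per sentence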

Read Josh Gordon’s blog post, Introducing TensorFlow Hub: A Library for Reusable Machine Learning Modules in TensorFlow, to learn more about TensorFlow Hub. His post gives examples of using modules for image retraining and text classification, as well as the new Universal Sentence Encoder.

TensorFlow Lite

As an Android developer, I take a special interest in TensorFlow Lite, a lighter version of TensorFlow for on-device ML (currently for inference only). It was announced at Google I/O last year, and a developer preview was released in November 2017.

The presenter went over the background of TensorFlow Lite and how to use it, and gave a demo of TFLite on a Raspberry Pi. At the end of the talk I heard that the roadmap includes on-device training, which I’m looking forward to.

Lawrence Moroney posted a blog, Using TensorFlow Lite on Android, which explains what TensorFlow Lite is and how to use it.
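
For a flavor of the workflow, converting a TensorFlow graph into a .tflite file can be done from Python before shipping the model to a device. This is only a rough sketch based on the TF 1.7 contrib converter, with a toy graph standing in for a real model:

import tensorflow as tf

# A toy graph standing in for a trained model
img = tf.placeholder(tf.float32, [1, 224, 224, 3], name="input")
out = tf.identity(img, name="output")

with tf.Session() as sess:
    # Convert the graph to the TensorFlow Lite flatbuffer format
    tflite_model = tf.contrib.lite.toco_convert(sess.graph_def, [img], [out])

open("model.tflite", "wb").write(tflite_model)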

My notes in this post only cover a few of the talk sessions. As you can see in the summary of my sketchnotes earlier, there were many other excellent talks and new TensorFlow features; check out the References section below for all of the talks and topics at the TensorFlow Dev Summit.

References


Margaret Maynard-Reid

ML GDE (Google Developer Expert) | AI, Art & Design | 3D Fashion Designer