I passed the TensorFlow Developer Certificate exam! Here is how I studied and prepared for it. Please note I was already familiar with TensorFlow and deep learning prior to taking the exam, so this post is most applicable to those with a similar level of ML knowledge and skills.
First of all, taking the DeepLearning.ai TensorFlow Developer Professional Certificate Specialization courses on Coursera was the most helpful preparation. Thank you Laurence Moroney and Dr. Andrew Ng!
There are four courses in this TensorFlow specialization:
These courses focus on the “how” (to use TensorFlow 2.x) while explaining some of the basics of the “why”. If you are not familiar with deep learning, I recommend first taking Dr. Andrew Ng’s Deep Learning Specialization, which goes into more depth on the math and the details of how things work under the hood. If you are not familiar with TensorFlow, I recommend studying the tensorflow.org tutorials first. …
As the year 2020 draws to a close, I’m looking back at my accomplishments in the past year. Highlights include open-source projects, TensorFlow Lite, computer vision, deep learning, and art.
I was featured by @TensorFlow in a Google Developers blog — Celebrating International Women’s Day with 20 tech trailblazers.
I was very honored to be featured on the keras.io main page, along with two other ML GDEs. I was nominated by Khanh LeViet from the TensorFlow Lite team for the Google Open Source Peer Bonus Award for my contributions to TensorFlow. …
For the past year, I studied art fundamentals and learned how to draw and paint with traditional media and digital tools. As an ML engineer, I have been exploring how to use AI for art and design, which has become my focal interest. This is the story of my journey of becoming an artist.
I created an online gallery: margaretmz.art with AI generated art and artwork I created in traditional media or with digital tools. You can browse and filter my artwork by categories and tags, and I will be adding new ones.
With the assistance of AI, everyone can create art now: with the press of a button on your phone camera, or by simply drawing with your fingers. AI is also integrated into many existing software tools, for example Adobe Photoshop and Snapchat. …
Written by Margaret Maynard-Reid, ML GDE
This is part 1 of an end-to-end TensorFlow Lite tutorial, written as a team effort, on how to combine multiple ML models to create artistic effects by segmenting an image and then stylizing the image background with neural style transfer.
The tutorial is divided into four parts; feel free to follow along, or skip to the part that is most interesting or relevant to you:
The Metropolitan Museum of Art has over 400,000 artworks from around the world, nearly half of which are open access for unrestricted commercial and noncommercial use. In this blog post, I analyze and visualize the artist and artwork metadata of its collection.
As of writing this blog post, there are 448,203 artworks by 56,390 artists.
There are 448,203 rows and 43 columns in the metadata. A few columns describe the artist: role, prefix, name, bio, nationality, begin and end dates, etc. The rest of the columns relate to the artwork itself: for example, title, culture, period, dynasty, medium, dimensions, geography info, etc. …
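The kind of analysis described above can be sketched with pandas. The snippet below uses a tiny hand-made sample rather than the real metadata file; the column names mirror the Met’s open-access metadata, but the sample values are made up purely for illustration:

```python
import pandas as pd

# Tiny made-up sample mimicking a few columns of the Met's
# open-access metadata (the real file has 43 columns and ~448k rows).
df = pd.DataFrame({
    "Object ID": [1, 2, 3, 4],
    "Artist Display Name": ["Artist A", "Artist B", "Artist A", ""],
    "Artist Nationality": ["American", "French", "American", ""],
    "Is Public Domain": [True, False, True, True],
})

# Share of artworks that are open access (public domain)
open_access_ratio = df["Is Public Domain"].mean()

# Number of distinct named artists (blank names excluded)
num_artists = df.loc[df["Artist Display Name"] != "",
                     "Artist Display Name"].nunique()

# Artwork counts per nationality, ready for plotting
by_nationality = df["Artist Nationality"].value_counts()
```

With the full metadata loaded via `pd.read_csv`, the same `value_counts` and `nunique` calls drive the per-artist and per-nationality visualizations.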
This is an end-to-end tutorial on how to convert a TensorFlow model to TensorFlow Lite (TFLite) and deploy it to an Android app to cartoonize an image captured by the camera.
We created this end-to-end tutorial to help developers with these objectives:
This is part 3 of an end-to-end tutorial on how to convert a TF 1.x model to TensorFlow Lite (TFLite) and then deploy it to an Android app for transforming a selfie image into a plausible anime-style image. (Part 1 | Part 2 | Part 3) The tutorial is the first of a series of E2E TFLite tutorials in awesome-tflite.
In Part 2 we got a TFLite model, and now we are ready to deploy the selfie2anime.tflite model to Android! The Android code is on GitHub here.
Here are the key features of the Android…
This is part 2 of an end-to-end tutorial on how to convert a TF 1.x model to TensorFlow Lite (TFLite), and then deploy it to an Android app for transforming a selfie image into a plausible anime-style image. (Part 1 | Part 2 | Part 3) The tutorial is the first of a series of E2E TFLite tutorials in awesome-tflite.
Here is a step-by-step summary:
Create a SavedModel out of the pre-trained U-GAT-IT model checkpoints.
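This first step — restoring TF 1.x checkpoints and exporting a SavedModel — can be sketched with the `tf.compat.v1` API. A toy graph stands in for the real U-GAT-IT network below, and the tensor and directory names are placeholders, not the tutorial’s actual ones:

```python
import os
import tempfile

import tensorflow.compat.v1 as tf1

tf1.disable_eager_execution()

# Toy stand-in graph; the tutorial does this with the U-GAT-IT graph.
x = tf1.placeholder(tf1.float32, [None, 4], name="input_image")
w = tf1.get_variable("w", initializer=tf1.ones([4, 2]))
y = tf1.matmul(x, w, name="stylized_image")

work_dir = tempfile.mkdtemp()
ckpt_dir = os.path.join(work_dir, "ckpt")
os.makedirs(ckpt_dir)
export_dir = os.path.join(work_dir, "saved_model")

# Save a checkpoint to stand in for the pre-trained one.
saver = tf1.train.Saver()
with tf1.Session() as sess:
    sess.run(tf1.global_variables_initializer())
    ckpt_path = saver.save(sess, os.path.join(ckpt_dir, "model.ckpt"))

# Restore the checkpoint in a fresh session and export a SavedModel.
with tf1.Session() as sess:
    saver.restore(sess, ckpt_path)
    tf1.saved_model.simple_save(sess, export_dir,
                                inputs={"input": x}, outputs={"output": y})
```

The exported directory can then be fed to the TFLite converter in the next step.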
This is part 1 of an end-to-end tutorial on how to convert a TF 1.x model to TensorFlow Lite (TFLite) and then deploy it to an Android app for transforming a selfie image into a plausible anime-style image.
This tutorial is divided into three parts; feel free to follow along with the end-to-end tutorial, or skip to the part that is most interesting or relevant to you:
Save the model as a SavedModel and then convert it to a TFLite model. The model saving step is performed in a TensorFlow 1.14 runtime, in which the model code was written, although the same method can be applied to most models written in TensorFlow 1.x. The model conversion step is performed in a TensorFlow 2.x runtime in order to leverage the latest features of the TFLiteConverter. …
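The conversion step can be sketched in a TF 2.x runtime as follows. A tiny `tf.Module` stands in for the real model so the snippet is self-contained; with the actual tutorial you would point `from_saved_model` at the SavedModel exported from the TF 1.x runtime:

```python
import os
import tempfile

import tensorflow as tf

# Toy stand-in for the real model, so the snippet is self-contained.
class Toy(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
    def __call__(self, x):
        return x * 2.0

export_dir = os.path.join(tempfile.mkdtemp(), "saved_model")
tf.saved_model.save(Toy(), export_dir)

# Convert the SavedModel with the TF 2.x TFLiteConverter.
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
tflite_model = converter.convert()

# Write the flatbuffer out for deployment on device.
with open(os.path.join(tempfile.gettempdir(), "toy_model.tflite"), "wb") as f:
    f.write(tflite_model)
```

Running the conversion under TF 2.x rather than 1.x is what gives access to the newer converter features mentioned above.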
In Part 1 of the Icon Classifier tutorial, I shared how to create an icon classifier model with the TensorFlow Lite Model Maker. This is part 2 of the tutorial, in which I go over how to make an Android app that implements the TensorFlow Lite model with just a few lines of code. Please follow along with the Android code on GitHub here.
The model implementation on Android takes just a few lines of code, thanks to the TensorFlow Lite metadata, the Android code generator, and the new ML Model Binding feature in the Android Studio preview. …
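Before wiring the model into Android Studio, a `.tflite` file can be sanity-checked from Python with `tf.lite.Interpreter`. In the sketch below a toy model is converted inline so the snippet is self-contained; with the real icon classifier you would pass `model_path` pointing at the exported file instead of `model_content`:

```python
import os
import tempfile

import numpy as np
import tensorflow as tf

# Toy stand-in model, converted inline so the snippet is self-contained.
class Toy(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
    def __call__(self, x):
        return x + 1.0

export_dir = os.path.join(tempfile.mkdtemp(), "saved_model")
tf.saved_model.save(Toy(), export_dir)
tflite_bytes = tf.lite.TFLiteConverter.from_saved_model(export_dir).convert()

# Run the TFLite model with the Python interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.ones((1, 4), np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])
```

If the Python output looks right, any misbehavior on device is more likely in the app-side pre/post-processing than in the model itself.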