Five years ago, Google launched the open-source platform TensorFlow to accelerate machine learning research and empower developers to build AI applications. On the second day of its annual developer conference, Google I/O, the company announced its latest developments in ML.
The keynote session ‘What’s New in Machine Learning’ was hosted by Kemal El Moujahid, Product Director for TensorFlow and ML at Google; Sarah Sirajuddin, Engineering Director for TensorFlow and ML at Google; and Craig Wiley, Product Director of Google Cloud AI Platforms.
During the session, the speakers discussed the technologies Google has made available to developers for creating, understanding and deploying models; the latest developments in Google Cloud Platform that enable an end-to-end ML pipeline; and, finally, the new releases and tools.
Here are the latest releases:
Data set exploration tool
Know Your Data is Google’s new web-based data set exploration tool that helps developers explore rich data sets and spot potential biases or imbalances as part of their workflow.
Additions to TensorFlow Lite
First, developers can create a TensorFlow Lite model using tools such as TensorFlow Lite Model Maker, and can run these models in the browser with no further conversion. This means a single model serves both mobile and the web, with no separate JavaScript version required. Google is also planning to add on-demand training capabilities later this year.
Second, Google will add the TensorFlow Lite runtime to Google Play Services. “Now, you do not need to bundle TF (TensorFlow) Lite separately in your app. Which means no additional APK size increase,” Sarah explained.
With ML Kit, developers can deploy both turnkey models and customised ones quickly and easily. Developers who use Firebase can host their models for easy deployment and updates without updating their entire app.
Under ML Kit, Google has released four new models — pose detection, digital ink recognition, selfie segmentation, and entity extraction.
Google believes ML on microcontrollers and embedded systems has huge transformative potential. The tech giant has partnered with Harvard University to offer a course on ML with embedded devices, “To widen access to education around it and to make it more accessible to all developers,” Sarah said.
The course will cover:
- The fundamentals of ML and embedded devices
- Gathering data effectively for ML
- Training and deploying tiny ML models
- Optimising ML models for resource-constrained devices
- Conceiving and designing one’s tiny ML application
- Programming in TensorFlow Lite for Microcontrollers
Google is also working on a project with Harvard University and Navajo Technical University to recognise the Navajo language on microcontrollers using ML.
New Dev Board Micro
Google’s Coral Project, designed to help developers at both development and deployment time, will be releasing a new Dev Board Micro later this year.
Google has launched a new on-device ML site to guide developers in their choices, from turnkey to custom models and from cross-platform mobile to in-browser deployment.
The on-device ML site contains information, links and end-to-end learning paths to take developers from zero to implementing custom models and apps for various scenarios, including comment spam detection, product image search, and more. The newly launched learning pathways are designed to teach developers how to build apps around common ML scenarios end-to-end, with videos, code and codelabs.
People+ AI Guidebook 2.0
Google recommends a people-centric approach to AI. To help with this, it announced the People + AI Guidebook 2.0. The update is designed to put the guidance into practice with new resources, including code and design patterns, to help developers keep people in focus as they develop their AI solutions.
Finally, Google has launched the TensorFlow Forum to promote dialogue and to share ideas, inspiration and best practices.