Since its launch in 2008, Android has grown into the world's biggest mobile platform by popularity and number of users. Over the years, Android developers have built on advances in machine learning to deliver features like on-device speech recognition, real-time video interactivity, and live enhancements when taking a photo or selfie.
In addition, image recognition with machine learning enables users to point their smartphone camera at text and have it live-translated into 88 languages with the help of Google Translate. Android users can even point their camera at a beautiful flower, use Google Lens to identify what type of flower it is, and then set a reminder to order a bouquet for someone. Google Lens uses computer vision models to expand and speed up web search and the mobile experience.
Google has also worked extensively to make Android devices suitable for small ML models. Mobile ML projects carry the unique challenge of converting and deploying models to the right devices at the right time. The focus is usually on using pre-trained models or retraining existing ones, but the company has ensured there is scope for customised models as well.
TensorFlow Lite is a popular open-source deep learning framework for on-device mobile inference. Following Apple's announcement of Core ML, Google released TensorFlow Lite as the next evolution of TensorFlow Mobile, promising better performance by leveraging hardware acceleration on devices that support it.
This framework from Google can run machine learning models on Android and iOS devices. Today, TensorFlow Lite is used on billions of devices across the world, and its tools power all kinds of neural-network applications, from image detection to speech recognition.
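The basic TensorFlow Lite workflow is to convert a trained model into the TFLite flat-buffer format and then run it through the TFLite interpreter. A minimal sketch using TensorFlow's Python API (the tiny Keras model here is purely illustrative; on a phone, the same interpreter runs via the Android or iOS bindings):

```python
import numpy as np
import tensorflow as tf

# A tiny stand-in Keras model, used only to illustrate the workflow.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Convert the model to the TensorFlow Lite flat-buffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Run inference with the TFLite interpreter, as an app would on-device.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

sample = np.random.rand(1, 8).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], sample)
interpreter.invoke()
probs = interpreter.get_tensor(output_details[0]["index"])
print(probs.shape)
```

On Android, the same `.tflite` file is bundled as an asset and invoked through the platform interpreter bindings instead of Python.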
TensorFlow Lite enables the bulk of ML processing to take place on the device, using lighter models that do not have to rely on a server or data centre. Such models run faster, offer potential privacy enhancements, consume less power, and in some cases do not need an internet connection at all. On newer Android devices, TensorFlow Lite leverages specialised mobile accelerators through the Neural Networks API, providing better performance while minimising power usage.
ML Kit is Google’s solution for integrating customised machine learning into mobile applications, rolled out in 2018 at its I/O conference. ML Kit brings Google’s on-device machine learning innovation to mobile app developers, helping them build customised experiences into their applications with tools like language translation, text recognition and object detection. ML Kit helps identify, analyse and understand visual and text data in real time, and in a privacy-focused manner, since the data remains on the device. According to Google’s Director of Product Management, “It makes machine learning much more approachable.”
Developers can use ML Kit’s Vision APIs for video and image analysis, labelling images and detecting barcodes, text, faces and objects. These power advanced features such as barcode scanning, face detection, image labelling, and object detection and tracking. There are also natural language processing APIs to identify and translate between 58 languages and provide reply suggestions. Today, more than 25,000 applications on Android and iOS make use of ML Kit’s features.
The original version of ML Kit was tightly integrated with Firebase. For more flexibility when implementing it in apps, Google recently announced that it was making all the on-device APIs available in a new standalone ML Kit SDK that no longer requires a Firebase project. This gives developers access to the unique benefits of on-device ML compared with what cloud ML offers.
According to Google, if ML Kit doesn’t completely address developers’ needs, they can look for alternative models and learn how to train and use custom ML models in their Android apps. “If the turnkey ML solutions don’t suit your needs, TensorFlow Hub should be your first port of call. It is a repository of ML models from Google and the wider research community. The models on the site are ready for use in the cloud, in a web browser or in an app on-device,” according to Google.
What Else Is New?
In addition to key vision models such as MobileNet and EfficientNet, the repository also boasts models powered by the latest research, such as wine classification covering 400,000 popular wines, US supermarket product classification for 100,000 products, landmark recognition on a per-continent basis, the CropNet model by Brain Accra for recognising cassava leaf disease, and plant disease recognition from AgriPredict that detects disease in maize and tomato plants.
Besides this large repository of base models, developers can also train their own models, and developer-friendly tools are available for many common use cases. In addition to Firebase’s AutoML Vision Edge, the TensorFlow team launched TensorFlow Lite Model Maker earlier this year to give developers more choice over the base model and to support more use cases. TensorFlow Lite Model Maker currently supports two common ML tasks: text and image classification.
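Under the hood, Model Maker automates a transfer-learning flow: take a base vision model, freeze it, retrain a small classification head on your labelled data, and export to TFLite. A rough plain-Keras equivalent of that flow, sketched here with an untrained base network and random stand-in data so it is self-contained (Model Maker itself starts from pretrained weights and real labelled images):

```python
import numpy as np
import tensorflow as tf

# Base network; weights=None keeps the sketch offline. Model Maker and
# real transfer learning would start from pretrained weights instead.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights=None, pooling="avg")
base.trainable = False  # freeze the base; train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(3, activation="softmax"),  # 3 example classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Tiny random stand-in dataset; real use would load labelled images.
x = np.random.rand(8, 96, 96, 3).astype(np.float32)
y = np.random.randint(0, 3, size=(8,))
model.fit(x, y, epochs=1, verbose=0)

# Export the retrained model for on-device use.
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
print(len(tflite_model) > 0)
```

Model Maker wraps these steps behind a few high-level calls, which is why it suits developers who don't want to hand-build the training loop.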
The TensorFlow Lite Model Maker can run on your own developer machine or on Google Colab online machine learning notebooks. Going forward, the Android team plans to improve the existing offerings and to add new use cases.
Once developers have selected or trained a model, new easy-to-use tools help them integrate it into their Android app without having to convert everything into ByteArrays: ML Model Binding, available with Android Studio 4.1. It lets developers import any TFLite model, read the model’s input/output signature, and use it with just a few lines of code that call the open-source TensorFlow Lite Android Support Library.