Google has long been at the forefront of developing new technologies that make life easier. At its Google I/O 2019 event, the tech giant unveiled many interactive features, including accessibility projects designed to help people with different abilities, a new version of its operating system and other interesting features.
In this article, we list the top artificial intelligence-based upgrades to Android in 2019.
1| Android Q
In March, Google launched the first beta of its mobile operating system Android Q, and it has since released the official version. This release includes a wide range of features to protect users, such as file-based encryption, OS controls that require apps to request permission before accessing sensitive resources, restrictions on background access to the camera and microphone, and much more. Android Q also gives users more control over when apps can access their location. On the machine learning side, Android Q ships Neural Networks API 1.2, which adds operations such as ARGMAX, ARGMIN and quantised LSTM, alongside a range of performance optimisations.
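To make the new operations concrete: ARGMAX and ARGMIN return the index of the largest or smallest value along an axis of a tensor, which is how a classifier's output logits are turned into a predicted class. The sketch below uses NumPy to illustrate what these ops compute; it is a conceptual stand-in, not the actual NNAPI C interface.

```python
import numpy as np

def argmax_op(tensor, axis):
    """Illustrative equivalent of the NNAPI ARGMAX operation:
    index of the largest element along `axis`."""
    return np.argmax(tensor, axis=axis)

def argmin_op(tensor, axis):
    """Illustrative equivalent of the NNAPI ARGMIN operation."""
    return np.argmin(tensor, axis=axis)

# A classifier's output logits for one input: ARGMAX yields the
# predicted class index, ARGMIN the least likely class.
logits = np.array([0.1, 2.5, 0.7, 1.3])
predicted = argmax_op(logits, axis=0)   # index 1
least_likely = argmin_op(logits, axis=0)  # index 0
```

On-device, running such reductions through NNAPI lets the OS dispatch them to available accelerators rather than the CPU.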
2| Live Caption
Live Caption is one of the most popular AI-based updates in Android. It is an automatic captioning system, now officially available, that makes digital media more accessible. It works through a combination of three on-device deep learning models: a recurrent neural network (RNN) sequence transduction model for speech recognition (RNN-T), a text-based recurrent neural network model for unspoken punctuation, and a convolutional neural network (CNN) model for sound event classification. It then integrates the signals from the three models to create a single caption track.
With just a single tap, Live Caption automatically captions videos and spoken audio on the device (except phone and video calls). All this happens in real time and entirely on-device, without using network resources such as WiFi or cellular data. Currently, Live Caption is available in English on Pixel devices and will soon be available on other Android phones.
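The three-model pipeline described above can be sketched as a simple merge step. The functions below are rule-based stand-ins invented for illustration (the real models are neural networks): a recognizer emits word tokens, a punctuation stage restores casing and sentence-final punctuation, and a sound-event stage contributes bracketed tags such as [MUSIC], all fused into one caption line.

```python
def recognize_speech(audio_words):
    # Stand-in for the RNN-T recognizer: emits lowercase word tokens.
    return [w.lower() for w in audio_words]

def add_punctuation(tokens):
    # Stand-in for the text-based RNN punctuation model: restores
    # capitalization and sentence-final punctuation.
    if not tokens:
        return ""
    text = " ".join(tokens)
    return text[0].upper() + text[1:] + "."

def classify_sound_events(events):
    # Stand-in for the CNN sound-event classifier: maps detected
    # non-speech sounds to bracketed caption tags.
    labels = {"music": "[MUSIC]", "applause": "[APPLAUSE]",
              "laughter": "[LAUGHTER]"}
    return [labels[e] for e in events if e in labels]

def caption_track(audio_words, events):
    # Fuse the three signals into a single caption line,
    # as Live Caption fuses its three model outputs.
    parts = classify_sound_events(events)
    parts.append(add_punctuation(recognize_speech(audio_words)))
    return " ".join(parts)
```

For example, `caption_track(["HELLO", "WORLD"], ["music"])` yields `"[MUSIC] Hello world."`.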
3| Project Euphonia
Project Euphonia is an Automatic Speech Recognition (ASR) effort that performs speech-to-text transcription. For this project, the developers used AI to improve computers’ ability to understand diverse speech patterns, such as impaired speech. Project Euphonia includes personalised speech recognition for non-standard speech, which improves ASR for people with amyotrophic lateral sclerosis (ALS), a disease that can adversely affect a person’s speech.
While developing models trained on atypical speech, the developers explored two different neural architectures. The first is the RNN-Transducer (RNN-T), a neural network architecture consisting of encoder and decoder networks. The other is Listen, Attend and Spell (LAS), an attention-based sequence-to-sequence model that maps sequences of acoustic features to sequences of characters.
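The core of an attention-based model like LAS is the step where the decoder scores every encoded acoustic frame against its current state and forms a weighted context vector. The NumPy sketch below shows that single step with a dot-product score and a softmax; it is a minimal illustration of the mechanism, not Google's implementation.

```python
import numpy as np

def attention_step(encoder_states, decoder_state):
    """One attention step of a LAS-style decoder.

    encoder_states: (T, D) array of encoded acoustic frames.
    decoder_state:  (D,) current decoder state.
    Returns the softmax attention weights over time and the
    weighted context vector fed to the speller.
    """
    scores = encoder_states @ decoder_state      # (T,) frame scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over time
    context = weights @ encoder_states           # (D,) context vector
    return weights, context
```

RNN-T, by contrast, processes audio frame by frame without a global attention pass, which is part of why it suits streaming, on-device recognition.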
4| Project Diva
Project DIVA, or DIVersely Assisted, is an initiative to make the Google Assistant more accessible through non-verbal commands. Its basic goal is to create simple ways for people with Down syndrome or other conditions that affect speech to issue commands to the smart assistant without using their voice. For this purpose, the developers created a box that connects assistive buttons and converts the signal coming from a button into a command sent to the Google Assistant.
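The button-to-command translation at the heart of that box can be sketched as a lookup table. The button IDs and command strings below are hypothetical examples, and the actual delivery to the Assistant is stubbed out.

```python
# Hypothetical mapping from assistive-button IDs to commands,
# in the spirit of the Project DIVA trigger box.
BUTTON_COMMANDS = {
    1: "play music",
    2: "stop music",
    3: "turn on the lights",
}

def on_button_press(button_id):
    """Translate a physical button signal into the command string
    that would be forwarded to the Google Assistant (forwarding
    itself is stubbed out in this sketch)."""
    command = BUTTON_COMMANDS.get(button_id)
    if command is None:
        return None  # unmapped button: ignore the signal
    return f"OK Google, {command}"
```

The point of the design is that any switch a person can physically operate becomes a voice-free entry point to the Assistant.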
5| Smart Reply
Android 10 introduces the Smart Reply feature, which enables instant replies to messages right from the notification. The feature uses artificial intelligence to predict and suggest what the user is likely to reply to a specific message, along with other recommended actions.
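To show the shape of the feature, the toy sketch below suggests up to three canned replies based on keywords in the incoming message. The real Smart Reply runs an on-device ML model; the keyword table here is purely an invented illustration of the input/output contract.

```python
# Toy stand-in for Smart Reply: keyword -> candidate replies.
# The real feature predicts these with an on-device ML model.
SUGGESTIONS = {
    "dinner": ["Sounds good!", "What time?", "Can't tonight"],
    "meeting": ["On my way", "Running late", "See you there"],
}

def smart_replies(message):
    """Return up to three suggested replies for a message,
    or an empty list when nothing matches."""
    message = message.lower()
    for keyword, replies in SUGGESTIONS.items():
        if keyword in message:
            return replies[:3]
    return []
```

For example, `smart_replies("Dinner at 7?")` returns the three dinner suggestions, which would be rendered as tappable chips in the notification.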