Recently, developers at Google detailed how they have been using artificial intelligence and machine learning to improve the search experience. The announcements were made during the Search On 2020 event, where the tech giant unveiled several AI enhancements that will shape search results in the coming years.
In 2018, the tech giant introduced a neural network-based technique for natural language processing (NLP) pre-training called Bidirectional Encoder Representations from Transformers, or simply BERT. Last year, the company described how BERT language understanding systems are helping to deliver more relevant results in Google Search.
Since then, there have been enhancements in many areas, including the engine's language understanding capabilities, search queries and more. BERT is now used in almost every English query, which helps deliver higher-quality results.
Over the last two decades, the tech giant has achieved tremendous progress in its search engine. Prabhakar Raghavan, Google's head of Search & Assistant, stated in a blog post that four key elements form the foundation for all the work to improve Search and answer trillions of queries every year. They are mentioned below –
- Understanding all the world’s information
- The highest-quality information
- World-class privacy and security
- Open access for everyone
Some of the key enhancements with AI and ML that were announced during the event are mentioned below:
Hum to Search
The news of this new feature is making the rounds across social media platforms. The “hum to search” feature allows users to hum, whistle or sing a melody to Google to identify the song stuck in their head. After the user hums a tune into the Google Search widget, machine learning algorithms help identify potential song matches.
The feature builds on Google's music recognition technology, which uses deep neural networks to bring low-power music recognition to mobile devices. When a user hums a melody into Search, the machine learning models transform the audio into a number-based sequence representing the song's melody.
The machine learning models are trained to identify songs based on a variety of sources, including humans singing, whistling or humming, as well as studio recordings. According to the developers, the algorithms also extract other details, including accompanying instruments and the voice’s timbre and tone. Currently, the feature is available in English on iOS, and in more than 20 languages on Android.
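To make the idea of a number-based melody sequence concrete, here is a simplified, hypothetical sketch in Python. It is not Google's actual model: the reference songs, pitch values and distance measure are invented for illustration. The key point it demonstrates is that representing a melody as pitch *intervals* (differences between successive notes) makes matching invariant to the key the user happens to hum in.

```python
# Hypothetical sketch of melody matching (not Google's actual system).
# Melodies are reduced to key-invariant sequences of pitch intervals,
# then compared against reference songs by average interval difference.

def to_intervals(pitches):
    """Convert absolute semitone pitches to successive intervals,
    so the representation ignores which key the user hums in."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def distance(a, b):
    """Mean absolute difference between two interval sequences."""
    n = min(len(a), len(b))
    return sum(abs(x - y) for x, y in zip(a[:n], b[:n])) / n

# Toy reference catalogue: song name -> absolute MIDI-style pitches.
REFERENCE = {
    "Twinkle Twinkle": [60, 60, 67, 67, 69, 69, 67],
    "Ode to Joy":      [64, 64, 65, 67, 67, 65, 64],
}

def best_match(hummed_pitches):
    hummed = to_intervals(hummed_pitches)
    return min(REFERENCE, key=lambda name: distance(hummed, to_intervals(REFERENCE[name])))

# A user humming Twinkle Twinkle five semitones higher still matches,
# because the intervals between notes are unchanged:
print(best_match([65, 65, 72, 72, 74, 74, 72]))  # → Twinkle Twinkle
```

In a real system the pitch sequence would come from a neural network analysing raw audio, and matching would search millions of songs rather than a toy dictionary.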
Access to High-Quality Information During COVID-19
The developers at the company have worked to ensure that users get all the necessary information about the pandemic. To keep users updated with the latest information, Google has announced new improvements that arm users with the information they require to navigate to places and get things done.
Features like live busyness updates in Google Maps show users how busy a place is right now, helping them maintain social distance. Another feature, Live View, helps users get essential information about businesses. The tech giant has also added COVID-19 safety information to business profiles across Google Search and Maps.
This helps users know whether they need to wear a mask or make a reservation. The features use Duplex conversational technology and location history data, among others. They also use the differential privacy technique to ensure the underlying data remains anonymous.
Understanding Moments in Videos
The developers at Google have introduced a new AI-driven approach that understands the deep semantics of a video and automatically identifies key moments. The feature tags moments in a video, which users can then navigate like chapters in a book.
Raghavan stated in the blog post, “We’ve started testing this technology this year, and by the end of 2020, we expect that 10% of searches on Google will use this new technology.”
Deepening Understanding Through Data
Google has been working on the Data Commons Project for two years now. To understand this vast body of data and provide the best search results, the developers at the tech giant leverage natural language processing techniques.
NLP helps map a search query to one specific set among the billions of data points in Data Commons, so that the right statistics can be provided in a visual and easy-to-understand format.
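The mapping step can be sketched as resolving a natural-language query into a (place, statistical variable) pair that a knowledge graph like Data Commons can answer. The following is a deliberately naive, hypothetical Python sketch using keyword matching; the real system uses far richer NLP, and the identifiers shown are illustrative examples modelled on Data Commons naming, not an exhaustive or guaranteed-current list.

```python
# Hypothetical sketch: resolve a query to a (place, statistical
# variable) pair. Real systems use trained NLP models, not keywords.

STAT_VARS = {
    "population":   "Count_Person",
    "unemployment": "UnemploymentRate_Person",
    "income":       "Median_Income_Person",
}
PLACES = {
    "california": "geoId/06",
    "texas":      "geoId/48",
}

def parse_query(query):
    """Map query words to the first matching place and variable."""
    words = query.lower().split()
    stat = next((v for k, v in STAT_VARS.items() if k in words), None)
    place = next((v for k, v in PLACES.items() if k in words), None)
    return place, stat

print(parse_query("unemployment in california"))
# → ('geoId/06', 'UnemploymentRate_Person')
```

Once the query is resolved to graph identifiers, fetching and charting the matching statistics is a straightforward lookup.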
Helping Journalism through Advanced Search
As part of Journalist Studio, the tech giant has introduced a new suite of tools to help reporters and journalists do their work more efficiently and securely. This year, the company launched Pinpoint, a new tool that brings the power of Google Search to journalists.
Pinpoint helps reporters quickly sift through thousands of documents by automatically identifying and organising the most frequently mentioned people, organisations and locations.
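The core idea of surfacing the most frequently mentioned entities can be sketched in a few lines. This is a hypothetical toy example, not Pinpoint's implementation: a real system would use a trained named-entity-recognition model, whereas here a crude capitalised-phrase heuristic stands in for entity extraction.

```python
# Toy sketch of entity aggregation across documents (hypothetical;
# not Pinpoint's actual pipeline). A capitalised-phrase regex stands
# in for a real named-entity-recognition model.

import re
from collections import Counter

STOPWORDS = {"The", "A", "An", "In"}

def extract_entities(text):
    """Crude stand-in for NER: runs of capitalised words."""
    return [m.strip() for m in re.findall(r"(?:[A-Z][a-z]+ ?)+", text)]

def rank_entities(documents):
    """Count entity mentions across all documents, most frequent first."""
    counts = Counter()
    for doc in documents:
        counts.update(e for e in extract_entities(doc) if e not in STOPWORDS)
    return counts.most_common()

docs = [
    "The mayor met Jane Smith in Springfield.",
    "Jane Smith later addressed the council in Springfield.",
]
print(rank_entities(docs))  # → [('Jane Smith', 2), ('Springfield', 2)]
```

Ranking entities by mention frequency is what lets a reporter see at a glance who and where a document collection is about.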
Explore information in 3D Visuals
As part of the Search On event, the developers also announced new ways to use Google Lens and augmented reality (AR) while learning and shopping. Google Lens can now recognise 15 billion things, which helps in identifying plants, animals, landmarks and more. Lens can also translate text in more than 100 languages, such as Spanish and Arabic.
Aparna Chennapragada, VP at Google, stated in a blog post, “Another area where the camera can be helpful is shopping, especially when what you’re looking for is hard to describe in words.”
Lens uses Style Engine technology, which combines a large database of products with millions of style images. The algorithm then matches patterns to understand concepts like ruffle sleeves or vintage denim.
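Pattern matching of this kind is commonly framed as nearest-neighbour search in an embedding space: images are mapped to vectors, and visually similar styles end up close together. The sketch below is a hypothetical illustration, not Style Engine's implementation; the hand-made three-dimensional vectors stand in for embeddings that a deep network would produce from images.

```python
# Hypothetical sketch of style matching as nearest-neighbour search
# over embeddings. Hand-made vectors stand in for the outputs of a
# deep image-embedding network.

import math

CATALOGUE = {
    "ruffle-sleeve blouse": [0.9, 0.1, 0.0],
    "vintage denim jacket": [0.1, 0.8, 0.3],
    "plain white tee":      [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def closest_style(query_embedding):
    """Return the catalogue item whose embedding is most similar."""
    return max(CATALOGUE, key=lambda k: cosine(query_embedding, CATALOGUE[k]))

# A photo embedding near the "ruffle" direction matches that product:
print(closest_style([0.85, 0.2, 0.05]))  # → ruffle-sleeve blouse
```

At production scale, the same idea runs over millions of product embeddings with approximate nearest-neighbour indexes rather than an exhaustive loop.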