How Google Is Using AI & ML To Improve Search Experience

Recently, Google detailed the ways it has been using artificial intelligence and machine learning to improve its search experience. The announcements were made during the Search On 2020 event, where the tech giant unveiled several AI enhancements that will shape search results in the coming years.

In 2018, the tech giant introduced a neural network-based technique for natural language processing (NLP) pre-training called Bidirectional Encoder Representations from Transformers, or simply BERT. Last year, the company described how BERT language understanding systems help deliver more relevant results in Google Search.

Since then, the company has made enhancements in several areas, including the engine's language understanding capabilities and its handling of search queries. BERT is now used in almost every English query, helping surface higher-quality results.

Over the last two decades, the tech giant has made tremendous progress with its search engine. Prabhakar Raghavan, Google's head of Search & Assistant, stated in a blog post that four key elements form the foundation of all the work to improve Search and answer trillions of queries every year:

  • Understanding all the world’s information
  • The highest-quality information
  • World-class privacy and security
  • Open access for everyone

Some of the key AI and ML enhancements announced during the event are described below:

Hum to Search

The news of this feature has been making the rounds on social media. "Hum to Search" allows users to hum, whistle or sing a melody to Google to put a name to their earworm. After the user hums a tune into the Google Search widget, a machine learning algorithm identifies potential song matches.

The feature builds on Google's music recognition technology, which uses deep neural networks to bring low-power music recognition to mobile devices. When a user hums a melody into Search, machine learning models transform the audio into a number-based sequence representing the song's melody.

The machine learning models are trained to identify songs from a variety of sources, including people singing, whistling or humming, as well as studio recordings. According to the developers, the algorithms also strip away other details, such as accompanying instruments and the voice's timbre and tone, leaving only the melody. Currently, the feature is available in English on iOS, and in more than 20 languages on Android.
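Google's models and training data are not public, but the core idea of matching a number-based melody sequence can be illustrated with a toy sketch. Everything below — the pitch values, the two-song "database" and the plain cosine-similarity ranking — is a hypothetical stand-in, not Google's actual method:

```python
import numpy as np

def melody_fingerprint(pitches_hz):
    """Convert a pitch contour (Hz) into a key-invariant number sequence."""
    semitones = 12 * np.log2(np.asarray(pitches_hz) / 440.0)
    return semitones - semitones.mean()  # subtracting the mean makes it transposition-invariant

def similarity(a, b):
    """Cosine similarity between two equal-length fingerprints."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical reference melodies, one pitch value per beat
database = {
    "song_a": melody_fingerprint([440, 494, 523, 494, 440, 440, 494, 523]),
    "song_b": melody_fingerprint([330, 330, 349, 392, 392, 349, 330, 294]),
}

# A hummed query: off-key by about two semitones, but same contour as song_b
query = melody_fingerprint([370, 370, 392, 440, 440, 392, 370, 330])

matches = sorted(database.items(), key=lambda kv: similarity(query, kv[1]), reverse=True)
print("best match:", matches[0][0])  # -> song_b
```

Because the fingerprint is relative rather than absolute, a user who hums in the wrong key still lands on the right song, which mirrors the "strip away timbre and instruments" behaviour described above.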

Access to High-Quality Information During COVID-19

The company has worked to ensure that users can find all the necessary information about the pandemic. To keep users up to date, Google announced new improvements that arm them with the information they need to navigate to places and get things done.

Features such as live busyness updates in Google Maps show users how busy a place is right now, helping them maintain social distancing. Another feature, Live View, helps users get essential information about businesses. The tech giant has also added COVID-19 safety information to business profiles across Google Search and Maps.

This helps users know whether they need to wear a mask or make a reservation. The features draw on Duplex conversational technology and Location History data, among other sources. They also apply differential privacy to ensure that business data remains anonymous.
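Google does not document its exact pipeline here, but the textbook way to anonymise aggregate counts with differential privacy is the Laplace mechanism: add calibrated noise before publishing. A minimal sketch, with an illustrative visit count and privacy budget:

```python
import numpy as np

def laplace_count(true_count, epsilon, sensitivity=1.0):
    """Release a count with epsilon-differential privacy via the Laplace mechanism.

    sensitivity=1 because adding or removing one visitor changes the count by 1.
    Smaller epsilon -> more noise -> stronger privacy.
    """
    rng = np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return max(0, round(true_count + noise))

hourly_visits = 83  # hypothetical raw busyness count for one hour at one place
print(laplace_count(hourly_visits, epsilon=0.5))  # noisy count that is safe to publish
```

The published busyness figure is close enough to be useful while giving no reliable signal about any individual visitor.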

Understanding Moments in Videos

The developers at Google have introduced a new AI-driven approach that understands the deep semantics of a video and automatically identifies key moments. The feature tags moments in the video, which users can then navigate like chapters in a book.

Raghavan stated in the blog post, “We’ve started testing this technology this year, and by the end of 2020, we expect that 10% of searches on Google will use this new technology.”
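The underlying video model is not public; the sketch below only illustrates the user-facing idea, turning hypothetical per-segment "key moment" scores and labels into chapter-like markers:

```python
def to_timestamp(seconds):
    """Format a second offset as MM:SS for display."""
    m, s = divmod(seconds, 60)
    return f"{m:02d}:{s:02d}"

# Hypothetical (start_second, score, label) triples from a video-understanding model
segments = [
    (0,   0.91, "Introduction"),
    (45,  0.30, "B-roll"),
    (120, 0.88, "Step 1: prepare ingredients"),
    (300, 0.85, "Step 2: bake"),
]

THRESHOLD = 0.8  # keep only high-confidence key moments
chapters = [(to_timestamp(t), label) for t, score, label in segments if score >= THRESHOLD]
for ts, label in chapters:
    print(ts, label)
```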

Deepening Understanding Through Data

Google has been working on the Data Commons Project for two years now. To provide the best search results and make sense of such a large body of data, the developers at the tech giant leverage natural language processing techniques.

NLP helps map a search query to one specific set among the billions of data points in Data Commons, so that the right statistics can be presented in a visual, easy-to-understand format.
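For illustration, here is how a single statistic can be fetched once a query such as "population of California" has been mapped to a place and a statistical variable, using the public datacommons Python client. The NLP mapping step itself is Google-internal, and depending on the client version you may also need to set an API key first:

```python
import datacommons as dc  # pip install datacommons

# The place and variable IDs below are real Data Commons identifiers;
# some client versions require dc.set_api_key(...) before querying.
place = "geoId/06"         # California
stat_var = "Count_Person"  # total population

value = dc.get_stat_value(place, stat_var)
print(f"Population of California: {value}")
```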

As part of Journalist Studio, the tech giant has introduced a new suite of tools to help reporters and journalists do their work more efficiently and securely. This year, the company launched Pinpoint, a new tool that brings the power of Google Search to journalists.

Pinpoint helps reporters quickly sift through thousands of documents by automatically identifying and organising the most frequently mentioned people, organisations and locations.
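Pinpoint's internals are not public, but the underlying idea — counting the most frequently mentioned people, organisations and locations across a set of documents — can be sketched with off-the-shelf named-entity recognition, here using spaCy:

```python
from collections import Counter
import spacy  # pip install spacy && python -m spacy download en_core_web_sm

# Off-the-shelf NER as a stand-in for Pinpoint's document analysis.
nlp = spacy.load("en_core_web_sm")

documents = [
    "Sundar Pichai announced the feature at Google headquarters in Mountain View.",
    "Google also partnered with newsrooms in New York.",
]

counts = Counter()
for doc in nlp.pipe(documents):
    for ent in doc.ents:
        if ent.label_ in {"PERSON", "ORG", "GPE"}:  # people, organisations, locations
            counts[(ent.text, ent.label_)] += 1

for (name, label), n in counts.most_common(5):
    print(f"{name} ({label}): mentioned {n} time(s)")
```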

Explore Information in 3D Visuals

As part of the Search On event, the developers also announced new ways to use Google Lens and augmented reality (AR) while learning and shopping. Google Lens can now recognise 15 billion things, helping users identify plants, animals, landmarks and more. Lens can also translate text in more than 100 languages, including Spanish and Arabic.

Aparna Chennapragada, VP at Google, stated in a blog post, “Another area where the camera can be helpful is shopping, especially when what you’re looking for is hard to describe in words.” 

Lens uses Style Engine technology, which combines the largest database of products with millions of style images. The algorithm then matches patterns to understand concepts like ruffle sleeves or vintage denim.
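Style Engine itself is not documented in detail, but "matching patterns" in visual search is typically done by embedding images as vectors and ranking catalogue items by similarity. A toy sketch, with made-up four-dimensional embeddings standing in for real vision-model outputs:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical catalogue embeddings (a real system would use a vision model)
catalogue = {
    "ruffle-sleeve blouse": np.array([0.9, 0.1, 0.3, 0.0]),
    "vintage denim jacket": np.array([0.1, 0.8, 0.2, 0.4]),
    "plain white tee":      np.array([0.2, 0.2, 0.9, 0.1]),
}

query_embedding = np.array([0.85, 0.15, 0.25, 0.05])  # embedding of the user's photo

ranked = sorted(catalogue.items(), key=lambda kv: cosine(query_embedding, kv[1]), reverse=True)
print("closest style match:", ranked[0][0])
```

This is the generic retrieval pattern behind "hard to describe in words" queries: the photo, not a text string, is what gets matched.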

Watch the event here.


Ambika Choudhury

A Technical Journalist who loves writing about Machine Learning and Artificial Intelligence. A lover of music, writing and learning something out of the box.