When Sundar Pichai took centre stage at Google I/O 2017, the annual developer conference, his mantra was to reinforce Google as an AI-first company. It was a task he performed with aplomb, cheered on by over 7,000 people at the sunny Mountain View venue. The CEO’s keynote address saw several breakthrough products and updates to core tools and products used by millions of people around the world. Pichai, who has been at Google for the last 13 years and has overseen the search giant’s shift from web to voice and now to visual computing, reiterated that the company’s core mission is to organize the world’s information. “We approach it by applying deep computer science and technical insights to solve problems at scale,” he said to an ecstatic crowd.
Google’s success is backed by resounding numbers: Android has crossed two billion active devices, Google Photos has over 500 million active users who upload 1.2 billion photos every day, and Google Drive has over 800 million monthly active users. “This is all because of the growth of mobile and smartphones, but computing is evolving again. The most important shift in computing is going from a mobile-first to an AI-first approach,” Pichai said at the keynote.
What is behind the computational shift is the way users interact with products. “Mobile made us reimagine every product we were working on. We had to take into account that the user interaction model had fundamentally changed, with multi-touch, location, identity, payments and so on,” he said. Just as mobile brought about a computational shift, Google is rethinking its products in an AI-first world. “We are doing it across every one of our products. Today, if you use Google Search, we rank differently using machine learning,” he said.
Analytics India Magazine rounds up the key highlights and announcements from the AI/ML world:
ML in everyday Google products: The biggest shift will soon be seen in everyday Google products such as Gmail, used by a billion users, which will soon get a Smart Reply feature. Introduced in Allo last year, where it received a great reception, the machine learning system learns to be conversational and can reply intuitively. Street View in Google Maps automatically recognizes restaurant and street signs through machine learning, and Google’s video calling app Duo uses machine learning to maintain video quality in low-bandwidth situations.
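The core idea behind a Smart Reply-style feature, ranking a set of possible responses by how well they fit the incoming message, can be sketched in a few lines. This toy version scores canned replies by word overlap; Google's production system uses trained sequence models, and both the reply set and the scoring below are invented purely for illustration.

```python
import re

# Canned responses a Smart Reply-style system might choose from
# (invented for this sketch; not Google's actual reply set).
CANNED_REPLIES = [
    "Sounds good, see you then!",
    "Sorry, I can't make it.",
    "Can we reschedule the meeting?",
    "Thanks for the update.",
]

def tokens(text):
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def suggest_replies(message, replies=CANNED_REPLIES, k=2):
    """Return the k replies sharing the most words with the message."""
    msg = tokens(message)
    return sorted(replies, key=lambda r: len(msg & tokens(r)), reverse=True)[:k]

# The reschedule reply ranks first because it overlaps on the most words.
print(suggest_replies("Are you free for the meeting, or should we reschedule?"))
```

A real system replaces the overlap score with a learned model of which replies people actually send, which is what lets it "learn to be conversational."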
Google Lens – going deeper in computer vision: The debut of Lens establishes Google’s heft in computer vision, and Google’s computer vision systems now perform image recognition tasks better than humans. “You take a low-light picture which is noisy and we can automatically make it clearer for you. Coming very soon, you can remove an obstruction from an image and have a clear picture of what matters to you,” announced Pichai to the cheering crowd.
Google Lens is essentially a vision-based computing capability that can understand what you are looking at and help you take action based on that information. It will ship first in Google Assistant and Photos, and will gradually be introduced in other products. Citing an example of how Google Lens works, Pichai explained, “For example, if you want to know what flower it is, invoke Google Lens from your Assistant, point it at the flower and we can tell you what flower it is. Users can point it at a restaurant or salon and it will give you the right information in a meaningful way.”
AI-first data centres: In keeping with the shift towards AI/ML, Google has reimagined its computational architecture and designed data centres from the ground up that are essentially AI-first. Last year’s launch of Tensor Processing Units, custom hardware for machine learning, was a step in that direction. “They are about 15-30 times faster and 30-80 times more power efficient than CPUs and GPUs. We use TPUs across all our products, every time you do a search, every time you speak to Google. In fact, TPUs are what powered AlphaGo in its historic match against Lee Sedol,” Pichai said.
Cloud TPUs – next-gen machine learning hardware: Cloud TPUs are the next generation of TPUs. Machine learning has two components: training a neural net, which is computationally intensive, and inference, where the trained net is put to work; last year’s TPU software was optimized for inference. “Each one of our machine translation models trains on over 3 billion words for a week on about 100 GPUs,” he said, emphasizing the computational power required in machine learning.
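The training/inference split Pichai describes shows up even in the smallest possible model. The sketch below is purely illustrative and unrelated to Google's translation models: it trains a single-neuron perceptron on the AND function, where training loops over the data many times and updates weights (the expensive part), while inference is one cheap forward pass.

```python
def predict(w, b, x):
    """Inference: a single dot product and threshold, no weight updates."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 if s > 0 else 0.0

def train(samples, labels, epochs=100, lr=0.1):
    """Training: repeated passes over the data, nudging weights on each error."""
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):                      # many passes: the costly part
        for x, y in zip(samples, labels):
            err = predict(w, b, x) - y           # perceptron update rule
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Learn the AND function from four examples.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train(X, y)
print([predict(w, b, x) for x in X])  # → [0.0, 0.0, 0.0, 1.0]
```

Scale the inner loop up to billions of words and millions of weights and you get the week-long, 100-GPU training runs in the quote, while serving a translation remains a comparatively cheap forward pass.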
Cloud TPU, the best-in-class hardware for machine learning, is available on Google Compute Engine. The move makes it clear that Google is putting all its might behind the Google Cloud Platform, dubbed the best cloud for machine learning. The search giant is also flexing its muscle in hardware, with Cloud TPUs an important advance in technical infrastructure for the AI world. “We want to provide our customers with a wide range of hardware: CPUs, GPUs and Cloud TPUs. This lays the foundation for significant progress,” emphasized Pichai.
The next-gen TPUs are optimized for both training and inference. Pointing at a picture of a Cloud TPU, Pichai noted it has four chips, and each board is capable of 180 trillion floating point operations per second. It is designed for data centres and can be stacked into one big supercomputer called a TPU pod. “You can stack up to 64 in one big supercomputer, and we named it Cloud TPU because we are bringing it through the Google Cloud Platform,” he said.
Google.ai: Google is clubbing its AI efforts under Google.ai, which aims to bring the benefits of AI to a larger community. It is a culmination of efforts and teams across the company focused on bringing the benefits of AI to everyone. “Google.ai will focus on three areas: state-of-the-art research, tools and infrastructure like TensorFlow and Cloud TPUs, and applied AI,” noted Pichai.
AutoML – getting neural nets to design better neural nets: Designing better machine learning models is exciting, but it is also a painstaking effort by a few engineers and scientists, Pichai revealed. “We want it to be possible for hundreds of thousands of developers to use machine learning. What better way to do this than to get neural nets to design better neural nets,” he shared. This approach has been dubbed AutoML. “It applies a reinforcement learning approach to a set of candidate neural nets, think of these as baby neural nets, and we actually use a neural net to iterate through them till we arrive at the best neural net,” said Pichai. TPUs have put this computationally hard task in the realm of possibility, and through this approach Google has already applied it to standard tasks such as image recognition.
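The loop Pichai outlines, generate candidate "baby" networks, score each one, keep the best, can be sketched very roughly. Google's controller is itself a neural net trained with reinforcement learning; plain random search stands in for it here, and the validation-accuracy function is entirely made up, so this only shows the shape of the search, not the actual method.

```python
import random

random.seed(0)  # deterministic for the example

def mock_validation_accuracy(layers, width):
    """Hypothetical stand-in for training a candidate net and measuring it.
    Pretends accuracy peaks around 4 layers of width 64, with some noise."""
    return (0.9 - 0.01 * abs(layers - 4)
                - 0.0005 * abs(width - 64)
                + random.uniform(-0.02, 0.02))

def search(trials=50):
    """Sample candidate architectures, evaluate each, keep the best one."""
    best, best_score = None, float("-inf")
    for _ in range(trials):
        candidate = {"layers": random.randint(1, 8),
                     "width": random.choice([16, 32, 64, 128])}
        score = mock_validation_accuracy(**candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

best, score = search()
print(best, round(score, 3))
```

In the real system, each evaluation means actually training a candidate network, which is why Pichai credits TPUs with making the search computationally feasible.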
Bringing machine learning to breast cancer detection and DNA sequencing: The company has already published a paper on diabetic retinopathy in the Journal of the American Medical Association this year. Now, it is applying machine learning techniques to the complex field of pathology. “If you take an area like breast cancer diagnosis, even amongst highly trained pathologists, agreement on some forms of breast cancer can be as low as 48%. That is because each pathologist is reviewing the equivalent of thousands of 10-megapixel images for every case. This is a large data problem,” Pichai explained.
He revealed Google has trained neural nets to detect cancer spreading to adjacent lymph nodes. “It is early days, but neural nets show a much higher degree of accuracy, at 89% compared to 73% for previous methods. There are important caveats: we do have higher false positives, but we are already putting it in the hands of pathologists, who can improve diagnosis,” he said.
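The trade-off in that caveat, a model can raise overall accuracy while also raising false positives, is easy to make concrete. The snippet below computes both metrics from labels and predictions; the ten-case dataset is invented for illustration and has nothing to do with Google's actual results.

```python
def accuracy(labels, preds):
    """Fraction of cases where the prediction matches the label."""
    return sum(l == p for l, p in zip(labels, preds)) / len(labels)

def false_positive_rate(labels, preds):
    """Fraction of truly negative cases flagged as positive."""
    negatives = [p for l, p in zip(labels, preds) if l == 0]
    return sum(p == 1 for p in negatives) / len(negatives)

labels = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # 1 = cancer present (invented data)
old    = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]   # misses cases, but no false alarms
new    = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]   # catches every case, one false alarm

print(accuracy(labels, old), false_positive_rate(labels, old))  # → 0.8 0.0
print(accuracy(labels, new), round(false_positive_rate(labels, new), 2))  # → 0.9 0.17
```

The "new" model is more accurate overall yet flags a healthy case, which is why Pichai frames the system as a tool in pathologists' hands rather than a replacement for them.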
It is also being applied across the basic sciences. In biology, neural nets are being trained to improve the accuracy of DNA sequencing. A new Google.ai tool identifies genetic variants more accurately than state-of-the-art methods. “Reducing errors is an important application; we can more accurately identify whether or not a patient has a genetic disease, which can help with better diagnosis and treatment,” he said.
Machine learning can predict the properties of molecules: Machine learning is being applied to chemistry as well. Pichai’s premise is that it takes a significant amount of computing resources to hunt for new molecules. “We think we can accelerate timelines by orders of magnitude. This opens up possibilities in drug discovery or material sciences. I am entirely confident one day AI will invent new molecules that behave in predefined ways,” he said.
AutoDraw – turning doodles into art: This web-based machine learning tool turns doodles into artwork and is a spinoff of Quick, Draw!. Cranked out of the AI Experiments lab, it makes drawing more fun and accessible across mobile, tablet and computer. “It is a simple tool which can help people draw. Just like today you type in Google and we give you suggestions, we can do the same when you are trying to draw,” said Pichai.