Google revolutionised the way the world uses the internet with its landmark PageRank algorithm. Today, after two decades, Google has grown into an AI powerhouse that generates state-of-the-art algorithms that touch almost every domain known to mankind.
As Google turns 21, we have compiled a list of 21 notable contributions from Google that have enriched the machine learning community across the globe.
TensorFlow, the core open source library for developing and training ML models, was created by the team at Google Brain.
TensorFlow’s machine learning platform has a comprehensive, flexible ecosystem of tools, libraries and community resources that lets researchers push the state of the art in ML and lets developers easily build and deploy ML-powered applications.
At the recently concluded TensorFlow Developer Summit, along with TensorFlow 2.0, Google also announced the open sourcing of TensorFlow Lite for mobile devices and two development boards, SparkFun and Coral, which are based on TensorFlow Lite, for performing machine learning tasks on handheld devices like smartphones.
With TensorFlow Lite, Google looks to make smartphones the next best choice for running machine learning models.
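As a flavour of what developing and training a model with TensorFlow looks like, here is a minimal sketch using the Keras API on synthetic data (this assumes TensorFlow 2.x is installed; the data and model are illustrative, not from any Google example):

```python
import numpy as np
import tensorflow as tf

# Synthetic regression data: y = 3x + 2 with a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(256, 1)).astype("float32")
y = 3.0 * x + 2.0 + rng.normal(0, 0.05, size=(256, 1)).astype("float32")

# Define, compile, and train a one-layer model.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer="sgd", loss="mse")
model.fit(x, y, epochs=200, verbose=0)

# The learned weight and bias should approach 3 and 2.
weight, bias = model.layers[0].get_weights()
print(float(weight[0][0]), float(bias[0]))
```

The same model, once trained, can be converted with the TensorFlow Lite converter for deployment on mobile devices.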
Breakthroughs With DeepMind
Google joined forces with DeepMind in 2014 and together they were responsible for major breakthroughs. Their programs have learned to diagnose eye diseases as effectively as the world’s top doctors, to save 30% of the energy used to keep data centres cool, and to predict the complex 3D shapes of proteins — which could one day transform how drugs are invented.
NLP With BERT
Bidirectional Encoder Representations from Transformers, or BERT, which was open sourced last year, offers new ground for tackling the intricacies involved in building language models.
Pre-training with a binarised next-sentence prediction task, alongside masked language modelling, helps the model on common NLP tasks like question answering and natural language inference.
BERT can be fine-tuned into a question answering model in under 30 minutes. Given the number of steps BERT operates on, this is quite remarkable. It was made possible by Google’s custom-built Cloud TPUs, which accelerate dense matrix multiplications and convolutions and minimise the time-to-accuracy when training large models.
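To make the masked language modelling idea concrete, here is a toy sketch of how BERT-style pre-training inputs are prepared: roughly 15% of tokens are selected, and of those, 80% become a `[MASK]` token, 10% are swapped for a random token, and 10% are left unchanged. (This is illustrative pure Python; real BERT operates on WordPiece sub-tokens, and the tiny vocabulary here is made up.)

```python
import random

VOCAB = ["the", "cat", "sat", "on", "mat", "dog", "ran"]

def mask_tokens(tokens, mask_rate=0.15, seed=0):
    rng = random.Random(seed)
    masked, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            labels[i] = tok  # the model must predict the original token here
            r = rng.random()
            if r < 0.8:
                masked[i] = "[MASK]"          # 80%: replace with [MASK]
            elif r < 0.9:
                masked[i] = rng.choice(VOCAB)  # 10%: replace with a random token
            # else: 10%: keep the original token unchanged
    return masked, labels

tokens = "the cat sat on the mat".split()
masked, labels = mask_tokens(tokens)
print(masked)
print(labels)
```

The model is then trained to recover the original tokens at the selected positions, forcing it to use context from both directions.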
Along with innovation in its own backyard, Google is also backing NLP startups like Armorblox. A cybersecurity startup, Armorblox aims to tackle data leaks via online attacks like email spear phishing.
AI researchers from Google, Columbia University and MIT have taught robots a new skill: tossing things. Their TossingBot can pick up objects of different shapes and sizes and gently throw them into a target location, like a fruit into a basket or a banana peel into a trashcan.
The joints of a robot can have only so many degrees of freedom. And, to achieve skills like tossing, the synergies between grasping and throwing have to be figured out.
This integration of physics with deep learning enables faster learning in changing environments.
The researchers named this symbiosis Residual Physics. As the name suggests, the bot is first trained using the laws of projectile ballistics, and a learned residual can then be leveraged to estimate by how much a throw would miss the target and correct for it.
The success of this experiment indicates that a machine can learn object-level semantics from its interactions with the physical world; in other words, it learns in a more human-like way.
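The Residual Physics idea can be sketched in a few lines: start from an analytical ballistics estimate of the release velocity, then add a learned correction on top. In this sketch the "learned" residual is a stub standing in for a neural network's output, and the throw angle and correction factor are hypothetical:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def ballistic_velocity(distance, angle_deg=45.0):
    """Ideal release speed for a projectile to travel `distance` metres
    on flat ground, from the standard range formula."""
    angle = math.radians(angle_deg)
    return math.sqrt(G * distance / math.sin(2 * angle))

def learned_residual(distance):
    """Stand-in for a learned model: would correct for drag, grasp pose,
    release timing, etc. (hypothetical fixed correction here)."""
    return 0.05 * distance

def throw_velocity(distance):
    # Physics prior + learned residual, the core of "Residual Physics".
    return ballistic_velocity(distance) + learned_residual(distance)

print(throw_velocity(2.0))  # release speed for a 2 m toss
```

Because the physics prior already gets the throw roughly right, the learned component only has to model the small residual, which makes training far more sample-efficient.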
Google Cloud TPUs For Faster ML
Cloud TPU is designed to run cutting-edge machine learning models with AI services on Google Cloud. Its custom high-speed network offers over 100 petaflops of performance in a single pod — enough computational power to transform a business or create the next research breakthrough.
With machine learning currently the hottest choice among enterprises, companies like Google, which offer their technology as cloud services, are leaving no stone unturned to notch up their infrastructure to meet the demands of the future. Their Cloud TPUs stand as testimony to these efforts.
Interpretability With Activation Atlases
Google, in collaboration with OpenAI, came up with a new technique aimed at visualising how the neurons of a network interact with each other and how concepts mature with the depth of the layers. Their work was published as a paper titled “Exploring Neural Networks with Activation Atlases”.
A novel attempt at improving machine learning interpretability, activation atlases offer a look at the inner workings of convolutional vision networks and derive a human-interpretable overview of the concepts within the hidden layers of a network.
Google Lens was introduced a couple of years ago by Google in a move to spearhead the ‘AI first’ products movement. Lens uses computer vision, machine learning and Google’s Knowledge Graph to let people turn the things they see in the real world into a visual search box.
As a smartphone camera-based tool, Google Lens has great potential for helping people who struggle with reading and other language-based challenges.
Now anyone can point a phone at text, and hear that text spoken out loud. This new feature, along with its availability through Google Go, is just one way to help more people understand the world around them.
Google has been applying its machine learning prowess to social good initiatives across the world.
In India especially, Google has been doing tremendous work by leveraging all the AI capabilities at its disposal. Last year, Google rolled out its early flood warning services, starting with the Patna region.
Google’s approach involves incorporating multidisciplinary techniques. They range from gathering data regarding the topography of a location to using equations of fluid dynamics.
Traffic Monitoring With Google Maps
Google Maps launched live traffic delays for buses in places where there is no real-time information from local transit agencies.
The machine learning developers at Google extracted training data from sequences of bus positions over time, as received from transit agencies’ real-time feeds. These inputs are aligned with the car traffic speeds on the bus’s path during the trip.
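The alignment described above can be sketched simply: estimate the bus's travel time over its route segments from current car traffic speeds, then compare it to the schedule to forecast a delay. (A toy illustration, not Google's model; the segment lengths, speeds, and bus slowdown factor are hypothetical.)

```python
def forecast_delay(segments, scheduled_minutes, bus_slowdown=1.3):
    """segments: list of (length_km, car_speed_kmh) pairs along the route.
    Buses are assumed `bus_slowdown` times slower than cars, to account
    for stops and boarding (hypothetical factor)."""
    travel_hours = sum(length / speed for length, speed in segments)
    predicted_minutes = travel_hours * 60 * bus_slowdown
    return predicted_minutes - scheduled_minutes

# Three route segments: (length in km, current car speed in km/h).
route = [(2.0, 30.0), (1.5, 15.0), (3.0, 40.0)]
delay = forecast_delay(route, scheduled_minutes=15.0)
print(round(delay, 1))  # forecast delay in minutes
```

In the real system, a learned model replaces the fixed slowdown factor, fitted on historical bus positions aligned with car speeds.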
Solving Partial Differential Equations Faster
What’s common with predicting climate change and simulating nuclear fusion reactors? These tasks are modelled on a system of very famous mathematical equations — partial differential equations (PDE).
Solving computation-hungry PDEs takes a toll even on supercomputers. And we can no longer rely on hardware improvements (shrinking transistors) to reduce the time consumed, as Moore’s law slows down.
In the paper titled “Learning Data Driven Discretizations for Partial Differential Equations”, researchers at Google explore a potential path for how machine learning can offer continued improvements in high-performance computing, starting with solving PDEs.
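The paper learns better, data-driven discretizations; for context, here is the kind of standard finite-difference discretization such methods aim to improve, applied to the 1D heat equation u_t = u_xx on a periodic grid (pure Python, toy grid and step sizes):

```python
def heat_step(u, dx, dt):
    """One explicit finite-difference step for u_t = u_xx.
    Second derivative via the classic 3-point stencil, periodic boundary."""
    n = len(u)
    return [u[i] + dt / dx**2 * (u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n])
            for i in range(n)]

# Initial condition: a single hot cell in the middle of the domain.
u = [0.0] * 10
u[5] = 1.0
for _ in range(100):
    u = heat_step(u, dx=1.0, dt=0.2)  # dt/dx^2 <= 0.5 keeps the scheme stable

print(sum(u))  # diffusion spreads the heat but conserves the total
```

The learned discretizations in the paper replace the fixed stencil coefficients with coefficients predicted by a neural network, which lets the solver stay accurate on much coarser grids.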
Digital Pathology With SMILY
A group of researchers at Google’s AI division teamed up to introduce machine learning tools and methods to propel the adoption of digital pathology. With SMILY (Similar image search for histopathology), clinicians can now examine the data received as images on a computer and draw insights from it, making the whole process of diagnosis considerably easier.
Google introduced TensorNetwork, an open source library for ease of computation in advanced domains like particle physics. This library was developed in collaboration with the Perimeter Institute for Theoretical Physics and X, a company that tackles the world’s hardest problems.
TensorNetwork uses TensorFlow as a backend and is optimized for GPU processing, which can enable speedups of up to 100x when compared to work on a CPU.
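The core operation in a tensor network is contracting tensors along their shared indices. Here is a plain NumPy illustration of that idea (this is not the TensorNetwork API itself, just the underlying operation it accelerates): contracting a small chain of three tensors in one call.

```python
import numpy as np

a = np.ones((2, 3))  # indices (i, j)
b = np.ones((3, 4))  # indices (j, k) -- j is shared with `a`
c = np.ones((4, 2))  # indices (k, l) -- k is shared with `b`

# Contract the shared indices j and k; only the open indices i, l remain.
result = np.einsum("ij,jk,kl->il", a, b, c)
print(result.shape, result[0, 0])
```

Libraries like TensorNetwork organise exactly these contractions as graphs of nodes and edges, and run them on a TensorFlow backend so large networks can be contracted efficiently on GPUs.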
Parallelism With GPipe
Researchers at Google Brain introduced a new machine learning library called GPipe.
GPipe can be used to partition a model across different accelerators and to automatically split a mini-batch of training examples into micro-batches. Pipelining these micro-batches allows the accelerators to operate in parallel.
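The micro-batch splitting step can be sketched in pure Python (the real library additionally pipelines each micro-batch through the model partitions placed on different accelerators):

```python
def split_minibatch(batch, num_micro_batches):
    """Split a mini-batch into roughly equal micro-batches, distributing
    any remainder one example at a time over the first few splits."""
    size, rem = divmod(len(batch), num_micro_batches)
    micro, start = [], 0
    for i in range(num_micro_batches):
        end = start + size + (1 if i < rem else 0)
        micro.append(batch[start:end])
        start = end
    return micro

batch = list(range(10))
print(split_minibatch(batch, 4))  # [[0, 1, 2], [3, 4, 5], [6, 7], [8, 9]]
```

While accelerator 1 works on micro-batch 2's first layers, accelerator 2 can already process micro-batch 1's later layers, which is what keeps the pipeline busy.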
Coral Dev Board
In March this year, Google launched the Coral Dev Board, a lightweight single-board computer outfitted with the Edge TPU, a small ASIC that provides high-performance ML inferencing for low-power devices. The Coral Dev Board can execute state-of-the-art mobile vision models such as MobileNet V2 at 100+ fps in a power-efficient manner. Given the focus on computer vision use cases, the board makes prototyping computer vision applications easier with a camera that connects to it over a MIPI interface.
By leveraging the power of AutoML to customise EfficientNets for Edge TPU, developers can achieve state-of-the-art accuracy in image classification tasks and at the same time reduce the model size and computational complexity. In short, as one ML researcher puts it — AutoML + EdgeTPU + Model optimisation leads to better latency and accuracy.
The room to refine neural networks still exists, as they sometimes fumble and end up using brute force for lightweight tasks. To address this, researchers at Google have come up with MorphNet.
AI Assisted Super Resolution Photos
With the Pixel, Google manages to bring technology once restricted to astronomy labs into the palm of the hand by innovating on standard machine learning methods.
What makes the Pixel stand out is its ability to achieve all this with a single rear camera, while its contemporaries trade off design and resolution to accommodate dual lenses.
Google uses RAISR (Rapid and Accurate Image Super-Resolution) for its image sharpening and contrast enhancement in their flagship model, Pixel.
While digital zooming deploys state-of-the-art algorithms to fill in the picture meaningfully, federated learning pushes the boundary further by sharing model updates across devices with sophisticated anonymity.
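The heart of federated learning is federated averaging: each device trains locally, and only weighted model updates, never raw data, are sent to the server and averaged. A toy sketch (the client weight vectors and example counts here are made up):

```python
def federated_average(client_weights, client_sizes):
    """Average model weight vectors, weighted by each client's
    number of local training examples."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Hypothetical local models from three devices (weight vectors of length 2).
clients = [[1.0, 0.0], [2.0, 1.0], [3.0, 2.0]]
sizes = [10, 30, 60]  # training examples seen on each device

global_model = federated_average(clients, sizes)
print(global_model)
```

The averaged global model is then sent back to the devices for the next round of local training, so personal photos and keystrokes never leave the phone.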
Google Dataset Search is a new tool for finding public datasets from all over the web. Over the years, Google has also curated and released many new, novel datasets, including everything from millions of annotated images and videos, to a crowd-sourced Bengali dataset for speech recognition, to robot arm grasping datasets and more.
At Cloud Next ’19, Google launched its very own AI Platform. This platform aims to make life easier for machine learning developers, data scientists and data engineers.
Colaboratory is a research tool for machine learning education and research. It’s a Jupyter notebook environment that requires no setup to use.
Fellowship Programmes For ML Aspirants
Google hosts an annual Google Ph.D. Fellowship Summit, where people are exposed to state-of-the-art research being pursued at Google and given the opportunity to network with Google’s researchers as well as other PhD Fellows from around the world.
Complementing this fellowship program is the Google AI Residency, a way of allowing people who want to learn to conduct deep learning research to spend a year working alongside and being mentored by researchers at Google. Now in its third year, residents are embedded in various teams across Google’s global offices, pursuing research in areas such as machine learning, perception, algorithms and optimization, language understanding, healthcare and much more.