Google users contribute more than 20 million pieces of information on Maps every day – that’s more than 200 contributions every second. Traffic is uncertain, and new roads and buildings are being built all the time, which can throw off the algorithms predicting the best ETA. Though Google Maps gets its ETA right most of the time, there is still room for improvement.
Researchers at Alphabet-owned DeepMind have partnered with the Google Maps team to improve the accuracy of the real-time ETAs by up to 50% in places like Berlin, Jakarta, São Paulo, Sydney, Tokyo, and Washington D.C. They are doing so by using advanced machine learning techniques, including Graph Neural Networks.
How DeepMind Worked Out A Plan
The Google Maps traffic prediction system consists of:
- a route analyser that processes terabytes of traffic information to construct Supersegments and
- a novel Graph Neural Network model, which is optimised with multiple objectives and predicts the travel time for each Supersegment.
Road networks are divided into Supersegments consisting of multiple adjacent segments of road that share significant traffic volume.
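The grouping described above can be sketched as a simple greedy merge: adjacent segments are joined into one Supersegment whenever they share enough traffic. The segment IDs, connectivity, and volume threshold below are illustrative assumptions, not Google's actual pipeline.

```python
# Hypothetical sketch: grouping adjacent road segments into "Supersegments".
# Segment IDs, connectivity, and the traffic-volume threshold are invented
# for illustration; the production system is far more sophisticated.

def build_supersegments(segments, adjacency, traffic, min_shared_volume):
    """Greedily merge adjacent segments whose shared traffic exceeds a threshold."""
    supersegments = []
    visited = set()
    for seg in segments:
        if seg in visited:
            continue
        group, frontier = [seg], [seg]
        visited.add(seg)
        while frontier:
            current = frontier.pop()
            for neighbour in adjacency.get(current, []):
                if neighbour in visited:
                    continue
                # Merge only if the two segments share significant traffic volume.
                if traffic.get((current, neighbour), 0) >= min_shared_volume:
                    visited.add(neighbour)
                    group.append(neighbour)
                    frontier.append(neighbour)
        supersegments.append(group)
    return supersegments

segments = ["A", "B", "C", "D"]
adjacency = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
traffic = {("A", "B"): 900, ("B", "C"): 850, ("C", "D"): 40}

# A, B and C share heavy traffic and merge into one Supersegment; D stays alone.
print(build_supersegments(segments, adjacency, traffic, min_shared_volume=500))
```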
Google calculates ETAs for its Maps services by analysing live traffic data for road segments around the world. However, it doesn’t account for the traffic a driver can expect to see 10 minutes into their drive.
To address this, DeepMind researchers used Graph Neural Networks, a type of machine learning architecture for spatiotemporal reasoning. This architecture incorporated relational learning biases to model the connectivity structure of real-world road networks.
Initially, they trained a separate fully connected neural network model for every Supersegment. Deploying this at scale would have required training millions of such models – an infrastructural overkill. Instead, the researchers turned to Graph Neural Networks. “In modelling traffic, we’re interested in how cars flow through a network of roads, and Graph Neural Networks can model network dynamics and information propagation,” stated the company.
From a Graph Neural Network’s perspective, Supersegments are road subgraphs, sampled at random in proportion to traffic density. A single model can therefore be trained on these sampled subgraphs and deployed at scale.
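One round of the message passing at the heart of a Graph Neural Network can be sketched with plain NumPy. The graph, features, and weights below are illustrative assumptions; DeepMind's production model is far richer (edge features, multiple prediction horizons, and so on).

```python
import numpy as np

# Minimal sketch of one message-passing step on a sampled road subgraph.
# All shapes, features, and weights here are invented for illustration.

rng = np.random.default_rng(0)

# Adjacency of a 4-segment Supersegment: entry (i, j) = 1 if segments connect.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

X = rng.normal(size=(4, 8))        # per-segment features (e.g. speeds, lengths)
W_self = rng.normal(size=(8, 8))   # transform for a segment's own state
W_nbr = rng.normal(size=(8, 8))    # transform for aggregated neighbour messages

def message_passing_step(A, X):
    """Each segment aggregates its neighbours' states, then updates its own."""
    messages = A @ X               # sum incoming neighbour features
    return np.tanh(X @ W_self + messages @ W_nbr)

H = message_passing_step(A, X)     # updated segment embeddings
eta_per_segment = H.sum(axis=1)    # toy readout: one scalar per segment
print(eta_per_segment.shape)       # one prediction per road segment
```

Because the weight matrices are independent of the graph's size and shape, the same model can be applied to any sampled subgraph, which is what makes deployment at scale feasible.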
Graph Neural Networks extend the learning biases imposed by Convolutional Neural Networks and Recurrent Neural Networks by generalising the concept of “proximity”, allowing them to handle traffic on adjacent and intersecting roads as well.
This ability of Graph Neural Networks to generalise over combinatorial spaces is what grants the modelling technique its power.
“…discarded as poor initialisations in more academic settings, these small inconsistencies can have a large impact when added together across millions of users.”
The researchers discovered that Graph Neural Networks are particularly sensitive to changes in the training curriculum. To tackle this variability in graph structures, they turned to a novel technique from reinforcement learning research. Regarding the plasticity of the network, i.e., how open it is to new information, researchers usually decay the learning rate over time; optimising learning rate schedules is essential when operating at the scale of Google Maps.
DeepMind implemented the MetaGradients technique to adapt the learning rate during training. In this way, the system learnt its own learning rate schedule.
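The core idea of MetaGradients is to treat the learning rate itself as a parameter and nudge it in the direction that most improves the loss after an update step. The toy quadratic objective and update rules below are illustrative assumptions, not DeepMind's implementation.

```python
# Toy sketch of the MetaGradients idea on a 1-D quadratic: the learning rate
# is adapted by the gradient of the post-update loss with respect to it.
# The objective and all constants are invented for illustration.

def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)

w, lr = 0.0, 0.01
meta_lr = 1e-4                           # step size for the learning rate itself

for _ in range(200):
    g = grad(w)
    w_new = w - lr * g                   # inner update with current learning rate
    # Meta-gradient: d loss(w_new) / d lr = grad(w_new) * d w_new / d lr
    meta_g = grad(w_new) * (-g)
    lr = max(lr - meta_lr * meta_g, 1e-5)  # adapt the learning rate
    w = w_new

print(round(w, 3), round(lr, 4))         # w approaches the optimum at 3.0
```

Here the learning rate grows while large steps still help and levels off as the optimum nears, i.e. the system learns its own schedule rather than following a fixed decay.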
The results show that the ETA inaccuracies have decreased significantly; sometimes by more than 50% in cities like Taichung.
Google Maps Keeps Getting Better
Google Maps turned 15 earlier this year, and its services keep getting better, thanks to state-of-the-art machine learning algorithms.
In countries like India, Google has launched services ranging from live bus timings to route maps. Even in densely populated neighbourhoods, the models were able to predict vehicle speeds accurately. Beyond forecasting trip durations, these models are also designed to capture unique properties of specific streets, neighbourhoods, and cities.
While the ultimate goal of the ML models is to reduce errors in travel estimates, the DeepMind researchers found that using a linear combination of multiple loss functions greatly increased the model’s ability to generalise. This was especially true of MetaGradients, a product of DeepMind’s in-depth research into reinforcement learning.
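A linear combination of loss functions, as mentioned above, can be sketched in a few lines. The particular losses (squared, absolute, and relative travel-time error) and their weights are illustrative assumptions, not the ones DeepMind used.

```python
import numpy as np

# Sketch of a weighted linear combination of loss functions for travel-time
# prediction. The choice of losses and weights is invented for illustration.

def combined_loss(pred, target, weights=(1.0, 0.5, 0.25)):
    """Weighted sum of an L2 loss, an L1 loss, and a relative-error loss."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    l2 = np.mean((pred - target) ** 2)       # penalises large outliers strongly
    l1 = np.mean(np.abs(pred - target))      # robust absolute error
    rel = np.mean(np.abs(pred - target) / np.maximum(target, 1.0))  # scale-free
    w2, w1, wr = weights
    return w2 * l2 + w1 * l1 + wr * rel

# Predicted vs. actual travel times (seconds) for two trips.
print(combined_loss([100.0, 210.0], [110.0, 200.0]))
```

Optimising several error measures at once tends to prevent the model from overfitting to any single notion of accuracy, which is one plausible reason a combined objective generalises better.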