A decade ago, industry stakeholders thought fully self-driving vehicles (SDVs) would become a reality within five years. It’s 2021, and autonomous vehicles have still not arrived at the scale many experts anticipated.
Five years ago, GM spent $581 million to acquire Cruise Automation. In 2017, GM chief Mary Barra wrote, “We expect to be the first high-volume auto manufacturer to build ‘fully-autonomous vehicles’ in a mass-production assembly plant.”
At the time, GM president Daniel Ammann said, “When you are working on the large-scale deployment of mission-critical safety systems, the mindset of ‘move fast and break things’ certainly does not cut it.”
In 2016, BMW announced a collaboration with Intel and Mobileye to develop autonomous cars, setting an ambitious goal of getting ‘highly and fully automated driving into series production by 2021.’ However, in 2019, BMW partnered with Daimler’s Mercedes-Benz to develop Level 4 self-driving vehicles, ready to roll by 2024.
Despite the numerous successes of machine learning, self-driving technology seems to be stuck in reverse gear.
In a paper, ‘Autonomy 2.0: Why is self-driving always five years away?’, researchers from Lyft detailed the history, composition, and development bottlenecks of the modern self-driving stack.
SDVs are complicated
Since the DARPA Grand Challenges in 2005-2007, self-driving vehicles have been an active research area and have made headlines on a regular basis. Many companies have been attempting to develop the first level 4+ self-driving vehicles for more than a decade.
According to Lyft, ‘after the DARPA challenges, most of the industry decomposed the SDV technology stack into HD mapping, localisation, perception, prediction, and planning. Following breakthroughs enabled by ImageNet, the perception and prediction parts started to become primarily machine-learned. However, simulation and behaviour planning are still largely rule-based.’
The team believes the slow progress arises from approaches that require too much hand-engineering, an over-reliance on road testing, and high fleet deployment costs. The study noted the classical stack has several bottlenecks that preclude the necessary scale needed to capture the long tail of rare events.
The researchers argued that progress in the self-driving industry is slow because of inefficient human-in-the-loop development, and proposed solving these issues by training a differentiable self-driving stack in a closed-loop simulation built from a large collection of human driving demonstrations (aka Autonomy 2.0).
SOTA autonomy stack (Autonomy 1.0) vs the proposed ML-first stack (Autonomy 2.0). (Source: arXiv)
The researchers believe Autonomy 2.0 unlocks the scalability required to master the long tail of rare events and to expand to new geographies, and that it calls for collecting large enough datasets.
However, it also comes with challenges. The critical hurdles to Autonomy 2.0, as highlighted by the researchers, include:
- Formulating the stack as an ‘end-to-end differentiable network’
- Collecting the large amounts of ‘human driving data’ required to train them
- Validating it offline in a ‘closed-loop’ with a machine-learning simulator
A typical Autonomy 1.0 stack consists of perception, prediction, and planning, which in turn answer the questions: what is around the car? What is likely to happen next? And what should the car do? Finally, the most important part of the development cycle, testing, answers the question: what is the performance of the system?
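The three stages above can be sketched as a chain of modules, each consuming the previous stage’s output. This is an illustrative toy, not the paper’s implementation: the stage names follow the article, but every class, threshold, and data field here is invented.

```python
# Toy sketch of the modular Autonomy 1.0 pipeline: perception -> prediction -> planning.
# All data structures and numbers are illustrative assumptions.

def perceive(sensor_frame):
    """What is around the car? Keep confident detections from raw sensor data."""
    return [obj for obj in sensor_frame["detections"] if obj["score"] > 0.5]

def predict(agents):
    """What is likely to happen next? Naively extrapolate each agent's motion."""
    horizon = 3.0  # seconds, illustrative
    return [{"id": a["id"], "future_x": a["x"] + a["vx"] * horizon} for a in agents]

def plan(predictions, ego_x=0.0):
    """What should the car do? A toy rule: brake if any agent ends up close ahead."""
    if any(0 < p["future_x"] - ego_x < 10 for p in predictions):
        return "brake"
    return "cruise"

frame = {"detections": [{"id": 1, "score": 0.9, "x": 20.0, "vx": -5.0},
                        {"id": 2, "score": 0.3, "x": 5.0, "vx": 0.0}]}
action = plan(predict(perceive(frame)))  # agent 1 closes to 5 m ahead -> "brake"
```

The chained interfaces make each module easy to develop and test in isolation, which is exactly why the industry converged on this decomposition after DARPA.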
Even though Autonomy 1.0 can perform well under normal conditions, attaining Level 4 and Level 5 production-grade performance requires scaling the paradigm to cover the long tail of rare events, such as road closures, accidents, and other agents breaking the road rules. Plus, the solution needs to scale to multiple cities with diverse agent behaviours.
According to Lyft, the bottlenecks holding Autonomy 1.0 back include:
- Trying to capture complex behaviours with rule-based systems
- Reliance on road-testing and low-realism offline simulation
- Limited fleet deployment scale
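The first bottleneck is easy to see in miniature. A hypothetical rule-based behaviour module (this snippet is illustrative, not from the paper) needs one hand-written branch per situation, so each newly observed long-tail event means another engineering cycle, and genuinely unforeseen events fall through to the default:

```python
# Illustrative only: why rule-based behaviour struggles with the long tail.
# Every rare event needs its own hand-written branch.
def rule_based_behaviour(scene):
    if scene.get("road_closed"):
        return "reroute"
    elif scene.get("agent_runs_red_light"):
        return "yield"
    elif scene.get("accident_ahead"):
        return "stop"
    # ...one branch per known rare event; unforeseen events hit the default
    return "follow_lane"

assert rule_based_behaviour({"road_closed": True}) == "reroute"
```

A learned system, by contrast, can in principle absorb new cases from data without a human enumerating them, which is the motivation for the ML-first stack below.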
Autonomy 2.0 is an ML-first approach to self-driving and a viable alternative to the current SOTA. It is based on three key principles:
- Closed-loop simulation, which is learned from the ‘collected real-world driving logs’
- Recasting the self-driving stack as an end-to-end differentiable neural network
- The data needed to train the ‘planner and simulator’ is collected at a large scale using commodity sensors
The approach is based on:
- A fully differentiable ‘AV stack’ trainable from human demonstrations
- Closed-loop data-driven reactive simulation
- Large-scale, low-cost data collections as ‘critical solutions’ towards scalability issues
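To make the first principle concrete, here is a deliberately tiny sketch of “fit the stack to human demonstrations by gradient descent.” Everything here is an assumption for illustration: the paper’s stack is a deep network trained in a learned closed-loop simulator, whereas this toy fits a single-parameter linear steering policy to invented logs, with the gradient of the imitation loss written out by hand instead of via automatic differentiation.

```python
# Minimal sketch of imitation learning from driving logs (illustrative only).
# Logged demonstrations: (lateral offset from lane centre in metres, human steering command)
logs = [(-1.0, 0.5), (-0.5, 0.25), (0.0, 0.0), (0.5, -0.25), (1.0, -0.5)]

w = 0.0   # single learnable parameter: steering = w * offset
lr = 0.1  # learning rate
for _ in range(200):
    # Gradient of the mean-squared imitation loss, derived by hand here;
    # an end-to-end differentiable stack would obtain it by backpropagation.
    grad = sum(2 * (w * x - y) * x for x, y in logs) / len(logs)
    w -= lr * grad

# w converges towards -0.5: steer back against the lateral offset,
# a behaviour recovered purely from the demonstrations.
```

The point of the differentiable formulation is that this same training loop scales from one parameter to the whole stack: collect more logs, and the loss, not a human engineer, dictates the behaviour.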
“By removing the human-in-the-loop, this paradigm is significantly more scalable, which we argue is the main limitation for achieving high self-driving vehicle performance,” wrote the Lyft researchers.
Amit Raja Naik is a senior writer at Analytics India Magazine, where he dives deep into the latest technology innovations. He is also a professional bass player.