
Why Does Self-Driving Technology Always Seem Five Years Away?

The slow progress arises from approaches that require too much hand-engineering, an over-reliance on road testing, and high fleet deployment costs


A decade ago, industry stakeholders thought fully self-driving vehicles (SDVs) would become a reality in five years. It’s 2021 already, and there are still no signs of autonomous vehicles at the scale many experts had anticipated.

Five years ago, GM spent $581 million to acquire Cruise Automation. In 2017, GM chief Mary Barra wrote, “We expect to be the first high-volume auto manufacturer to build ‘fully-autonomous vehicles’ in a mass-production assembly plant.”

At the time, GM president Daniel Ammann said, “When you are working on the large-scale deployment of mission-critical safety systems, the mindset of ‘move fast and break things’ certainly does not cut it.”

In 2016, BMW announced a collaboration with Intel and Mobileye to develop autonomous cars, setting an ambitious goal of getting ‘highly and fully automated driving into series production by 2021.’ However, in 2019, BMW partnered with Daimler’s Mercedes-Benz to develop Level 4 self-driving vehicles, ready to roll by 2024.

Brands like Honda, Ford, Toyota, and Waymo have also made similar promises. Ironically, they all seem to settle on five-year projections.

Despite the numerous successes of machine learning, self-driving technology seems to be stuck in reverse gear.

In the paper ‘Autonomy 2.0: Why is self-driving always five years away?’, researchers from Lyft detailed the history, composition, and development bottlenecks of the modern self-driving stack.

SDVs are complicated

Since the DARPA Grand Challenges in 2005-2007, self-driving vehicles have been an active research area and have made headlines on a regular basis. Many companies have been attempting to build the first Level 4+ self-driving vehicles for more than a decade.

Citing Sam Altman, Elon Musk, and Ford, the researchers said that despite numerous unrealised predictions that ubiquitous SDVs are ‘only five years away,’ production-level deployment remains elusive.

According to Lyft, ‘after the DARPA challenges, most of the industry decomposed the SDV technology stack into HD mapping, localisation, perception, prediction, and planning. Following breakthroughs enabled by ImageNet, the perception and prediction parts started to become primarily machine-learned. However, simulation and behaviour planning are still largely rule-based.’ 

The team believes the slow progress arises from approaches that require too much hand-engineering, an over-reliance on road testing, and high fleet deployment costs. The study noted that the classical stack has several bottlenecks that preclude the scale needed to capture the long tail of rare events.

The researchers argued that current self-driving industry progress is slow due to inefficient human-in-the-loop development, and that these issues can be solved by training a differentiable self-driving stack in a closed-loop simulation constructed from a large collection of human driving demonstrations (aka Autonomy 2.0).

SOTA autonomy stack (Autonomy 1.0) vs the proposed ML-first stack (Autonomy 2.0). (Source: arXiv)

The researchers believe Autonomy 2.0 unlocks the scalability required to master the long tail of rare events and to expand to new geographies, provided large enough datasets can be collected.

However, it also comes with challenges. The critical hurdles to Autonomy 2.0, as highlighted by the researchers, include the following (a rough training sketch for the first two follows the list):

  • Formulating the stack as an ‘end-to-end differentiable network’
  • Collecting the large amounts of ‘human driving data’ required to train it
  • Validating it offline in a ‘closed loop’ with a machine-learning simulator
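To make the first two hurdles concrete, here is a minimal behaviour-cloning sketch of an end-to-end differentiable planner trained on logged human trajectories. It assumes a PyTorch-style setup; the network shape, the flat scene encoding, and the loss are placeholders invented for illustration and are not taken from the Lyft paper.

```python
# Minimal behaviour-cloning sketch of an end-to-end differentiable stack.
# Shapes, the feature encoding, and the loss are illustrative assumptions;
# the paper does not prescribe this exact architecture.
import torch
from torch import nn


class DifferentiablePlanner(nn.Module):
    """Maps an encoded scene (a flat feature vector here) straight to a
    short future ego trajectory, so every stage can be trained end to end."""

    def __init__(self, scene_dim: int = 64, horizon: int = 10):
        super().__init__()
        self.horizon = horizon
        self.net = nn.Sequential(
            nn.Linear(scene_dim, 128), nn.ReLU(),
            nn.Linear(128, horizon * 2),        # an (x, y) waypoint per future step
        )

    def forward(self, scene: torch.Tensor) -> torch.Tensor:
        return self.net(scene).view(-1, self.horizon, 2)


# Stand-ins for logged human driving data: scene features plus the trajectory
# the human driver actually took (random tensors here, real logs in practice).
scenes = torch.randn(256, 64)
human_trajectories = torch.randn(256, 10, 2)

model = DifferentiablePlanner()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):                          # imitation: match the demonstrations
    predicted = model(scenes)
    loss = nn.functional.mse_loss(predicted, human_trajectories)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because every component is a differentiable module, errors made by the planner can, in principle, propagate back through the whole stack during training, rather than being patched with hand-written rules.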

Autonomy 1.0 

A typical Autonomy 1.0 stack consists of perception, prediction, and planning, which answer, in turn: what is around the car? What is likely to happen next? And what should the car do? Finally, the most important part of the development cycle, testing, answers the question: how well does the system perform?

(Source: arXiv)
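For illustration, a minimal, hypothetical sketch of how such a modular pipeline composes is shown below. The class names, interfaces, and hand-written rules are invented for this example and are far simpler than a production stack; they only show where the rule-based planning logic sits.

```python
# Hypothetical sketch of the modular Autonomy 1.0 pipeline described above.
# Interfaces and rules are illustrative only; real stacks are far larger.
from dataclasses import dataclass


@dataclass
class Agent:
    distance_m: float      # distance from the ego vehicle along its path
    speed_mps: float       # current closing speed of the agent


class Perception:
    """Answers 'what is around the car?'; in practice a learned detector over sensor data."""
    def detect(self, sensor_frame: dict) -> list[Agent]:
        return [Agent(**a) for a in sensor_frame["agents"]]


class Prediction:
    """Answers 'what is likely to happen next?'; here a constant-velocity placeholder."""
    def forecast(self, agents: list[Agent], horizon_s: float = 3.0) -> list[Agent]:
        return [Agent(a.distance_m + a.speed_mps * horizon_s, a.speed_mps) for a in agents]


class RuleBasedPlanner:
    """Answers 'what should the car do?' with hand-written rules, the part the
    paper argues struggles to cover the long tail of rare events."""
    def plan(self, predicted: list[Agent], ego_speed_mps: float) -> float:
        for agent in predicted:
            if agent.distance_m < 10.0:           # someone will be too close: brake
                return max(0.0, ego_speed_mps - 2.0)
        return min(ego_speed_mps + 1.0, 13.0)     # otherwise creep up to ~13 m/s


def autonomy_1_0_step(sensor_frame: dict, ego_speed_mps: float) -> float:
    """One tick of the perception -> prediction -> planning pipeline."""
    agents = Perception().detect(sensor_frame)
    predicted = Prediction().forecast(agents)
    return RuleBasedPlanner().plan(predicted, ego_speed_mps)


frame = {"agents": [{"distance_m": 25.0, "speed_mps": -6.0}]}  # an agent closing in
print(autonomy_1_0_step(frame, ego_speed_mps=10.0))            # planner brakes: 8.0
```

Every new rare situation tends to require another hand-written rule in the planner, which is exactly the scaling problem the Lyft researchers highlight.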

Even though Autonomy 1.0 can perform well under normal conditions, attaining L4 and L5 production-level performance requires scaling the paradigm to cover the long tail of rare events, such as road closures, road accidents, and other agents breaking the road rules. Moreover, the solution needs to scale to multiple cities with diverse agent behaviours.

According to Lyft, the bottlenecks that hold Autonomy 1.0 back include:

  • Trying to capture complex behaviours with rule-based systems
  • Reliance on road-testing and low-realism offline simulation
  • Limited fleet deployment scale

Autonomy 2.0 

Autonomy 2.0 is an ML-first approach to self-driving and a viable alternative to the currently adopted SOTA. It is based on three key principles:

  • Closed-loop simulation learned from collected real-world driving logs
  • Recasting the self-driving stack as an end-to-end differentiable neural network
  • Collecting the data needed to train the planner and simulator at large scale, using commodity sensors

In other words, the approach treats a fully differentiable AV stack trainable from human demonstrations, closed-loop data-driven reactive simulation, and large-scale, low-cost data collection as the critical solutions to these scalability issues.
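The sketch below illustrates the closed-loop, data-driven reactive simulation idea: an ego policy is rolled forward together with a learned transition model, so the rest of the scene reacts to the ego vehicle instead of replaying a fixed log. Both networks here are untrained placeholders; in the paper’s setting the simulator would be fit to large collections of real driving logs, and offline metrics would then be computed on the rollouts.

```python
# Sketch of closed-loop evaluation inside a learned reactive simulator.
# Both models are untrained placeholders standing in for networks fit to logs.
import torch
from torch import nn

state_dim, action_dim = 32, 2

# Ego policy: maps the current scene state to a driving action.
ego_policy = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, action_dim))


class LearnedSimulator(nn.Module):
    """Predicts the next scene state from the current state and the ego action,
    so other agents 'react' to the ego vehicle rather than replaying a log."""

    def __init__(self):
        super().__init__()
        self.transition = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, state_dim)
        )

    def step(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.transition(torch.cat([state, action], dim=-1))


simulator = LearnedSimulator()


def closed_loop_rollout(initial_state: torch.Tensor, steps: int = 20) -> torch.Tensor:
    """Roll the ego policy and the learned simulator forward together and return
    the visited states, on which offline metrics (collisions, comfort, distance
    to the human trajectory) could then be computed."""
    states = [initial_state]
    for _ in range(steps):
        action = ego_policy(states[-1])
        states.append(simulator.step(states[-1], action))
    return torch.stack(states)


rollout = closed_loop_rollout(torch.randn(state_dim))
print(rollout.shape)   # torch.Size([21, 32]): 20 simulated steps plus the initial state
```

Because the simulator itself is learned from logs, validating a new planner becomes an offline computation rather than another round of on-road testing, which is the scalability argument the paper makes.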

Conclusion 

“By removing the ‘human-in-the-loop’, this paradigm is significantly more scalable, which we argue is the main limitation for achieving ‘high self-driving vehicle performance’,” wrote the Lyft researchers.

Advancements in this direction also leave many open questions. According to Lyft, answering them will be critical for SDVs and other real-world robotics problems.


Amit Raja Naik

Amit Raja Naik is a seasoned technology journalist who covers everything from data science to machine learning and artificial intelligence for Analytics India Magazine, where he examines the trends, challenges, ideas, and transformations across the industry.