Tesla has always taken a unique approach to self-driving cars. The electric car company has been developing computer vision and artificial neural networks to solve the challenges of autonomous driving. While industry giants like Toyota, Google, Uber, Ford and General Motors have all been working with Lidar, Tesla has always maintained that Lidar is not the approach it will use to solve this problem.
Tesla CEO Elon Musk famously said, “Lidar is a fool’s errand, and anyone relying on Lidar is doomed”.
But what exactly is Lidar’s flaw, and what is computer vision’s biggest edge?
Lidar is exceptionally accurate at measuring distances, down to millimetres, but it is far less effective at recognising what objects are. According to Andrej Karpathy, Senior Director of Artificial Intelligence at Tesla, Lidar cannot differentiate between a plastic bag and a speed bump. This raises safety concerns: the car must slow down for a speed bump but can safely drive over a plastic bag. Moreover, the technology has so far been quite expensive, and building a car that requires multiple Lidar units would not be cost-effective.
Beyond these flaws, the biggest challenge with Lidar is its dependence on 3D HD maps, without which it cannot work. These maps provide a 3D view of the streets that Lidar-based systems need for safe, autonomous operation, and companies such as Google constantly update them. However, mapping every centimetre of every street on earth is a resource-intensive and monetarily expensive task. It means Lidar-based vehicles can operate only in areas that have been mapped beforehand, significantly limiting autonomous vehicles’ reach.
Tesla’s unorthodox approach
Tesla is the largest electric vehicle manufacturer globally, and its approach contrasts completely with its competitors’. The company wants its cars to see and navigate streets the same way humans do. Unlike its competitors, Tesla does not use Lidar. Instead, it builds cars with an Advanced Driver Assistance System (ADAS), or semi-autonomous Autopilot, which runs on an integrated camera and radar system (recent versions use cameras only). This combination supplies input to an algorithm that builds a map of the surroundings using computer vision, then uses artificial intelligence-based algorithms to decide how to react, almost like reverse engineering human vision.
Tesla cars use an array of eight cameras, plus radar, for self-driving, self-parking, lane centering, adaptive cruise control, and lane changing. Its software is based on deep learning algorithms that train advanced neural networks to function analogously to human vision, needing only video input from the car’s surroundings. These neural networks analyse the videos for roads, signs, people, speed bumps, obstacles, and other vehicles.
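To make the idea concrete, here is a minimal sketch of the kind of operation such a system starts with: stacking frames from several cameras and applying a convolution, the basic building block of the convolutional networks described above. The frame sizes, kernel, and eight-camera setup are illustrative assumptions, not Tesla's actual pipeline.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution of a single-channel image with a small kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical input: eight greyscale frames, one per camera, 64x64 pixels.
frames = np.random.rand(8, 64, 64)

# A 3x3 edge-detection kernel -- the kind of low-level feature the first
# layer of a convolutional network typically learns on its own.
edge_kernel = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]], dtype=float)

feature_maps = np.stack([conv2d(f, edge_kernel) for f in frames])
print(feature_maps.shape)  # (8, 62, 62): one feature map per camera
```

A real network stacks many learned kernels across many layers; this toy version only shows how raw multi-camera video becomes the feature maps that later layers classify into roads, signs, and obstacles.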
The main argument against pure computer vision is that it is unclear whether neural networks can accurately detect range and estimate depth without Lidar and radar. To address this, Tesla has been training its neural networks on video datasets collected from its cars worldwide. These videos are labelled by automated algorithms under human supervision.
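As a sense of how distance can be recovered from a camera alone, here is one classical geometric estimate: for a point on a flat road, distance follows from the camera's height, focal length, and the point's image row via similar triangles. The parameter values are made up for illustration and have nothing to do with Tesla's calibration; Tesla's learned depth estimation is far more sophisticated than this.

```python
# Illustrative flat-ground range estimate from a single camera.
# Assumed parameters (hypothetical): camera height h metres, focal
# length f in pixels, horizon at image row y0.

def ground_distance(y_pixel, f=1000.0, h=1.4, y0=360.0):
    """Distance to a road point that projects to image row y_pixel.

    By similar triangles, a flat-road point at distance d appears
    (f * h / d) pixels below the horizon, so d = f * h / (y_pixel - y0).
    """
    dy = y_pixel - y0
    if dy <= 0:
        raise ValueError("row at or above the horizon: no ground intersection")
    return f * h / dy

print(round(ground_distance(430.0), 1))  # 1000 * 1.4 / 70 = 20.0 metres
```

The flat-ground assumption breaks on hills and for objects above the road, which is exactly why depth from vision is treated as a learning problem rather than pure geometry.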
Karpathy elaborated on this at the Conference on Computer Vision and Pattern Recognition (CVPR) 2021. He told the audience that while building the dataset for algorithm training, the team identified more than 200 triggers indicating that object detection needed adjustment, such as inconsistencies between detection results. Tesla then spent four months working through every single trigger. The company also uses supercomputers to train and fine-tune its deep learning models.
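The trigger mechanism can be sketched as a simple data-curation loop: flag frames where two detection sources disagree and queue them for labelling and retraining. The function name, the vision-vs-radar comparison, and the toy data below are all hypothetical; they only illustrate the general pattern of mining disagreements from fleet data.

```python
# Hypothetical sketch of one "trigger": flag frames where the vision
# detector and the radar track disagree on whether an object is present,
# then queue those frames for labelling and retraining.

def disagreement_trigger(vision_detections, radar_tracks, frame_ids):
    """Return ids of frames where exactly one source saw an object."""
    flagged = []
    for fid, vision, radar in zip(frame_ids, vision_detections, radar_tracks):
        if vision != radar:  # one source detected something the other missed
            flagged.append(fid)
    return flagged

# Toy data: per-frame booleans for "object detected".
vision = [True, True, False, True, False]
radar  = [True, False, False, True, True]
frames = [101, 102, 103, 104, 105]

to_label = disagreement_trigger(vision, radar, frames)
print(to_label)  # [102, 105]
```

Scaled across a fleet, loops like this surface exactly the rare, hard cases that a model trained on ordinary driving footage would otherwise never see.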
The company’s primary aim is to develop a general computer vision system analogous to human vision, along with algorithms that can make vehicles fully autonomous.
Tesla now treats this as a supervised learning problem. It has improved its convolutional neural networks, raising hopes that future cars may run entirely on computer vision. Progress in Lidar technology, on the other hand, has been less exciting: aside from the falling prices of Lidar systems, the technology has made little headway on mapping or its other flaws. It will therefore be interesting to see how well Tesla’s approach pans out and whether Lidar can overcome its shortcomings.