This Machine Learning Model Identified Tesla’s Cybertruck As A Refrigerator; Jokes Apart, Should We Be Worried?
When it comes to autonomous vehicles, Tesla tops the charts and has become quite popular over the past decade. With the latest launch of the Cybertruck, CEO Elon Musk has once again made it clear that he doesn’t mind taking risks. And, just like his previous whirlwind successes, this new futuristic pickup truck is being talked about across the globe without a single ad campaign.
Last month, Elon Musk did what he does best when he unveiled the Tesla Cybertruck, a vehicle that looks like it came straight out of a movie like Blade Runner 2049 (2017).
Ever since the launch, opinions have been divided on whether the design is good or bad, and whether the truck will actually perform. Regardless of the negativity surrounding this release, bookings have skyrocketed, hinting at a demand for something wild and authentic.
However, when it comes to machine learning models, any data that is wild and new can derail the model.
Recently, Andrew Ng’s AI edtech company, Deeplearning.ai, posted results showing how machine learning models tasked with identifying the Cybertruck mistook it for a grille, an amphibian and a refrigerator, among other things. Though this was presumably done in jest, one can’t help but wonder how grave the results are once the hilarity subsides.
The fact that there are no fully autonomous cars on the road today hints at a lack of confidence in current object detection models.
Tesla, a pioneer in computer vision for automobiles, has in the past demonstrated how tricky it is to keep the onboard ML model reliable in the face of anomalies.
Inspired by deeplearning.ai’s post, we too ran some classification models using IBM’s MAX Inception-ResNet v2 model on available Cybertruck images, and this is what we got:
In the above five examples, the model does sometimes predict some kind of automobile, but what is interesting is that amphibian classes receive decent confidence scores as well.
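To see how such a model arrives at its top guesses, recall that an ImageNet classifier like Inception-ResNet v2 emits raw logits that are converted to probabilities with a softmax, and the highest-probability labels are reported. Below is a minimal numpy sketch of that decoding step; the label names and logit values are purely hypothetical illustrations, not actual output from the model:

```python
import numpy as np

def top_k(logits, labels, k=3):
    """Return the k highest-probability (label, probability) pairs."""
    # Softmax with the usual max-subtraction for numerical stability.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    idx = np.argsort(probs)[::-1][:k]
    return [(labels[i], round(float(probs[i]), 3)) for i in idx]

# Hypothetical logits for a handful of ImageNet-style classes --
# illustrative numbers only, chosen to mimic a "confused" prediction.
labels = ["pickup", "refrigerator", "grille", "axolotl", "tank"]
logits = np.array([2.1, 2.0, 1.7, 1.5, 0.4])
print(top_k(logits, labels))
```

Note how close logits translate into close probabilities: when an object matches no training class well, several unrelated labels can end up with similarly "decent" scores, which is exactly the behaviour seen with the Cybertruck images.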
This object detection exercise is just a fun experiment, and there is no doubt about the capabilities of Musk and his team, who have made exemplary contributions to the automotive industry while democratising their technology.
Taking The Joke Too Far
There is hardly any room for mistakes in the case of driverless cars. An AI, devoid of emotion, would only opt for the most logical or most rewarding strategy, yet it cannot be trusted to decide real-world trolley problems. The moral, social and ethical dilemmas would make things difficult for driverless cars even if they only made human-like errors.
Though the transition will be painfully difficult both for sceptics and for entities like insurance companies, self-driving cars are coming.
In all seriousness, object detection models still struggle with adverse conditions in the real world. Researchers keep tweaking state-of-the-art models for better results, and many hit a wall, concluding that a model is only as good as the data it is fed.
From a practitioner’s perspective, there is no grey area when it comes to cancer diagnosis or self-driving cars. The results have to be near-perfect, or the technique itself will be shelved.
This demand for near-perfect results owes largely to the scepticism of commuters.
An adversarial attacker could target autonomous vehicles by using stickers or paint to create an adversarial stop sign that the vehicle would interpret as a ‘yield’ or some other sign. A confused car on a busy day is a potential catastrophe packed in a 2,000-pound metal box. And something like the Cybertruck, which is nothing short of an aesthetic tank, could pose even more problems for such detection systems.
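The mechanics behind such attacks can be sketched on a toy model. Real attacks target deep networks, but the core idea of the fast gradient sign method (FGSM) — nudge the input in the direction that increases the model’s loss — already flips the decision of a simple linear classifier. Everything below is an illustrative stand-in, not any real perception stack:

```python
import numpy as np

# Toy linear classifier: sign(w @ x) stands in for a detector's
# decision boundary; w and x are made-up illustrative values.
w = np.array([1.0, -1.0])
x = np.array([0.3, 0.1])  # clean input, classified as +1

def predict(x):
    return 1 if w @ x > 0 else -1

# FGSM: step the input in the sign of the loss gradient w.r.t. x.
# For logistic loss with true label y = +1, that sign is sign(-w),
# since the gradient is -y * w times a positive sigmoid factor.
eps = 0.3
x_adv = x + eps * np.sign(-w)

print(predict(x), predict(x_adv))  # the prediction flips: 1 -> -1
```

The perturbation is small and structured (a fixed step per input dimension), which is why physical analogues like stickers on a stop sign can work: each pixel moves only slightly, yet the decision crosses the boundary.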
For example, researchers at the University of Michigan, in collaboration with Baidu Research and the University of Illinois, released a paper attempting to expose the shortcomings of LiDAR-based autonomous driving detection systems.
The above picture shows a car mounted with a LiDAR system and, on the right, the adversarial object, whose edges resemble those of the Cybertruck. This object was inserted into the simulated environment where the detection system was tested.
The team behind this work also 3D-printed the adversarial objects and performed physical experiments with LiDAR-equipped cars to illustrate the effectiveness of LiDAR-Adv. The generated adversarial objects were tested on the Baidu Apollo autonomous driving platform to determine whether physical systems are vulnerable to the proposed attacks.
So, can Cybertrucks help LiDAR achieve new benchmarks, or are these new machines just the beginning of a whole new era of automotive design that will trouble vision models even more? In the latter case, the data collected so far to identify automobiles will have to be re-curated. A better solution would be to explore advanced strategies where models are intentionally trained on adversarial examples, keeping them robust and able to perform well on unlabelled data or any black-swan event in the real world.
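That last strategy, usually called adversarial training, boils down to augmenting each training batch with perturbed copies of the inputs so the model learns to classify both. A minimal numpy sketch on synthetic 2-D data — a toy logistic regression standing in for a real vision model, with made-up cluster positions and hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-class data: class +1 clustered near (2, 2), class -1 near (-2, -2).
X = np.vstack([rng.normal(2, 0.5, (50, 2)), rng.normal(-2, 0.5, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)
eps, lr = 0.5, 0.1
for _ in range(200):
    # FGSM-style perturbation: push each point in the direction that
    # increases the logistic loss (sign of the gradient w.r.t. the input).
    grad_x = -y[:, None] * w * sigmoid(-y * (X @ w))[:, None]
    X_adv = X + eps * np.sign(grad_x)
    # Gradient step on the clean batch AND its adversarial copy.
    for Xb in (X, X_adv):
        grad_w = -(y * sigmoid(-y * (Xb @ w))) @ Xb / len(y)
        w -= lr * grad_w

acc = np.mean(np.sign(X @ w) == y)
print("clean accuracy:", acc)
```

Training against the perturbed copies forces the decision boundary to keep a margin around the data, which is the same intuition behind hardening real detection models against black-swan inputs — at the cost of extra computation per batch.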