Is Tesla Dumping Python For This Programming Language?

The only phenomenon that has matched the growth of artificial intelligence in recent years is the rise of Python. Python has become the go-to language for many organisations that are setting up data science and machine learning departments. The transition to Python was so rapid that many other programming languages were thought to have become obsolete.

However, Elon Musk, CEO of Tesla, in a series of tweets, has shown how serious Tesla is about bringing great minds together to work on its AI-related projects. He also announced a house-party invitation for AI enthusiasts to participate in a hackathon.

“Educational background is irrelevant”

Elon Musk

Although the neural networks for computer vision models were written in Python, he added that the Tesla team would need people with excellent coding skills, especially in C and C++.

C/C++ for building self-driving cars might sound odd, but Musk’s tweet does raise some doubts about the hype around Python.

This didn’t go down well with developers, who pointed out the pitfalls of infrastructure complexity.

However, a tweet cannot be taken at face value. The information is often condensed, and Soumith Chintala, co-creator of PyTorch, has shed some light on what Musk really might have meant. He explained that converting to C++ doesn’t mean hand-rewriting everything in C++, but auto-converting models to Tesla’s low-level runtime.

He also added that the Tesla team has its own ASICs, sensors, etc., which probably have their own tooling, drivers, staged IR, and compiler.

The C++ language also facilitates direct mapping of hardware features and zero-overhead abstractions based on those mappings.

The Curse of Tool Idolatry


Most of the popular machine learning frameworks, such as TensorFlow and PyTorch, and even CUDA, rely on C++.

CUDA is more of a toolkit than a programming language: it provides extensions that let developers working in C/C++ express massive amounts of parallelism and direct the compiler to the portions of the application that map to the GPU.

Similarly, Python is an interface that allows one to interact with and leverage ML features without learning the nitty-gritty of C++.

Python is used mostly as an interface. This arrangement lets more developers from non-coding backgrounds come on board and build ML applications.

Python is easy to learn, and most of its popularity stems from this fact alone. However, scratch the surface and you will find easy-to-use APIs and interfaces shouldered by the likes of the traditional C++ language.
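To make this "thin interface over compiled code" idea concrete, here is a minimal sketch using Python's standard `ctypes` module to call the platform's compiled C math library directly. This is illustrative only: the library path lookup and fallback are assumptions about a typical Linux/macOS setup, not part of any framework's actual binding machinery (real frameworks use far more elaborate binding layers).

```python
import ctypes
import ctypes.util

# Locate the platform's compiled C math library (libm).
# Fall back to the current process's symbols if lookup fails,
# since CPython itself links against libm on most platforms.
_libm_path = ctypes.util.find_library("m")
libm = ctypes.CDLL(_libm_path) if _libm_path else ctypes.CDLL(None)

# Declare the C signature: double cos(double)
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

# The call below executes compiled C code; Python only marshals
# the arguments and the return value across the boundary.
print(libm.cos(0.0))  # 1.0
```

This is exactly the division of labour the article describes: Python handles the ergonomics at the boundary, while the number-crunching runs as native code.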

The same pattern holds for optimised linear algebra libraries like BLAS and computer vision libraries like OpenCV: everything that needs speed is written in C/C++, with Python bindings on top.

Unlike in C++, Python users can write a convolutional neural network from scratch in under 50 lines. C++ requires knowledge of a few intricacies, which is a big turn-off for newcomers. Time is critical here. For example, a physicist incorporating ML tools would prefer something as lightweight and straightforward as Python. However, C++ does all the heavy lifting (read: matrix multiplication) in the background of these libraries and frameworks.
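To illustrate what that heavy lifting looks like, here is a naive pure-Python sketch of the core operation of a convolutional layer (2D cross-correlation, valid padding, stride 1). This is not how any framework implements it; frameworks dispatch this inner loop to optimised C++/CUDA kernels precisely because the pure-Python version below is orders of magnitude slower.

```python
def conv2d(image, kernel):
    """Naive 2D convolution (cross-correlation, as used in ML layers),
    valid padding, stride 1. Pure Python, for illustration only."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):          # slide window vertically
        row = []
        for j in range(iw - kw + 1):      # slide window horizontally
            acc = 0.0
            for di in range(kh):          # multiply-accumulate over window
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A 3x3 Laplacian-style kernel on a flat 4x4 image: no edges, so all zeros.
image = [[1, 1, 1, 1],
         [1, 1, 1, 1],
         [1, 1, 1, 1],
         [1, 1, 1, 1]]
kernel = [[0, 1, 0],
          [1, -4, 1],
          [0, 1, 0]]
print(conv2d(image, kernel))  # [[0.0, 0.0], [0.0, 0.0]]
```

The four nested loops are the point: this multiply-accumulate pattern is what gets rewritten in C++ (or delegated to cuDNN) in every serious framework.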

According to the PyTorch team, a C++ front end enables research in environments in which Python cannot be used, or is not the right tool for the job. They summarised the advantages as follows:

  • If one wants to do reinforcement learning research in a pure C++ game engine with high frames-per-second and low latency requirements, using a pure C++ library is a much better fit to such an environment than a Python library.
  • Due to the Global Interpreter Lock (GIL), Python cannot run more than one system thread at a time. Multiprocessing is an alternative, but it is not as scalable and has significant shortcomings. C++ has no such constraint, and its threads are easy to create and use.
  • A C++ front end allows the user to remain in C++, eliminating the need to switch back and forth between Python and C++ during training.
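The GIL point is worth seeing in code. The sketch below (illustrative; the prime-counting workload is an arbitrary stand-in for any CPU-bound task) runs CPU-bound work on two threads. The result is correct, but under CPython's GIL only one thread executes Python bytecode at a time, so the two threads take roughly as long as running the work sequentially; C++ threads would genuinely use two cores here.

```python
import threading

def count_primes(lo, hi, out, idx):
    """CPU-bound work: count primes in [lo, hi). Under CPython's GIL,
    threads interleave rather than run in parallel, so this gains
    correctness from threading but no CPU speedup."""
    n = 0
    for x in range(max(lo, 2), hi):
        if all(x % d for d in range(2, int(x ** 0.5) + 1)):
            n += 1
    out[idx] = n

results = [0, 0]
threads = [
    threading.Thread(target=count_primes, args=(0, 5000, results, 0)),
    threading.Thread(target=count_primes, args=(5000, 10000, results, 1)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sum(results))  # 1229 primes below 10,000
```

This is why the PyTorch team's point matters: for thread-heavy training loops, `multiprocessing` (separate interpreters, serialised data transfer) is the Python workaround, while C++ simply uses ordinary threads.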

In short, for latency-sensitive research work such as reinforcement learning, the slowness of the Python interpreter may be intractable, and a C++ library is the better fit.

TensorFlow, for its part, is for the most part a combination of highly optimised C++ and CUDA. It uses Eigen (a high-performance C++ and CUDA numerical library) and NVIDIA’s cuDNN (an optimised deep neural network library) for functions such as convolutions.

Choosing any language or tool boils down to the trade-off between ease of development and latency. For domain experts in ML, knowledge of C++ is too much to ask; they can get going with Python while C++ developers write the code that talks to the machine. This arrangement does the job quite well for many organisations, and it makes sense why Tesla’s AI team needs an army of both Python and C++ developers to build the next generation of autonomous products.


Ram Sagar
I have a master's degree in Robotics and I write about machine learning advancements.
