How PyTorch Is Challenging TensorFlow Lately

Google’s TensorFlow and Facebook’s PyTorch are the two most popular machine learning frameworks. TensorFlow, open-sourced in late 2015, had a head start over PyTorch (released in 2016). TensorFlow’s popularity reportedly declined after PyTorch burst onto the scene. However, Google released the more user-friendly TensorFlow 2.0 in 2019 to recover lost ground.

Interest over time for TensorFlow (top) and PyTorch (bottom) in India (Credit: Google Trends)


PyTorch, a deep learning framework that integrates with important Python libraries like NumPy and handles data-science tasks that require fast GPU processing, has made some notable recent additions:

  • Enterprise support: After taking over the Windows 10 PyTorch library from Facebook to boost GPU-accelerated machine learning training on Windows 10’s Subsystem for Linux (WSL), Microsoft recently added enterprise support for PyTorch AI on Azure to give PyTorch users a more reliable production experience. “This new enterprise-level offering by Microsoft closes an important gap. PyTorch gives our researchers unprecedented flexibility in designing their models and running their experiments,” said Jeremy Jancsary, a senior principal research scientist at Nuance.
  • PyTorchVideo: A deep learning library for video understanding recently unveiled by Facebook AI; the source code is available on GitHub. With it, Facebook aims to help researchers develop cutting-edge machine learning models and tools that enhance video understanding capabilities, while providing a unified repository of reproducible, efficient video understanding components for research and production applications.
  • PyTorch Profiler: In April this year, PyTorch announced its new performance debugging tool, PyTorch Profiler, alongside the 1.8.1 release. The tool enables accurate and efficient performance analysis of large-scale deep learning models.
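As a minimal sketch of how the profiler is typically invoked (the toy model here is hypothetical; any `nn.Module` works the same way):

```python
import torch
import torch.nn as nn
from torch.profiler import profile, ProfilerActivity

# Hypothetical toy model; any nn.Module can be profiled the same way.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
inputs = torch.randn(32, 64)

# Record CPU-side operator timings for one forward pass.
with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    model(inputs)

# Summarise the operators that consumed the most CPU time.
summary = prof.key_averages().table(sort_by="cpu_time_total", row_limit=5)
print(summary)
```

Adding `ProfilerActivity.CUDA` to the activities list extends the same report to GPU kernels when one is available.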

Research

PyTorch is emerging as the leader in terms of papers at top research conferences. “My analysis suggests that researchers are abandoning TensorFlow and flocking to PyTorch in droves,” Horace He wrote in a 2019 report in The Gradient.

At the NeurIPS conference in 2019, PyTorch appeared in 166 papers as opposed to TensorFlow’s 74.

Source: https://chillee.github.io/pytorch-vs-tensorflow/

Ease of Use

PyTorch’s style is considered more object-oriented, which makes implementing models less time-consuming. The specification of data handling is also far more direct in PyTorch than in TensorFlow, and it integrates easily with the rest of the Python ecosystem. In TensorFlow, by contrast, debugging a model is tricky and needs more dedicated time. PyTorch offers CPU and GPU control, is more Pythonic in nature, and is easy to debug.
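To illustrate the object-oriented, Pythonic style, here is a hypothetical two-layer classifier; models are plain Python classes, so standard tools like `print()` and `pdb` work inside `forward()`:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical two-layer classifier: a model is just a Python class.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        # forward() is ordinary Python; debuggers step through it directly.
        x = F.relu(self.fc1(x))
        return self.fc2(x)

net = TinyNet()
out = net(torch.randn(4, 784))  # eager execution: runs immediately
```

CPU/GPU control is equally direct: `net.to("cuda")` moves the parameters to a GPU when one is available.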

Performance

The following table compares the single-machine, eager-mode performance of PyTorch with the graph-based deep learning framework TensorFlow. It shows training speed for the two frameworks using 32-bit floats. Throughput is measured in images per second for the VGG-19, AlexNet, ResNet-50 and MobileNet models, in tokens per second for the GNMTv2 model, and in samples per second for the NCF model.

The performance of PyTorch compares well with TensorFlow’s. “This can be attributed to the fact that these tools offload most of the computation to the same version of the cuDNN and cuBLAS libraries,” according to a report.

PyTorch vs TensorFlow (Credit: PyTorch: An Imperative Style, High-Performance Deep Learning Library)
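As a rough illustration of how such throughput numbers are gathered (the model, batch size and step count below are made up for the sketch, not the paper’s benchmark setup):

```python
import time
import torch
import torch.nn as nn

# Hypothetical toy setup mirroring the table's metric: images per second
# over a number of full training steps (forward + backward + update).
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(8 * 32 * 32, 10))
batch = torch.randn(16, 3, 32, 32)
target = torch.randint(0, 10, (16,))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.01)

steps = 10
start = time.perf_counter()
for _ in range(steps):
    opt.zero_grad()
    loss = loss_fn(model(batch), target)
    loss.backward()
    opt.step()
elapsed = time.perf_counter() - start

throughput = steps * batch.size(0) / elapsed  # images per second
print(f"{throughput:.1f} images/sec")
```

Real benchmarks additionally warm up the model, synchronise the GPU before timing, and average over many runs.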

Dynamic Graphs

The way these frameworks define their computational graphs makes a key difference. TensorFlow creates a static graph, whereas PyTorch bets on a dynamic graph. This means that in TensorFlow, developers can run ML models only after defining the entire computation graph of the model, while in PyTorch they can define or manipulate graphs on the go. Experts believe this comes in handy with variable-length inputs in RNNs. Because PyTorch supports dynamic computational graphs, the network’s behaviour can be changed programmatically at runtime.
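A small sketch of what “changing behaviour at runtime” means in practice (the module below is hypothetical): the graph is rebuilt on every call, so ordinary Python control flow can alter the computation per input, much as variable-length sequences alter the unrolled steps of an RNN.

```python
import torch
import torch.nn as nn

# Hypothetical module whose depth is decided at call time.
class DynamicDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(16, 16)

    def forward(self, x, depth):
        # depth can differ on every call; the graph is simply
        # rebuilt as the Python loop runs.
        for _ in range(depth):
            x = torch.relu(self.layer(x))
        return x

net = DynamicDepthNet()
a = net(torch.randn(2, 16), depth=1)
b = net(torch.randn(2, 16), depth=5)  # same module, different graph
```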

TorchScript

PyTorch’s TorchScript provides a way to create serializable models from Python code. TorchScript is a subset of PyTorch that helps in deploying applications at scale. A regular PyTorch model can be converted to TorchScript using tracing or script mode. This makes optimising the model easier and gives PyTorch an edge over other machine learning frameworks.

  • Tracing: Takes a function and an example input, records the operations executed with that input, and constructs the IR.
  • Script mode: Takes a function or class, reinterprets the Python code and directly outputs the TorchScript IR, allowing it to support arbitrary code such as data-dependent control flow.
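The two modes can be sketched as follows (the `Gate` module is a hypothetical example; the data-dependent branch is exactly the kind of code tracing would miss but script mode preserves):

```python
import io
import torch
import torch.nn as nn

# Hypothetical module with a data-dependent branch: tracing would bake
# in whichever path the example input took; script mode keeps the if/else.
class Gate(nn.Module):
    def forward(self, x):
        if x.sum() > 0:
            return x * 2
        return x - 1

# Script mode: reinterprets the Python source, preserving control flow.
scripted = torch.jit.script(Gate())

# Tracing: records the ops executed for one example input.
traced = torch.jit.trace(nn.Linear(4, 2), torch.randn(1, 4))

# Both are serializable and can be loaded without the Python class.
buf = io.BytesIO()
torch.jit.save(scripted, buf)
buf.seek(0)
loaded = torch.jit.load(buf)
```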

API

Though both are extended by an array of APIs, PyTorch’s API is preferred over TensorFlow’s because it is better designed. TensorFlow has also switched APIs frequently, giving PyTorch a further edge.

Learning curve

Experts believe that with TensorFlow, one has to learn more about its internals, including sessions and placeholders. PyTorch, on the other hand, is more Pythonic and offers a more intuitive way of building ML models. TensorFlow is thus a bit more time-consuming and difficult to learn than PyTorch.
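The contrast shows up in the smallest programs: in PyTorch there are no sessions or placeholders to learn, since tensors compute immediately, NumPy-style, and gradients are one method call away.

```python
import torch

# No session, no placeholders: values are computed immediately.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()  # y is available right away
y.backward()        # gradients dy/dx = 2x, stored on x.grad
print(x.grad)
```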

Wrapping up

  • PyTorch enables easier implementation compared to TensorFlow, which offers multiple ways to do one thing
  • TensorFlow weaves in too many features or frameworks, sometimes creating incompatibility issues
  • PyTorch offers more flexibility
  • Debugging is easier in PyTorch as compared to TensorFlow
  • Lastly, PyTorch can be learnt faster because of its simplicity and fewer breaking updates


Shanthi S
Shanthi has been a feature writer for over a decade and has worked in several print and digital media companies. She specialises in writing company profiles, interviews and trends. Through her articles for the Analytics India Magazine, she aims to humanise tech in India. She is also a mom and her favourite pastime is playing a game of monopoly or watching Gilmore Girls with her daughter.
