
Can MXNet Stand Up To TensorFlow & PyTorch?

“MXNet, born and bred here at CMU, is the most scalable framework for deep learning I have seen and is a great example of what makes this area of computer science so beautiful – that you have different disciplines which all work so well together: imaginative linear algebra working in a novel way with massive distributed computation leading to a whole new ball game for deep learning,” said Andrew Moore, former dean of the School of Computer Science at Carnegie Mellon University.

Now an Apache Software Foundation project, MXNet is a fully featured, flexible, and scalable open-source deep learning framework. In a short time, MXNet has emerged as a strong contender to industry favourites such as TensorFlow and PyTorch. Notably, Amazon uses it for its deep learning web services.

Born in academia

In 1986, David Rumelhart, Geoffrey Hinton, and Ronald J Williams introduced the backpropagation learning algorithm to train neural networks. However, neural networks remained a neglected area in the following years as logistic regression and support vector machines (SVMs) started picking up momentum.

By the 1990s, however, datasets had started growing significantly. Storage and high network bandwidth became much more affordable, making it easier to work with big data. Neural networks are most helpful in pattern-recognition problems involving huge datasets, so they naturally began replacing the previously dominant Markov models. GPUs and compute clusters offered a way to accelerate neural network training. The problem was that familiar scientific computing stacks such as Matlab, R, and NumPy were not equipped to take full advantage of these distributed resources.

Enter MXNet. It offers powerful tools for developers to exploit the full capabilities of GPUs and cloud computing, letting them define, train, and deploy deep neural networks.

The development of MXNet is rooted in academia. It was first introduced in a paper titled ‘MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems’, authored by researchers from ten institutions, including Carnegie Mellon University, the University of Washington, and Stanford. From the beginning, MXNet supported programming languages such as C++, Python, R, Scala, Matlab, and JavaScript.

Features of MXNet

MXNet stands for ‘mix-net’, reflecting its origins in combining several programming approaches into one framework. It supports languages such as Python, R, C++, Perl, and Julia. MXNet has a small memory footprint and can therefore be deployed to mobile devices and other resource-constrained systems.

Features of MXNet include:

  • Offers multi-GPU and distributed training, like TensorFlow and PyTorch.
  • Offers greater flexibility in machine learning development and lets the developer export a neural network for inference in up to eight different languages.
  • Has a large set of libraries to support applications in computer vision and natural language processing.
  • Has a large community of users who interact with its GitHub repository and other forums.
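The multi-GPU point above boils down to data parallelism: each device holds a replica of the model, processes its own slice of the batch, and the per-device gradients are averaged before a single weight update. A minimal, framework-agnostic sketch in NumPy (this simulates the idea in plain Python; it does not use MXNet's actual API, and all names are illustrative):

```python
import numpy as np

def data_parallel_step(w, X, y, n_devices=2, lr=0.1):
    """One data-parallel SGD step for linear regression.

    Each simulated 'device' gets an equal slice of the batch,
    computes its local gradient, and the gradients are averaged
    before a single weight update, which is the core idea behind
    multi-GPU training in frameworks like MXNet.
    """
    X_shards = np.array_split(X, n_devices)
    y_shards = np.array_split(y, n_devices)
    grads = []
    for Xs, ys in zip(X_shards, y_shards):   # one pass per "device"
        err = Xs @ w - ys                    # local predictions minus targets
        grads.append(Xs.T @ err / len(ys))   # local gradient of mean squared error / 2
    g = np.mean(grads, axis=0)               # "all-reduce": average the gradients
    return w - lr * g

# Toy data generated from y = 2x; training should recover w = 2.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.0, 4.0, 6.0, 8.0])
w = np.zeros(1)
for _ in range(200):
    w = data_parallel_step(w, X, y)
print(w)  # converges toward [2.]
```

Real frameworks add the hard parts this sketch omits: efficient device-to-device communication, overlapping computation with gradient transfer, and scaling beyond a single machine.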

Amazon Web Services (AWS) is one of the most prominent adopters of MXNet. Amazon chose MXNet for three reasons:

  • Development speed and programmability
  • Portable enough to run on a broad range of devices, platforms, and network environments
  • Scalable to multiple GPUs to train larger and more sophisticated models with bigger datasets.

MXNet vs TensorFlow & PyTorch

So how does MXNet stack up against TensorFlow and PyTorch?

MXNet scores big on two fronts: ease of learning and speed.

On ease of learning, TensorFlow is relatively unfriendly, as its interface changes with every major update. PyTorch offers a simpler, more flexible interface, requires fewer packages, and keeps code straightforward; unlike TensorFlow, it makes full use of its host language, Python. MXNet, for its part, supports both imperative and declarative programming styles, is highly flexible, offers a complete training module, and supports multiple languages.
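The imperative/declarative distinction can be illustrated without any framework. In the imperative (define-by-run) style, every operation executes immediately, which is easy to debug; in the declarative (define-then-run) style, you first build a graph of deferred operations and execute it later, giving the engine a chance to optimise the whole graph. A toy sketch in plain Python (conceptual only, not MXNet's actual API):

```python
# Imperative (define-by-run): each line computes a value immediately,
# in the spirit of NumPy, PyTorch, or MXNet's NDArray/Gluon interface.
a = 2
b = 3
c = a * b          # c is 6 right away; easy to inspect and debug
d = c + 1          # d is 7

# Declarative (define-then-run): first describe the computation as a
# graph of deferred operations, then execute it, in the spirit of
# MXNet's Symbol API. Here each "node" is just a zero-argument callable.
def const(v):
    return lambda: v

def mul(x, y):
    return lambda: x() * y()

def add(x, y):
    return lambda: x() + y()

# Nothing is computed while the graph is built...
graph = add(mul(const(2), const(3)), const(1))
# ...until it is explicitly evaluated. A real engine would analyse the
# whole graph first, fusing and parallelising operations.
result = graph()
print(d, result)  # 7 7
```

MXNet's hybrid approach aims to give developers the debuggability of the first style while still recovering the optimisation opportunities of the second.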

MXNet offers faster computation and better resource utilisation on GPUs, where TensorFlow lags behind; on CPUs, however, TensorFlow performs better.

However, when it comes to popularity, PyTorch and TensorFlow are still miles ahead, occupying the top positions. The reasons are the availability of high-level APIs and the ease of customising deep learning models. Additionally, TensorFlow and PyTorch enjoy vibrant, extensive community support, which means newer updates are readily available.

Credit: Rise Labs

Based on the number of mentions in arXiv papers in 2018, TensorFlow and PyTorch ranked in the top two, whereas MXNet stood in sixth position.



Shraddha Goled

I am a technology journalist with AIM. I write stories focused on the AI landscape in India and around the world with a special interest in analysing its long term impact on individuals and societies. Reach out to me at shraddha.goled@analyticsindiamag.com.
