“MXNet, born and bred here at CMU, is the most scalable framework for deep learning I have seen and is a great example of what makes this area of computer science so beautiful – that you have different disciplines which all work so well together: imaginative linear algebra working in a novel way with massive distributed computation leading to a whole new ball game for deep learning,” said Andrew Moore, former dean of the School of Computer Science at Carnegie Mellon University.
Developed under the Apache Software Foundation, MXNet is a fully featured, flexible, and scalable open-source deep learning framework. In a short time, MXNet has emerged as a strong contender to industry favourites such as TensorFlow and PyTorch. Notably, Amazon uses it for its deep learning web services.
Born in academia
In 1986, David Rumelhart, Geoffrey E Hinton, and Ronald J Williams introduced the backpropagation learning algorithm to train neural networks. However, neural networks remained a neglected area in the following years as logistic regression and support vector machines (SVMs) picked up momentum.
Then datasets started growing significantly in the 90s. Storage and high network bandwidth became much more affordable, making it easier to work with big data. Neural networks are most helpful in pattern-recognition problems involving huge datasets, so they naturally began to replace the previously dominant Markov models. GPUs and clusters presented themselves as a good way to accelerate neural network training. The problem was that familiar scientific computing stacks such as MATLAB, R, or NumPy could not take full advantage of these distributed resources.
Enter MXNet. It gives developers powerful tools to exploit the full capabilities of GPUs and cloud computing, and makes it straightforward to define, train, and deploy deep neural networks.
Features of MXNet
MXNet stands for “mix-net”, since it combines several programming approaches in one framework. It supports languages such as Python, R, C++, Perl, and Julia. MXNet has a small memory footprint and can therefore be deployed to mobile devices and other small systems.
Features of MXNet include:
- Offers multi-GPU and distributed training like other frameworks such as TensorFlow and PyTorch.
- Offers greater flexibility in machine learning development and lets the developer export a neural network for inference in up to eight different languages.
- Has a large set of libraries to support applications in computer vision and natural language processing.
- Has a large community of users who interact with its GitHub repository and other forums.
Amazon AWS is one of the most popular use cases of MXNet. Amazon chose MXNet for three reasons:
- Development speed and programmability
- Portable enough to run on a broad range of devices, platforms, and network environments
- Scalable to multiple GPUs to train larger and more sophisticated models with bigger datasets.
MXNet vs TensorFlow & PyTorch
So how does MXNet stack up against TensorFlow and PyTorch?
MXNet scores big on two fronts: ease of learning and speed.
Speaking of ease of learning, TensorFlow is relatively unfriendly, as its interface changes after every update. PyTorch is easier and more flexible, relies on fewer packages, and keeps code simple. Unlike TensorFlow, PyTorch makes full use of its host language, Python. MXNet, on the other hand, supports both imperative and declarative programming, is highly flexible, offers a complete training module, and supports multiple languages.
MXNet offers faster computation and better resource utilisation on GPUs than TensorFlow, although TensorFlow performs better on CPUs.
However, speaking of popularity, PyTorch and TensorFlow are still miles ahead, occupying the top positions. The reasons are the availability of high-level APIs and the ease of customising deep learning models. Additionally, TensorFlow and PyTorch enjoy vibrant and extensive community support, meaning newer updates are readily available.
Based on the number of mentions in arXiv papers in 2018, TensorFlow and PyTorch ranked in the top two, whereas MXNet stood in sixth position.
I am a journalist with a postgraduate degree in computer network engineering. When not reading or writing, one can find me doodling away to my heart’s content.