Does Microsoft Cognitive Toolkit Really Lag Behind TensorFlow And PyTorch In Deep Learning?

Frank Seide, left, and Chris Basoglu, right, have been key to building Microsoft Cognitive Toolkit. (Photography by Scott Eklund/Red Box Pictures)

The arrival of deep learning frameworks in the public domain has kick-started a framework war of sorts. In this article, we discuss how Microsoft Cognitive Toolkit, previously known as CNTK, stacks up against the ever-popular TensorFlow and PyTorch. While Google’s TensorFlow is immensely popular among developers and is known for its better documentation, Microsoft has open-sourced ML frameworks of its own, including LightGBM and the Cognitive Toolkit.

CNTK, rechristened Microsoft Cognitive Toolkit, was initially geared towards the speech research community and now posts strong numbers on both accuracy and speed, even when compared to TensorFlow. In 2017, the tech giant launched Cognitive Toolkit 2.0, which allows advanced neural network models to be implemented efficiently. Even though Microsoft Cognitive Toolkit started later than Google’s framework, it has gained popularity and is integrated with Azure toolkits as well. A section of users rate Microsoft Cognitive Toolkit highly for its efficient implementations of LSTMs and CNNs.
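For a sense of what the Cognitive Toolkit 2.0 Python API looks like in practice, here is a minimal sketch of a small feed-forward network; the layer sizes, learning rate and random data are illustrative placeholders rather than anything taken from Microsoft's own examples.

```python
# Minimal sketch of a tiny network in the Cognitive Toolkit 2.0 Python API.
# Layer sizes, learning rate and the random data are illustrative placeholders.
import numpy as np
import cntk as C

features = C.input_variable(10)   # 10-dimensional input
labels = C.input_variable(1)      # scalar regression target

# Layers compose much like in other high-level DL APIs
model = C.layers.Sequential([
    C.layers.Dense(64, activation=C.relu),
    C.layers.Dense(1)
])(features)

loss = C.squared_error(model, labels)
lr_schedule = C.learning_rate_schedule(0.01, C.UnitType.minibatch)
trainer = C.Trainer(model, (loss, loss), [C.sgd(model.parameters, lr_schedule)])

# One training step on a random mini-batch
x = np.random.rand(32, 10).astype(np.float32)
y = np.random.rand(32, 1).astype(np.float32)
trainer.train_minibatch({features: x, labels: y})
```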

Microsoft Cognitive Toolkit vs PyTorch vs TensorFlow

Data scientist Max Woolf has noted that one of the key features of Microsoft Cognitive Toolkit 2.0 is its compatibility with Keras. It works well for RNN and CNN workloads across text, image and speech. Designed for speed and efficiency, it scales well in production and can scale across GPUs. It also has tighter integration with NumPy. However, Microsoft Cognitive Toolkit has limited community support compared to TensorFlow, which is widely popular thanks to Google’s heavy marketing.
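In practical terms, the Keras compatibility means multi-backend Keras can be pointed at the Cognitive Toolkit instead of TensorFlow. The sketch below assumes Keras (2.0.5 or later) and CNTK are both installed; the tiny model is an illustrative placeholder.

```python
# Sketch: pointing multi-backend Keras at the CNTK backend.
# Assumes Keras >= 2.0.5 and CNTK are installed; the model is a placeholder.
import os
os.environ["KERAS_BACKEND"] = "cntk"   # must be set before Keras is imported
                                       # (alternative: edit ~/.keras/keras.json)

from keras.models import Sequential
from keras.layers import Dense
from keras import backend as K

model = Sequential([
    Dense(64, activation="relu", input_shape=(10,)),
    Dense(1)
])
model.compile(optimizer="sgd", loss="mse")
print(K.backend())  # should report "cntk"
```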


On the other hand, PyTorch lets users develop their models dynamically, giving them more control when building new architectures. A common workflow is to research and prototype in PyTorch, then rewrite the final production code in TensorFlow and deploy it on TensorFlow as well.
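The article does not spell out how a model actually moves between the two frameworks, but one common bridge, sketched below on the assumption that an interchange format such as ONNX is acceptable in the pipeline, is to export the trained PyTorch model so that other runtimes can consume it. The tiny model and file name here are placeholders.

```python
# Hedged sketch: exporting a PyTorch model to ONNX so another runtime can load it.
# The small model and the output file name are illustrative placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
model.eval()

dummy_input = torch.randn(1, 10)          # example input used to trace the graph
torch.onnx.export(model, dummy_input, "model.onnx")
```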

PyTorch, developed by Facebook, leverages dynamic graphs, which means a new computational graph is built on each forward pass; these dynamic computational graphs are, in fact, PyTorch’s main draw. The framework is deeply integrated with Python and follows an imperative, object-oriented paradigm. Dynamic graph frameworks are less invasive and integrate more easily with the host programming language. Users emphasise that PyTorch makes it easy to write clean code that is simple to debug.
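The snippet below is a minimal illustration of what "a new graph on each forward pass" buys you: ordinary Python control flow inside forward() can change the computation from one call to the next. The module and its sizes are illustrative, not drawn from the article.

```python
# Sketch: PyTorch builds the computational graph afresh on each forward pass,
# so plain Python control flow can change the graph from run to run.
import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(10, 10)

    def forward(self, x):
        # A data-dependent loop: the number of layer applications (and hence
        # the shape of the autograd graph) differs between forward passes.
        for _ in range(torch.randint(1, 4, (1,)).item()):
            x = torch.relu(self.layer(x))
        return x.sum()

net = DynamicNet()
out = net(torch.randn(2, 10))
out.backward()   # gradients follow whatever graph was built on this pass
```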

Meanwhile, TensorFlow does not scale as well as Microsoft Cognitive Toolkit, is over-engineered (as users have pointed out) and is not the most flexible library, yet it enjoys great community support thanks to heavy marketing. Microsoft Cognitive Toolkit scales better, reportedly to as many as 2,000 GPUs, but it lacks the community support that TensorFlow or PyTorch enjoy. Even though Microsoft Cognitive Toolkit is more mature and has a better interface, TensorFlow wins hands down, thanks to fanboyism.

According to AI scientist Jesus Rodriguez, no single DL framework is good at all tasks. For example, TensorFlow works well for NLP tasks and image analysis models, while Microsoft Cognitive Toolkit is efficient at speech recognition. Other frameworks, such as Caffe, are immensely popular among computer vision researchers.

Deep Learning Market Size

According to a recent market report by Grand View Research, the deep learning market is expected to touch $10.2 billion by 2025. Fueling this growth are advances in chips and the rise of GPU-accelerated applications, which have led to the widespread adoption of open-sourced DL frameworks. Another driver is that organisations are realising the need to extract valuable insights from data and to build better customer-centric products. Another research firm, Stratistics MRC, indicated that DL has exponential growth potential and that the technology will be heavily used in mobile devices and in the healthcare sector, specifically for medical image analysis. Deep learning will also play a pivotal role in manufacturing, where it will be leveraged to power machine vision systems and industrial robots and to improve production cycles. By 2023, deep learning technology is set to see its highest growth in manufacturing.

Richa Bhatia
Richa Bhatia is a seasoned journalist with six years’ experience in reportage and news coverage and has had stints at Times of India and The Indian Express. She is an avid reader, mum to a feisty two-year-old and loves writing about the next-gen technology that is shaping our world.
