Why Tech Giants Are Pinning Their AI Strategy On Deep Learning Frameworks

One factor has significantly shaped the growth of deep learning research: the proliferation of deep learning frameworks. Popular frameworks such as TensorFlow (Google), PyTorch (one of the newest frameworks, and rapidly gaining popularity), Caffe, MXNet and Keras have helped DL researchers achieve human-level performance on tasks such as facial recognition, image classification, object detection and sentiment detection. While the availability of multiple deep learning frameworks is great news for the developer community, these frameworks are also part of a marketing pitch designed to lock the developer base into adjacent offerings, such as paid compute capability.

  • Each of these frameworks was designed to solve a specific problem
  • After reaching a certain maturity, the frameworks were open sourced 

What started as an attempt to plug internal project requirements has become a full-fledged strategy to improve and capitalise on the overall AI technology stack, which comprises algorithms, infrastructure and hardware. Given that AI is set to become a foundational technology, leading technology majors are laying down better AI infrastructure by providing DL frameworks that offer reliability and ease of deployment.


But while this approach is helping remove the stumbling blocks developers face when it comes to large-scale deployments, it has also become a go-to strategy for companies to monetise the resources (compute capability) required for deep learning. 

AI industry takes a customised approach to hardware optimised for a specific framework

Frameworks are one part of the puzzle in owning the entire AI technology stack. For example, Google's TensorFlow, the most popular framework, is optimised for the Tensor Processing Unit (Google's machine learning accelerator), and the TPU, in turn, is designed for the cloud. This, in a way, helps Google own the burgeoning cloud infrastructure ecosystem through GCP.

Meanwhile, Facebook's PyTorch, pegged as one of the most unified AI frameworks, works with a broad array of hardware solutions from NVIDIA, Intel, ARM and others. This compatibility with a range of chips and accelerators has fuelled the soaring popularity of PyTorch, one of the newest entrants in the DL framework race. From Google to Microsoft, tech majors have added support for PyTorch on their hardware and cloud platforms, making it one of the best and most accessible platforms for building AI applications.

On the other end of the spectrum, Microsoft's CNTK (Cognitive Toolkit) hasn't won much ground compared to Facebook's PyTorch or Google's offering.

Apache MXNet is Amazon's preferred deep learning framework and, in certain cases, is known to outperform TensorFlow. The framework delivers substantial speedups, especially when computations run on a GPU. Meanwhile, Amazon has also reportedly taken the DIY route and built its own chip for use in its data centres.

Need for consolidation across DL frameworks

While there are multiple frameworks, each with its own APIs, model representations and execution engine, what they lack is interoperability. Another key roadblock is that not every framework supports computation across devices spread over multiple machines, which makes distributed implementations difficult. As the AI ecosystem grows, companies should focus on building an interface that integrates well across frameworks and can be extended to different hardware as well.

What's required is a common interface and consolidation across different deep learning frameworks. In a similar vein, Intel is now trying to blunt the advance of AI incumbents such as Google, AWS and Microsoft with its own set of software tools and specialised hardware. What the chip giant has proposed is a 'platform that makes deep learning work everywhere'. Known as PlaidML, it is an advanced and portable tensor compiler for deep learning on end devices such as laptops.

As per its GitHub documentation, PlaidML sits below common machine learning frameworks such as Keras, ONNX and nGraph, and allows developers to use any hardware it supports. PlaidML runs well on GPUs and, on Nvidia hardware, delivers comparable performance without requiring CUDA. When combined with the nGraph compiler, it substantially expands deep learning capabilities across Intel's diverse hardware portfolio.
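In practice, plugging PlaidML in underneath Keras is a configuration step rather than a code change. A sketch of the setup, assuming the `plaidml-keras` package has been installed and the one-time `plaidml-setup` device selection has been run, looks like this:

```python
# Configuration sketch (not a definitive recipe): select PlaidML as the
# Keras backend before Keras is imported. Assumes `pip install plaidml-keras`
# and an interactive `plaidml-setup` run have already been completed.
import os
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"

import keras  # subsequent Keras models now compile through PlaidML,
              # so they can target GPUs without CUDA
```

The key design point is that the backend switch happens via an environment variable, so existing Keras model code runs unchanged on PlaidML-supported devices.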

Where's the industry heading?

Interestingly, what fuelled the rise of deep learning was AlexNet, the convolutional neural network that won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2012. Built by Alex Krizhevsky, AlexNet was a deeper version of LeNet, one of the first CNNs, built in 1994 by AI pioneer Yann LeCun.

Since then, the adoption of deep learning techniques across image, text, video and NLP has led to massive growth in heterogeneous hardware, built from the ground up for specific applications. Such specialised hardware (FPGAs, for example) is expected to outperform GPUs on specific tasks and is often optimised for a single framework.

Given that hardware has become the hotspot for innovation, with semiconductor companies and tech giants chasing custom silicon, companies should work towards interoperability between frameworks and a shared infrastructure that lets developers tune performance across different hardware and enables resource sharing as well.


Richa Bhatia
Richa Bhatia is a seasoned journalist with six years' experience in reportage and news coverage and has had stints at Times of India and The Indian Express. She is an avid reader, mum to a feisty two-year-old and loves writing about the next-gen technology that is shaping our world.

