
Do Deep Learning Networks Scale Effectively?

Can deep learning (DL) be deployed at scale? The “Big Three” of DL, Yann LeCun, Geoffrey Hinton and Yoshua Bengio, have defined DL as computational models composed of multiple processing layers that learn representations of data. Over time, DL techniques have dramatically improved speech recognition, visual object recognition and object detection, and have advanced many other domains such as drug discovery and genomics.


DL discovers patterns in large datasets by using the backpropagation algorithm to indicate how a machine should change the internal parameters that compute each layer’s representation from the representation in the previous layer. Deep convolutional networks (ConvNets) have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shed light on sequential data such as text and speech.
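To make that description concrete, here is a minimal sketch of backpropagation in a toy two-layer network, written in plain NumPy. The layer sizes, synthetic data and learning rate are illustrative assumptions, not anything from the article; real DL systems use frameworks and far larger models.

```python
import numpy as np

# Illustrative sketch: a two-layer network trained with backpropagation.
# Sizes, data and learning rate are arbitrary assumptions for demonstration.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                           # 64 samples, 3 features
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # toy binary target

W1 = rng.normal(scale=0.1, size=(3, 8))                # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(8, 1))                # hidden -> output weights
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    # Forward pass: each layer computes its representation
    # from the representation in the previous layer.
    h = sigmoid(X @ W1)                                 # hidden representation
    p = sigmoid(h @ W2)                                 # output prediction

    # Backward pass: the error signal indicates how each
    # internal parameter should change.
    d_out = (p - y) / len(X)                            # loss gradient at the output
    grad_W2 = h.T @ d_out
    d_hidden = (d_out @ W2.T) * h * (1 - h)             # chain rule through the sigmoid
    grad_W1 = X.T @ d_hidden

    # Gradient-descent update of the internal parameters.
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
```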

The revival, and the hype, have been promulgated by a group of researchers who strongly believe that DL will see many more successes in the near future because it requires very little engineering by hand, and can therefore easily take advantage of increases in available computation and data. Indeed, AI’s leading researchers have heralded DL as a revolutionary technique that can scale effectively.

Major organisations such as Google, Amazon, Microsoft, Alibaba and Baidu deploy DL for mission-critical problems. And since DL requires massive computational power, they do so with hardware available on premises.


Is Deep Learning Successful?

A counterpoint comes from Filip Piekniewski and other researchers working on computer vision and AI.

Breakthrough Masqueraded As Revolution: In a recent article, Piekniewski argued that the hype has partly been propagated by AI heavyweights, who, he claimed, masqueraded breakthroughs as gigantic revolutions. For example, DeepMind’s AlphaGo Zero, which required a massive amount of computational power, demonstrated little beyond its applicability to the game itself, and in his view it underscored Moravec’s Paradox well.

Researchers Are Debunking Their Own Theories: Last year, the University of Toronto’s Geoffrey Hinton dismissed his own technique, suggesting that the backpropagation algorithm has outlived its utility in the AI world. In fact, the well-known computer scientist outlined several reasons why it is implausible for the brain to perform backpropagation. Instead, Hinton proposed a different approach, the capsule method, in which small groups of neurons are organised into layers to identify objects in images or video; see the sketch below. Each capsule identifies a particular feature of an image and can recognise it from varying angles.
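As a rough illustration, and our sketch rather than Hinton’s own code, the core of the capsule idea can be shown with the “squash” non-linearity and the routing-by-agreement loop from the capsule networks paper. The capsule counts, dimensions and iteration count below are illustrative assumptions.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    # Squash keeps a capsule's output orientation but scales its length
    # into [0, 1), so length can be read as feature-presence probability.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def routing_by_agreement(u_hat, n_iters=3):
    # u_hat: predictions from lower capsules for each higher capsule,
    # shape (num_lower, num_higher, dim). Routing iteratively sends more of
    # a lower capsule's output to the higher capsules that agree with it.
    n_lower, n_higher, _ = u_hat.shape
    b = np.zeros((n_lower, n_higher))                       # routing logits
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)              # weighted sum per higher capsule
        v = squash(s)                                       # higher-capsule outputs
        b += (u_hat * v[None, :, :]).sum(-1)                # agreement = dot product
    return v

# Toy usage: 6 lower capsules voting for 2 higher capsules of dimension 4.
u_hat = np.random.default_rng(1).normal(size=(6, 2, 4))
print(routing_by_agreement(u_hat).shape)                    # (2, 4)
```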

Corporates May Be Losing Interest: Piekniewski believes that corporates are gradually losing interest, citing the earlier Facebook reshuffle in which Yann LeCun stepped down as Head of AI and took on an advisory role as chief AI scientist. While research has taken on a more evolutionary slant, we believe corporations are forging deeper ties with institutes and have even set up dedicated labs to harvest university talent.

DL’s Biggest Failure Is Self-Driving Vehicles: He further argues that DL’s biggest failure is self-driving vehicles, which are yet to see the light of day. Despite the ground-breaking progress of DL in autonomous systems, the technology is not foolproof, as was evident from the recent crash of a self-driving Uber test vehicle that claimed a life in Arizona. Elon Musk, whose Tesla vehicles have also seen several crashes, had hoped that exponential progress in neural network capabilities would enable an accident-free coast-to-coast autonomous drive.

DL Cannot Be Scaled: Piekniewski says that the only real insight gained from the hype around DL is that a 10,000+ dimensional image space contains enough regularities to generalise across many images and give the impression that classifiers actually understand what they see. Another key point he highlights: reinforcement learning techniques have worked well on games so far but have yet to find a footing in real-world applications, and they require massive amounts of compute and data to build models.

Yet, Deep Learning Is Seeing Progress In Several Applications

The tech community is divided between naysayers and researchers who believe DL will be the workhorse of future technical innovation. Despite drawing criticism from various quarters, DL has made tremendous progress, thanks to improvements in hardware, software and algorithm parallelisation that have brought down training time. AI’s leading researchers assert that DNN results, powered by GPUs, spurred leading companies such as Google, Microsoft, Amazon, Intel, Adobe and IBM to initiate R&D projects that leverage ConvNets. A research report from the University of Toronto pointed out that leading semiconductor companies such as NVIDIA, Intel, Samsung and Qualcomm are now making ConvNet chips to power real-time vision applications in self-driving cars, cameras and smartphones.
