
Neural Networks Will Soon Be The Lifeline Of Battery-Operated Devices

In an era of digitisation, where an entire generation relies on electronic and battery-operated devices in day-to-day life, large-scale consumption of electric power is an important aspect to bear in mind. With traditional, non-renewable power-generating resources slowly running out, alternative energy resources and the optimisation of products and services should be given greater priority. Over the past decade, the field of machine learning has ventured into developing devices that run on extremely low power.

Neural networks, also known as neural nets, are the fundamental elements of machine learning algorithms, and they have paved the way for bringing new devices to market that consume almost negligible electric power. The constituents that make up a network process information in parallel, which makes the architecture suitable for creating products that use less power.

With advancements in artificial intelligence picking up pace in the last few years, specifically in areas such as face and speech recognition, neural network implementations have become more efficient. One such study from MIT focuses on these improvements, with materials such as silicon helping to create better electronic chips that promise much greater stability and lower power consumption for neural networks.

MIT Brings More Power

The work by Avishek Biswas and his mentor, Anantha Chandrakasan, at the Massachusetts Institute of Technology (MIT) showcases improved silicon chips that facilitate quicker processing of neural network computations. The chip is estimated to be three to seven times faster than its predecessor chips, with a drastic 95 percent decrease in power consumption.

“The general processor model is that there is a memory in some part of the chip, and there is a processor in another part of the chip, and you move the data back and forth between them when you do these computations,” says Biswas, the lead researcher on the project. He adds that machine learning (ML) algorithms require computations that relay data back and forth in exactly this way, and that this data movement accounts for the major chunk of the power consumption. To resolve this, the memory in the new chips incorporates the dot-product function itself.
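
To make the idea concrete, here is a minimal sketch, in plain Python, of the dot-product operation being described. It only illustrates the arithmetic the chip moves into memory, not the chip's actual analog circuitry, and the function name is invented for the example.

```python
def dot_product(inputs, weights):
    """Multiply each input by its weight and accumulate the results.
    This weighted sum is the operation the new chips compute in memory."""
    total = 0.0
    for x, w in zip(inputs, weights):
        total += x * w
    return total

print(dot_product([0.5, -1.0, 2.0], [1.0, 0.25, -0.5]))  # -0.75
```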

Neural networks are webs of interconnected points, commonly known as “nodes”, that act cohesively to perform specific tasks. These nodes form layers of abstraction, which hide finer details at the machine level, and they act as points of data exchange. A receiving node multiplies each piece of data arriving from the delivering nodes by a weight and sums the results into its output; in other words, it computes a dot product. The network is “trained” by adjusting these weights as more and more data is fed into it. The MIT project implements the dot product inside memory itself, acting directly on electric voltages, which relieves the stress of frequently switching between a processor and memory for larger computations. The chip tested worked on 16 nodes at a time.
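
As a rough illustration of that parallelism, the following NumPy sketch (all names invented for the example) evaluates every node's dot product in a layer at once, analogous to the chip computing 16 nodes at a time:

```python
import numpy as np

def layer_forward(inputs, weights):
    # Each row of `weights` holds one node's weights; the matrix-vector
    # product evaluates every node's dot product in a single step.
    return weights @ inputs

rng = np.random.default_rng(0)
x = rng.standard_normal(64)        # activations from the previous layer
W = rng.standard_normal((16, 64))  # weights for a batch of 16 nodes
print(layer_forward(x, W).shape)   # (16,)
```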

In the project, weights of +1 and -1 are assigned to the connections between nodes, which means the neural net follows a binary rule: according to each weight, power is either consumed or not consumed in the chip. This weighting scheme theoretically helps the neural net achieve the best possible accuracy in its output.
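
A short sketch of what binary weights buy in practice: when every weight is +1 or -1, each multiplication collapses into adding or subtracting the input, so no multiplier is needed at all. This is only a software illustration of the principle, not the chip's circuitry.

```python
import numpy as np

def binary_dot(inputs, weights):
    # With weights restricted to +1/-1, the dot product reduces to
    # summing the inputs with positive weights and subtracting the rest.
    return inputs[weights == 1].sum() - inputs[weights == -1].sum()

rng = np.random.default_rng(1)
x = rng.standard_normal(8)
w = rng.choice([-1, 1], size=8)
print(binary_dot(x, w), float(x @ w))  # agree, up to floating-point rounding
```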

The Possibilities

These days, neural networks form an integral part of handheld devices, especially smartphones. Neural nets would be useless in these devices if they could not be powered adequately, not to mention the complex computations they handle. Innovations in electronic components like the one described above open up the possibility of powering devices that incorporate neural networks with extremely little power. For example, Apple’s A11 Bionic processor has a neural engine that drives FaceID, the signature facial recognition feature of the iPhone X, providing the higher performance that facial recognition demands. Smartphone manufacturing giant Samsung has also incorporated a similar feature, using neural networks and deep learning, in its latest Exynos processor chips.

Conclusion

On an ending note, companies looking to integrate neural networks into their products or services must certainly account for the additional overhead, such as the cost of the better electronic components that make up their product, in order to utilise the full potential of ML.