
Neural Networks Will Soon Be The Lifeline Of Battery-Operated Devices


In an era of digitisation, where an entire generation relies on electronic and battery-operated devices in day-to-day life, large-scale consumption of electric power is an important aspect to bear in mind. With traditional, non-renewable power-generating resources slowly running out, alternative energy resources and the optimisation of products and services should be given more priority. In the past decade, the field of machine learning has ventured into developing devices that run on extremely low power.

Neural networks, also known as neural nets, are the fundamental elements of machine learning algorithms and have paved the way for bringing new devices to market that consume almost negligible electric power. The constituents that make up a network process information in parallel, which makes the approach suitable for creating products that use less power.

With advancements in artificial intelligence, specifically in areas such as face and speech recognition, picking up pace in the last few years, neural network implementations have become more efficient. One such study by MIT focuses on hardware improvements, using better silicon chip design to deliver much more stability and lower power consumption for neural networks.

MIT Brings More Power

The work by Avishek Biswas and his mentor, Anantha Chandrakasan, at the Massachusetts Institute of Technology (MIT) showcases an improved silicon chip that facilitates quicker processing of neural network computations. It is estimated to be three to seven times faster than its predecessor chips while drastically reducing power consumption, by as much as 95 percent.

“The general processor model is that there is a memory in some part of the chip, and there is a processor in another part of the chip, and you move the data back and forth between them when you do these computations,” says Avishek, the lead researcher on the project. He adds that machine learning (ML) algorithms require computations that relay data back and forth in exactly this way, and that this data movement accounts for the major chunk of the power consumed. To resolve this, the memory in the new chip incorporates the dot-product computation directly.

Neural networks are webs of interconnected points, commonly known as “nodes”, that act cohesively to perform specific tasks. These nodes form layers of abstraction, which hide the finer, machine-level details, and they act as points of data exchange. A receiving node multiplies each value arriving from the delivering nodes by a weight and sums the results before presenting the output onward; this is how neural networks work in general. The network is “trained” by adjusting these weights as more and more data is fed into it. The MIT project performs this multiply-and-sum operation, the dot product, inside memory itself: the computation acts on the electric voltages where the weights are stored, which reduces the stress of frequently switching between a processor and memory for larger computations. The chip tested worked on 16 nodes at a time.
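As a rough illustration of the computation each node performs, here is a minimal Python sketch of the weighted sum, or dot product, described above. The variable names and values are hypothetical and are not taken from the MIT implementation.

```python
# Minimal sketch of how a single node combines its inputs: each incoming
# value is multiplied by a connection weight and the products are summed,
# i.e. a dot product. Illustrative only; not the MIT chip's circuitry.

def node_output(inputs, weights):
    """Compute the weighted sum (dot product) a node passes forward."""
    assert len(inputs) == len(weights)
    return sum(x * w for x, w in zip(inputs, weights))

# Example: a node receiving data from three upstream (delivering) nodes.
incoming = [0.5, -1.2, 0.8]   # outputs of the delivering nodes
weights = [0.9, 0.3, -0.4]    # learned connection strengths

print(node_output(incoming, weights))  # 0.45 - 0.36 - 0.32 = -0.23
```

In software this loop runs on a processor that fetches the weights from memory each time; the MIT chip's gain comes from performing the same accumulation in place, on voltages, where the weights already reside.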

In the project, weights of +1 and −1 are assigned to the connections between nodes, which means the neural net follows a binary rule: depending on the weight, the corresponding voltage is either added or subtracted, so power is either consumed or not consumed for each connection. This binary weighting theoretically still allows the network to achieve close to its best accuracy.
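A hedged sketch of what binary weights buy you, under the same illustrative assumptions as above: with every weight restricted to +1 or −1, each multiplication collapses into a plain addition or subtraction, the kind of operation that is cheap to realise as analog voltage sums.

```python
# Sketch of a binary-weight dot product: weights of +1/-1 turn every
# multiplication into an add or a subtract. Illustrative assumption,
# not the chip's actual circuit design.

def binary_dot(inputs, signs):
    """Dot product where every weight is +1 or -1."""
    assert all(s in (+1, -1) for s in signs)
    return sum(x if s == +1 else -x for x, s in zip(inputs, signs))

inputs = [0.5, -1.2, 0.8]
signs = [+1, -1, -1]              # binary weights
print(binary_dot(inputs, signs))  # 0.5 + 1.2 - 0.8 = 0.9
```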

The Possibilities

These days, neural networks form an integral part of handheld devices, especially smartphones. Neural nets are of little use in such devices if the power they draw cannot be sustained, particularly given the complex computations they handle. Innovations in electronic components like the one described above open up the possibility of powering devices that incorporate neural networks on extremely low power. For example, Apple’s A11 Bionic processor has a neural engine that drives FaceID, the signature facial recognition feature of its iPhone X, delivering the performance facial recognition demands without excessive power draw. Smartphone manufacturing giant Samsung has also incorporated a similar capability, using neural networks and deep learning, in its latest Exynos processor chips.

Conclusion

On an ending note, companies looking to integrate neural networks into their products or services must weigh the additional overhead, such as the cost of the better electronic components that make up their product, in order to utilise the potential of ML.


PS: The story was written using a keyboard.