Neural Networks Will Soon Be The Lifeline Of Battery-Operated Devices

In an era of digitisation, where an entire generation relies on electronic, battery-operated devices in their day-to-day lives, large-scale consumption of electric power is an important aspect to bear in mind. With traditional, non-renewable power-generating resources slowly running out, alternative energy sources and the optimisation of products and services deserve greater priority. Over the past decade, the field of machine learning has ventured into developing devices that run on extremely low power.

Neural networks, also known as neural nets, are the fundamental elements of machine learning algorithms, and they have paved the way for new devices that consume almost negligible electric power. The constituents that make up a network process information in parallel, which makes the approach well suited to products that use less power.

With advancements in artificial intelligence, specifically in areas such as face and speech recognition, picking up pace in the last few years, neural network implementations have become more efficient. One such study by MIT focuses on improvements at the hardware level: better silicon chips that promise much more stability and lower power consumption for neural networks.

MIT Brings More Power

The work by Avishek Biswas and his mentor, Anantha Chandrakasan, at the Massachusetts Institute of Technology (MIT) showcases the development of improved silicon chips that facilitate quicker processing of neural network computations. The chip is estimated to run these computations three to seven times faster than its predecessors while cutting power consumption by as much as 95 percent.

“The general processor model is that there is a memory in some part of the chip, and there is a processor in another part of the chip, and you move the data back and forth between them when you do these computations,” says Biswas, the lead researcher on the project. He adds that machine learning (ML) algorithms require computations that involve relaying data back and forth, and this data movement accounts for the major chunk of power consumption. To resolve this, the memory in these chips incorporates the dot-product function itself.
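To make the data-movement argument concrete, here is a rough Python sketch that simply counts how many values must cross the memory boundary for one dot product. The one-transfer-per-operand accounting is a simplifying assumption for illustration, not a measurement from the MIT chip.

```python
# Toy accounting of data movement for one dot product of length N.
# Assumption (not from the article): each operand fetched from or written
# back to memory counts as one transfer.

N = 16  # the MIT prototype handled 16 nodes' dot products at a time

# Conventional model: weights and inputs both travel to the processor,
# and the result travels back.
conventional = 2 * N + 1   # N weights + N inputs + 1 result

# Dot-product-in-memory model: weights stay where they are stored,
# inputs are applied in place, and only the accumulated sum moves.
in_memory = N + 1          # N inputs + 1 result

print("conventional transfers:", conventional)
print("in-memory transfers   :", in_memory)
```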

Neural networks are a web of interconnected points, commonly known as “nodes”, that act cohesively to perform specific tasks. These nodes form layers of abstraction, which hide the finer details at the machine level, and they act as points of data exchange. A receiving node multiplies the data arriving from the sending nodes by its weights and sums the products to present an output; the network is then “trained” as it learns from more and more of the data fed into it. The MIT project implements the dot product inside the nodes, since they also act as memory points. The dot product operates on the electric voltages at the nodes and reduces the strain of frequently switching between a processor and memory for larger computations. The chip tested worked on 16 nodes at a time.
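As a minimal sketch of the computation a single node performs, the snippet below forms the dot product of 16 incoming values with 16 weights and passes it through an activation function. The array sizes, variable names, and the choice of tanh are illustrative assumptions, not details of the MIT design.

```python
import numpy as np

rng = np.random.default_rng(0)

inputs  = rng.standard_normal(16)   # data arriving from the sending nodes
weights = rng.standard_normal(16)   # one weight per incoming connection

# The receiving node multiplies each input by its weight and sums the
# products: a single dot product, followed by a nonlinearity.
pre_activation = np.dot(weights, inputs)
output = np.tanh(pre_activation)

print(output)
```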

In the project, weights of +1 and -1 are assigned to the connections between nodes, which means the neural net follows a binary rule: power is either consumed or not consumed in the chip, according to that rule. In theory, these binary weights still allow the network to come close to the best accuracy a neural net can achieve.
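A hedged sketch of what this binarisation looks like in software: real-valued weights are snapped to +1 or -1 (their sign), so every multiplication reduces to keeping or flipping the sign of an input. The names and random data below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

inputs       = rng.standard_normal(16)
real_weights = rng.standard_normal(16)

bin_weights = np.sign(real_weights)   # every weight becomes +1 or -1
bin_weights[bin_weights == 0] = 1     # avoid zeros from sign()

full_precision_output = np.dot(real_weights, inputs)
binary_output         = np.dot(bin_weights, inputs)

print(full_precision_output, binary_output)
```

Because each product is just the input or its negation, the multiply collapses into a simple keep-or-flip decision, in keeping with the consume-or-not-consume rule described above.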

The Possibilities

These days neural networks form an integral part of handheld devices, especially smartphones, and the complex computations they handle make them impractical if the device cannot power them efficiently. Innovations in electronic components like the one described above open up the possibility of running neural networks on extremely low power. For example, Apple’s A11 Bionic processor has a neural engine that drives its signature facial-recognition feature, Face ID, on the iPhone X, offsetting the higher performance demands of facial recognition. Smartphone manufacturing giant Samsung has also incorporated a similar capability, using neural networks and deep learning, in its latest Exynos processor chips.

Conclusion

On an ending note, companies looking to integrate neural networks into their products or services must weigh the additional overhead, such as the cost of the better electronic components needed to realise the potential of ML.

 

