Researchers Combine Brain-Like Neurons and FPTT For Faster Neural Nets

The approach can be used to train networks with over 6 million neurons

A new study in Nature Machine Intelligence demonstrates an approach to training spiking neural networks at scale. By combining brain-like neurons with Forward-Propagation Through Time (FPTT), the researchers achieved both speed and energy efficiency in their neural networks. The potential applications of this technology are vast, ranging from wearable AI to speech recognition and augmented reality (AR). Furthermore, chips are being developed that can run these programs at very low power.

The research by Bojian Yin and Sander Bohté from the HBP partner Dutch National Research Institute for Mathematics and Computer Science (CWI) is a significant step towards energy-efficient AI, with applications ranging from speech recognition to local surveillance.

Spiking neural networks closely mimic how biological neurons communicate: by exchanging electrical pulses, and only sparingly. These networks, implemented in chips known as neuromorphic hardware, bring AI programs directly to users’ devices while maintaining privacy. This is particularly relevant in speech recognition for toys and appliances, as well as local surveillance. The algorithm enables learning directly from data, allowing much larger spiking neural networks to be created.
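To make the sparse, pulse-based communication concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the textbook model behind most spiking networks. This is an illustrative toy, not the specific neuron model used in the study; the parameter values are arbitrary.

```python
def lif_step(v, x, tau=20.0, v_th=1.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.

    v: membrane potential, x: input current,
    tau: membrane time constant, v_th: firing threshold.
    """
    v = v + (dt / tau) * (-v + x)   # leaky integration of the input
    spike = 1.0 if v >= v_th else 0.0
    if spike:
        v = 0.0                     # reset after emitting a pulse
    return v, spike

# Drive the neuron with a constant input and collect its spike train:
# it fires only occasionally, which is what makes spiking nets so sparse.
v, spikes = 0.0, []
for t in range(100):
    v, s = lif_step(v, x=1.5)
    spikes.append(s)
print(sum(spikes))  # only a handful of spikes over 100 time steps
```

The neuron stays silent until its accumulated potential crosses the threshold, so information is carried by a few discrete pulses rather than a continuous activation, which is the source of the energy savings on neuromorphic hardware.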

According to Bohté, spiking neural networks could previously be trained with at most around 10,000 neurons; the same can now be done for networks with more than 6 million neurons, making it possible to train networks like SPYv4.

The way these networks communicate poses serious challenges. “The algorithms needed for this require a lot of computer memory, allowing us to only train small network models mostly for smaller tasks. This holds back many practical AI applications so far,” said Bohté. 

Yin said the team wanted to develop something closer to the way our brain learns. He made an analogy: when you make a mistake during a driving lesson, you immediately learn from it and adjust your behavior on the spot.

To replicate this process, the researchers developed a neural net in which each individual neuron receives a constantly updated stream of information. This allows the network to adapt and learn in real time, rather than having to store and process all previous information. This approach is a major upgrade from current methods, which require significant computing power and memory. 
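The "learn on the spot" idea can be sketched as online, per-step learning: the model updates its parameters immediately from the current error and keeps no stored trajectory. The toy below learns a linear mapping from a stream this way; it illustrates the constant-memory principle, not the authors' actual FPTT algorithm, and all names and values are illustrative.

```python
def online_train(stream, lr=0.1):
    """Learn a single weight from a data stream, one sample at a time.

    Each step updates the weight immediately from the current error,
    so memory use is O(1) instead of O(T) for a stored history (as in
    backpropagation through time).
    """
    w = 0.0
    for x, target in stream:    # constantly updated stream of information
        pred = w * x            # forward pass on the current sample only
        err = pred - target     # immediate error signal
        w -= lr * err * x       # adjust behavior on the spot
    return w

# Learn y = 2x from a repeating stream of samples.
stream = [(x, 2.0 * x) for x in [0.5, -1.0, 0.8, 1.2, -0.3] * 20]
w = online_train(stream)
print(round(w, 2))  # converges towards 2.0
```

Because nothing about past inputs is retained, memory cost stays constant no matter how long the stream runs; this is the property that lets much larger networks fit in the same hardware budget.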

By enabling the network to learn and adapt on the fly, the team aims to make machine learning both faster and more energy-efficient, with potential applications in fields from healthcare to transportation.

Tasmia Ansari

Tasmia is a tech journalist at AIM, looking to bring a fresh perspective to emerging technologies and trends in data science, analytics, and artificial intelligence.