A study published in Nature Machine Intelligence demonstrates a new approach to training spiking neural networks at scale. By combining brain-like neurons with Forward Propagation Through Time (FPTT), the researchers achieved both speed and energy efficiency in their networks. The potential applications of this technology are vast, ranging from wearable AI to speech recognition and augmented reality (AR). Chips are also being developed that can run these programs at very low power.
The research by Bojian Yin and Sander Bohté from the Human Brain Project (HBP) partner Dutch National Research Institute for Mathematics and Computer Science (CWI) is a significant step towards more efficient AI, with uses ranging from speech recognition to local surveillance.
Spiking neural networks closely mimic the way biological neurons exchange electrical pulses, firing only sparingly. Implemented in chips known as neuromorphic hardware, these networks bring AI programs directly to users’ devices while maintaining privacy. This is particularly relevant for speech recognition in toys and appliances, as well as for local surveillance. The new algorithm enables learning directly from data, allowing much larger spiking neural networks to be created.
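To make the sparse, pulse-based communication concrete, here is a minimal sketch of a single leaky integrate-and-fire (LIF) neuron, the standard building block of spiking networks. The function name, parameters, and values are illustrative assumptions, not taken from the study:

```python
import numpy as np

def lif_neuron(inputs, tau=0.9, threshold=1.0):
    """Simulate one leaky integrate-and-fire neuron (illustrative sketch).

    The membrane potential leaks toward zero each step, accumulates the
    incoming current, and emits a binary spike (then resets) only when it
    crosses the threshold, so most time steps produce no output at all.
    """
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = tau * potential + x   # leak, then integrate input
        if potential >= threshold:
            spikes.append(1)              # fire a spike
            potential = 0.0               # reset after firing
        else:
            spikes.append(0)
    return spikes

# A weak, noisy input current produces only occasional spikes:
rng = np.random.default_rng(0)
print(lif_neuron(rng.uniform(0, 0.5, size=20)))
```

It is this sparsity, with most neurons silent at any moment, that makes spiking networks attractive for low-power neuromorphic hardware.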
According to Bohté, spiking neural networks could previously be trained with up to 10,000 neurons; the same can now be done for networks with more than six million neurons. This capability makes it possible to train networks like SPYv4.

Learning in these networks, however, poses serious challenges. “The algorithms needed for this require a lot of computer memory, allowing us to only train small network models mostly for smaller tasks. This holds back many practical AI applications so far,” said Bohté.
Yin said the team wanted to develop something closer to the way our brain learns. He made an analogy: when you make a mistake during a driving lesson, you immediately learn from it and adjust your behavior on the spot.
To replicate this process, the researchers developed a neural network in which each individual neuron receives a constantly updated stream of information. This allows the network to adapt and learn in real time, rather than having to store and reprocess all previous information. The approach is a major improvement over current methods, which require significant computing power and memory.
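The contrast with conventional training can be sketched in a few lines of PyTorch. The snippet below shows the general idea of updating parameters online at every time step and discarding the stored history, rather than backpropagating through an entire sequence. It is a simplified illustration of online learning, not the authors’ exact FPTT algorithm (which additionally uses a dynamically regularized loss), and the toy recurrent cell, layer sizes, and random data are assumptions:

```python
import torch

# Toy stand-in for a recurrent network; the study uses spiking neurons.
cell = torch.nn.RNNCell(input_size=4, hidden_size=8)
readout = torch.nn.Linear(8, 1)
params = list(cell.parameters()) + list(readout.parameters())
opt = torch.optim.SGD(params, lr=0.01)

h = torch.zeros(1, 8)
for t in range(100):
    x_t = torch.randn(1, 4)       # current input sample (random placeholder)
    target_t = torch.randn(1, 1)  # current target (random placeholder)
    h = cell(x_t, h)
    loss = torch.nn.functional.mse_loss(readout(h), target_t)

    opt.zero_grad()
    loss.backward()               # gradient for this time step only
    opt.step()                    # update weights immediately, on the spot
    h = h.detach()                # drop history: no backprop through time
```

Because the hidden state is detached after every update, the memory cost stays constant in the sequence length instead of growing with it, which is what makes training much larger networks feasible.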
By enabling the network to learn and adapt on the fly, the team hopes to make machine learning faster and more energy-efficient, with potential applications in fields ranging from healthcare to transportation.