Researchers at the Technical University of Berlin have proposed a method for folding a deep neural network of arbitrary size into a single neuron with multiple time-delayed feedback loops. The single-neuron deep neural network consists of only one nonlinearity together with appropriately adjusted modulations of the feedback signals: by tuning the feedback modulation within the loops, the team set the network's connection weights. These weights are trained via a back-propagation algorithm that accounts for both the delay-induced and the local network connections. The new method, the Folded-in-time DNN (Fit-DNN), performed well on a set of benchmark tasks.
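The folding idea can be illustrated with a toy discrete-time sketch (the actual Fit-DNN operates in continuous time with a delay-differential equation; the mapping below, including the variable names `W`, `s`, and the delay formula, is a simplified assumption for illustration). Node n of layer l is assigned to time step t = l*N + n of a single signal, and a connection from node m of the previous layer becomes a feedback loop with delay N + n - m, modulated by the corresponding weight. One nonlinearity, applied repeatedly in time, then reproduces the layer-by-layer evaluation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 4, 3                      # nodes per layer, number of hidden layers
f = np.tanh                      # the single nonlinearity

# Random connection weights for each layer: W[l][n, m]
W = [rng.normal(size=(N, N)) for _ in range(L)]
u = rng.normal(size=N)           # input fed to the first layer

# --- conventional evaluation, layer by layer ---
x = u.copy()
for l in range(L):
    x = f(W[l] @ x)
conventional = x

# --- folded-in-time evaluation: one neuron with delayed feedback ---
T = (L + 1) * N                  # total number of time steps
s = np.zeros(T)
s[:N] = u                        # the first N time steps encode the input
for t in range(N, T):
    l, n = divmod(t, N)          # which layer/node this time step encodes
    # Each summand is a delayed copy of the signal, modulated by a weight:
    # delay N + n - m points back to node m of layer l - 1.
    pre = sum(W[l - 1][n, m] * s[t - (N + n - m)] for m in range(N))
    s[t] = f(pre)
folded = s[-N:]                  # states of the last layer

print(np.allclose(conventional, folded))  # prints True
```

The sketch shows why the weights live in the feedback modulations: the same physical nonlinearity is reused at every time step, and only the multiplicative factors on the delayed signals distinguish one connection from another.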
Folding-in-time concepts have already been developed successfully for a related machine-learning method called reservoir computing, which is based on fixed recurrent neural networks. The single-neuron concept offers clear advantages for hardware implementation from a manufacturing standpoint. Moreover, based on other results in the field of so-called time-delay reservoir computing, the researchers believe their method could also reduce the energy consumption of artificial neural networks.
If the time delay between two neurons "located" directly next to each other in time were shortened further, the number of emulated neurons could in principle be increased without limit, the team posited.