There are some fascinating machine learning topics that aren't discussed nearly as often as they deserve to be. In this instalment of our "Underrated but intriguing machine learning topics" series, we look at the Dehaene–Changeux model, GeneRec, Leabra, and the wake-sleep algorithm.
The Dehaene–Changeux model (DCM) is a computer model of the neural correlates of consciousness, programmed as a neural network. It was created in 1986 by the cognitive neuroscientists Stanislas Dehaene and Jean-Pierre Changeux.
According to Dehaene, the model uses reward-dependent learning in neural networks for planning and decision-making. Changeux describes it as an attempt to reproduce higher cognitive capabilities of the brain, including consciousness, decision-making, and central executive functions. According to Changeux and Jean-Pierre Nadal, it has been used to provide a predictive framework for research on inattentional blindness and for solving the Tower of London test.
As an explanatory model of the brain's emergent processes, including consciousness, the Dehaene–Changeux model has contributed in particular to research on nonlinearity and self-organised criticality. Boly used the model to show a link between baseline brain activity and somatosensory perception in humans, and Philips employed the DCM in a study of the baseline state of awareness in the human brain's default network.
According to Randall C. O'Reilly, GeneRec approximates Almeida–Pineda recurrent backpropagation and is a generalisation of the recirculation algorithm. O'Reilly further notes that GeneRec uses symmetric weights to carry error signals from the output layer back to the hidden layer(s) via recurrent activation flow.
The GeneRec algorithm is closely related to the contrastive Hebbian learning algorithm (CHL, also known as the mean-field or deterministic Boltzmann machine (DBM) learning algorithm), which performs error-driven learning in recurrently connected networks using only locally available activation variables. As Ackley showed, the activation states of stochastic networks can be characterised by the Boltzmann distribution. Because the GeneRec family of algorithms subsumes the known methods for performing error-driven learning with local variables, it offers a promising framework for thinking about how error-driven learning might be implemented in the brain.
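To make the two-phase structure shared by GeneRec and CHL concrete, here is a rough NumPy sketch. Everything in it (layer sizes, learning rate, the toy pattern) is invented for illustration, and the network's iterative settling is collapsed into a single feedforward/feedback pass, so this is a caricature of a real implementation, not O'Reilly's published code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer network: input -> hidden -> output.
n_in, n_hid, n_out = 4, 8, 2
W_ih = rng.normal(0, 0.1, (n_in, n_hid))   # input -> hidden weights
W_ho = rng.normal(0, 0.1, (n_hid, n_out))  # hidden -> output (used symmetrically)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def generec_update(x, target, lr=0.1):
    """One GeneRec-style update using minus and plus phases."""
    global W_ih, W_ho
    # Minus phase: the network settles with only the input clamped;
    # the output is the network's own prediction.
    h_minus = sigmoid(x @ W_ih)
    y_minus = sigmoid(h_minus @ W_ho)
    # Plus phase: the output is clamped to the target, and the hidden
    # layer also feels the clamped output through the symmetric weights.
    h_plus = sigmoid(x @ W_ih + target @ W_ho.T)
    # GeneRec rule: delta_w = lr * pre(minus) * (post_plus - post_minus).
    W_ho += lr * np.outer(h_minus, target - y_minus)
    W_ih += lr * np.outer(x, h_plus - h_minus)
    return y_minus

# Train on a single toy pattern association.
x = np.array([1.0, 0.0, 1.0, 0.0])
t = np.array([1.0, 0.0])
for _ in range(200):
    y = generec_update(x, t)
```

The key property on display is locality: each weight change uses only the activations of the two units it connects, in two phases, rather than an explicitly backpropagated error signal.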
O'Reilly concludes that because GeneRec requires symmetric weights but does not itself preserve weight symmetry, no single GeneRec variant is exactly equivalent to the Almeida–Pineda technique for backpropagation in recurrent networks.
Leabra is a biologically realistic learning algorithm that is local, error-driven, and associative. It is a learning model that strikes a balance between Hebbian and error-driven learning, combined with other network-derived features. The model both draws on and contributes to neural network designs and models. It is the default algorithm when creating a new project in Emergent (the successor to PDP++) and is widely used in simulations.
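A minimal sketch of how such a mixed rule can look, assuming scalar activations and a hypothetical mixing parameter `k_hebb`; the real Leabra algorithm involves much more (point-neuron dynamics, k-winners-take-all inhibition, contrast enhancement), so treat this only as an illustration of the Hebbian/error-driven blend:

```python
def leabra_style_update(w, x_minus, y_minus, x_plus, y_plus,
                        lr=0.05, k_hebb=0.01):
    """Illustrative Leabra-style weight update (not the full algorithm).

    Blends a self-limiting Hebbian term (the weight is pulled toward the
    pre-synaptic activity in proportion to post-synaptic activity) with a
    CHL-style error-driven term (plus-phase minus minus-phase co-products),
    weighted by the hypothetical mixing constant k_hebb.
    """
    hebb = y_plus * (x_plus - w)                  # Hebbian component
    err = x_plus * y_plus - x_minus * y_minus     # error-driven component
    return w + lr * (k_hebb * hebb + (1.0 - k_hebb) * err)

# Toy usage with made-up activation values.
w = 0.3
w_new = leabra_style_update(w, x_minus=0.6, y_minus=0.2,
                            x_plus=0.6, y_plus=0.9)
```

With a small `k_hebb`, learning is dominated by the error-driven term, while the Hebbian term keeps weights tied to the statistics of the inputs, which is the balance the Leabra literature emphasises.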
The Leabra framework began with a neural network algorithm that captured the essential computational properties of neocortical neurobiology underlying a wide range of cognitive tasks. Arguably more than any other proposed cognitive architecture, Leabra is grounded directly in the underlying biology of the brain, with a set of biologically accurate neural processing mechanisms at its core. Seth concluded that many researchers appeal to Leabra's work on error-driven learning to justify their use of error-backpropagation models. Similarly, there are numerous abstract computational implementations of the key ideas behind the PBWM model of prefrontal cortex/basal ganglia working memory (O'Reilly, Herd, & Pauli), which can take advantage of the PBWM model's biological links.
Geoffrey Hinton describes the wake-sleep algorithm as an unsupervised learning algorithm for stochastic multilayer neural networks. The algorithm aims to learn representations that are economical to describe yet allow the input to be reconstructed accurately. It is usually depicted as a stack of layers holding representations of the data, with each layer representing the data in the layer below it. Between each pair of layers there is a set of recognition weights and a set of generative weights, which are trained to make these representations more reliable as the algorithm runs.
The wake-sleep approach is used to train the Helmholtz machine, a neural network model. The restricted Boltzmann machine is a related type of neural network trained with a conceptually similar approach. Jörg Bornschein has suggested that the wake-sleep method alone is not powerful enough for the layers of the inference network to yield a good estimator of the posterior distribution of the latent variables.
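To make the two phases concrete, here is a hedged NumPy sketch of a one-hidden-layer Helmholtz machine trained on a single toy pattern. The layer sizes, learning rate, and data are invented for illustration; a real implementation would use multiple layers and a dataset:

```python
import numpy as np

rng = np.random.default_rng(1)

n_vis, n_hid = 6, 3
R = rng.normal(0, 0.1, (n_vis, n_hid))  # recognition weights: visible -> hidden
G = rng.normal(0, 0.1, (n_hid, n_vis))  # generative weights: hidden -> visible
g_bias = np.zeros(n_hid)                # generative bias on the hidden layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample(p):
    """Sample binary units from their activation probabilities."""
    return (rng.random(p.shape) < p).astype(float)

def wake_sleep_step(x, lr=0.05):
    global R, G, g_bias
    # Wake phase: recognise the data, then adjust the GENERATIVE weights
    # so the generative model reconstructs what recognition inferred.
    h = sample(sigmoid(x @ R))
    x_recon = sigmoid(h @ G)
    G += lr * np.outer(h, x - x_recon)
    g_bias += lr * (h - sigmoid(g_bias))
    # Sleep phase: dream a fantasy from the generative model, then adjust
    # the RECOGNITION weights to recover the dreamed hidden state.
    h_dream = sample(sigmoid(g_bias))
    x_dream = sample(sigmoid(h_dream @ G))
    h_guess = sigmoid(x_dream @ R)
    R += lr * np.outer(x_dream, h_dream - h_guess)

# Train on a single toy binary pattern.
data = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
for _ in range(500):
    wake_sleep_step(data)
```

The design point worth noticing is the asymmetry: each phase trains the weights of the *other* direction, so neither pathway ever needs a backpropagated error signal, only locally computed prediction differences.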
Nivash has a doctorate in Information Technology. He has worked as a Research Associate at a University and as a Development Engineer in the IT Industry. He is passionate about data science and machine learning.