“This is a great opportunity to continue the synergistic ‘virtuous circle’ that has connected neuroscience and AI for decades.”
Artificial neural networks occasionally get a bad rap for watering down the complexity of the human brain with over-the-top analogies. But there is no denying that popular algorithms were heavily inspired by how natural systems work. Now, after three decades of innovation and invention, AI as a domain has touched on many functions of human cognition. From attention to memory to dreams, the research space is burgeoning with every passing day. A team of researchers at DeepMind is now exploring the possibility of reverse engineering the results of algorithms to learn more about cognitive functions. Though the idea itself is nothing new, the researchers are of the opinion that one particular domain, deep reinforcement learning, hasn’t been thoroughly explored.
Reinforcement learning has already been connected with neural function in a number of ways. Perhaps the most impactful has been the link established between phasic dopamine release and the temporal-difference reward-prediction error signal.
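The temporal-difference error at the heart of this link can be sketched in a few lines. This is an illustration with made-up values, not code from the paper:

```python
# Temporal-difference (TD) reward-prediction error:
#   delta = r + gamma * V(s') - V(s)
# Phasic dopamine release is thought to resemble this signal.

def td_error(reward, v_current, v_next, gamma=0.9):
    """How much better or worse the outcome was than the agent expected."""
    return reward + gamma * v_next - v_current

# An unexpected reward (value estimates were low) yields a positive error,
# analogous to a burst of dopamine firing.
print(td_error(reward=1.0, v_current=0.0, v_next=0.0))  # 1.0
# A fully predicted reward yields zero error: no dopamine response.
print(td_error(reward=1.0, v_current=1.0, v_next=0.0))  # 0.0
```

The same signal, scaled by a learning rate, is what drives value updates in TD-learning algorithms.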
Now, with the resurgence of interest in deep learning catalyzed by recent dramatic advances in machine learning and artificial intelligence, the researchers believe it is high time deep RL got the attention it deserves.
Deep RL is built from components of deep learning and reinforcement learning and leverages the representational power of deep learning to tackle the RL problem.
“If deep RL offered no more than a concatenation of deep learning and RL in their familiar forms, it would be of limited import. But deep RL is more than this; when deep learning and RL are integrated, each triggers new patterns of behaviour in the other, leading to computational phenomena unseen in either deep learning or RL on their own,” wrote the researchers.
In their work, the researchers at DeepMind outline key areas where it appears deep RL may provide leverage for neuroscientific research. Here are a few:
Reward-based representation learning resonates with neuroscience. For example, representations of visual stimuli in the prefrontal cortex depend on which task an animal has been trained to perform, and the effects of task reward on neural responses can be seen even in the primary visual cortex.
Deep RL can build on existing toolkits and provide models of how representations can be shaped by rewards and by task demands.
The challenges within RL can be tackled with the help of unsupervised and self-supervised learning, which can yield representations with the potential to transfer to other tasks. This has strong parallels in neuroscience, where unsupervised learning and prediction learning have been proposed to shape internal representations. With deep RL, there is now an opportunity to pursue these ideas in settings where the representations produced support adaptive behaviour.
On Action Outcomes
Model-free and model-based algorithms are the two broad categories of RL. While the former learns a direct mapping from perceptual inputs to action outputs, the latter learns a ‘model’ of action-outcome relationships and uses it to plan actions by forecasting their outcomes. This dichotomy is highly relevant to neuroscience: function in several brain regions is believed to depend on how the two forms of learning trade off against one another.
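The dichotomy can be made concrete with a toy sketch. Here the tabular Q-update stands in for model-free learning and a short lookahead over a learned transition model stands in for model-based planning; all names and the two-state setup are illustrative, not the paper's:

```python
# --- Model-free: learn a direct mapping from (state, action) to value ---
Q = {}  # Q-table: (state, action) -> estimated return

def q_update(s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One Q-learning step: no model of the world, just cached values."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    td = r + gamma * best_next - Q.get((s, a), 0.0)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * td

# --- Model-based: learn action-outcome relations, then plan with them ---
model = {}  # (state, action) -> (reward, next_state)

def plan(s, actions, depth=3, gamma=0.9):
    """Forecast outcomes with the learned model; return (value, best action)."""
    if depth == 0:
        return 0.0, None
    best_val, best_a = float('-inf'), None
    for a in actions:
        if (s, a) not in model:
            continue
        r, s_next = model[(s, a)]
        future, _ = plan(s_next, actions, depth - 1, gamma)
        val = r + gamma * future
        if val > best_val:
            best_val, best_a = val, a
    if best_a is None:
        return 0.0, None
    return best_val, best_a
```

The model-free learner only improves its mapping through repeated experience, while the model-based planner can immediately exploit a change in the learned model, which is exactly the trade-off (speed and flexibility versus computational cost) studied in the brain.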
Deep RL offers neuroscientists a new platform for observing the relationship between model-free and model-based RL, helping to explain how the brain decides, from moment to moment, whether behaviour is controlled by model-free or model-based mechanisms. These deep RL architectures are also reminiscent of work in neuroscience on cognitive control mechanisms implemented in the prefrontal cortex.
On Memory
Arguably one of the most important topics in neuroscience, memory is an area where deep RL opens up novel computational possibilities. It provides a computational setting in which to investigate how memory can support reward-based learning and decision making.
So far, successful deep RL models have relied on experience replay, where past experiences are stored and intermittently replayed alongside new experiences to drive learning. This resembles the replay events observed in the hippocampus, and indeed was inspired by that phenomenon and its suspected role in memory consolidation.
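A minimal replay buffer of the kind such models rely on can be sketched as follows (illustrative only; capacity and field names are my own choices):

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores past transitions and replays random batches of them,
    loosely analogous to hippocampal replay during consolidation."""

    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest experiences fall out

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        """Draw a random mini-batch of stored experiences to drive
        learning, breaking the temporal correlations of online data."""
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

buf = ReplayBuffer(capacity=100)
for t in range(5):
    buf.add(state=t, action=0, reward=1.0, next_state=t + 1, done=False)
batch = buf.sample(3)  # replayed alongside new experience during training
```

Sampling uniformly at random is the simplest scheme; prioritized variants replay surprising transitions more often, which is itself an interesting point of contact with the neuroscience of replay.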
Deep RL memory mechanisms are being invented at a rapid rate, including systems that deploy attention and relational processing over the information held in memory, and systems that combine and coordinate working and episodic memory. The researchers believe this is among the areas where exchange between deep RL and neuroscience is most actionable and most promising.
On Cognitive Control
As deep RL research has developed, the problem of attaining competence in, and switching among, multiple tasks or skills has garnered more attention. In this context, a number of computational techniques have been developed that bear an intriguing relationship to neuroscientific models of cognitive control. According to the DeepMind researchers, one such technique is hierarchical RL, which operates at two levels: choosing among high-level, multi-step actions (e.g. ‘make coffee’) and among actions at a more atomic level (e.g. ‘grind beans’).
Deep RL research has adopted this hierarchical scheme, enabling low-level systems to operate autonomously while the higher-level system intervenes only at a cost that makes up part of the RL objective. This arrangement of high- and low-level abstractions is consistent with habit pathways and automatic versus controlled processing, as well as the neuroscientific idea of a ‘cost of control’.
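The two-level scheme with a control cost can be sketched in miniature. The options, action sequences, and cost value here are all made up for illustration, not taken from the paper:

```python
# A toy two-level controller: the high level selects a multi-step
# "option" (e.g. 'make coffee'); the low level then runs autonomously
# through that option's atomic actions (e.g. 'grind beans').
OPTIONS = {
    'make coffee': ['grind beans', 'boil water', 'brew'],
    'make tea': ['boil water', 'steep leaves'],
}

CONTROL_COST = 0.1  # charged each time the high level intervenes

def run_episode(high_level_choice):
    """High level picks an option once and pays a control cost; the
    low level executes its atomic actions without further intervention."""
    cost = CONTROL_COST  # one intervention, one cost
    actions_taken = list(OPTIONS[high_level_choice])
    return actions_taken, cost

actions, cost = run_episode('make coffee')
# actions == ['grind beans', 'boil water', 'brew'], cost == 0.1
```

Because the cost enters the RL objective, an agent trained this way is pushed to intervene from the top level only when the autonomous low-level behaviour is insufficient, mirroring automatic versus controlled processing.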
Apart from this, research into the neural underpinnings of social cognition is also on the rise. In the last couple of years, deep RL has entered this space too: it has been leveraged to train multiple agents in competitive team games and in tricky ‘social dilemmas’, where short-sighted selfish actions must be weighed against cooperative behaviour.
Multi-agent deep RL offers new computational leverage on this area of research in behavioural science, up to and including the neural mechanisms underlying mental models.
Enabling A Virtuous Circle
Although deep RL seems promising, the authors wrote that it is still a work in progress, and its implications for neuroscience should be seen as a great opportunity. For instance, deep RL provides an agent-based framework for studying the way that reward shapes representation, and how representation, in turn, shapes learning and decision making, two issues which together span a large swath of what is most central to neuroscience. The researchers are hopeful that, alongside increased engagement of neuroscience with deep RL research, there is also an opportunity for neuroscience research to influence deep RL, continuing the synergistic ‘virtuous circle’ that has connected neuroscience and AI for decades.