How do animals function so well in the early stages of life without any supervised training data? Humans arguably spend more time learning than any other animal; a squirrel, by contrast, can jump from one branch to another within months of birth.
In neuroscience, “learning” refers to a long-lasting change in behaviour that results from experience. The definition differs in AI, where we speak of supervised and unsupervised learning. Here, we will discuss whether the innate mechanism in animals is the biggest factor behind their fast learning, or whether it is the outcome of supervised/unsupervised learning, and how AI can leverage the idea.
Artificial Neural Networks (ANNs) rely centrally on supervised learning for image classification. Much of the progress in ANNs has come from larger input datasets and faster computation over them. While ANNs can arguably mimic an animal’s ability to infer from data, the amount of labelled data an animal receives from birth is tiny compared with the data ANNs train on.
Supervised algorithms require labelled data for training. On the other hand, unsupervised algorithms can possibly exploit large amounts of raw sensory data to generate visual representations.
The raw visual data animals receive is also shaped by implicit or explicit constraints that make the classification of images easier, so the process cannot be labelled completely “unsupervised”. A child, for example, may be exposed to a vast stream of visual input yet encounter only a small amount of labelled data, and is still able to generalise, thanks to an innate, self-supervised mechanism.
The neurological networks present in nature have evolved a powerful unsupervised algorithm to process this large dataset; if that algorithm could be learned, it could lay the foundation for advances in ANNs.
Innate mechanism vs learned behaviour
In animals, innate characteristics and learned behaviour are hard to tell apart. A monkey, for example, shows a preference for faces from first exposure, reflecting an innate mechanism for perceiving salient features. At the same time, by living among familiar faces, the same monkey learns to recognise them over time and shows the same preference it displayed when first exposed to a face.
Since most of the information animals receive comes through experience and sensory input, the question is whether machine learning algorithms can wire data in a way that makes it part of an AI’s “innate” mechanism. This is where the path toward Artificial General Intelligence (AGI) may be paved, using convolutional neural networks (CNNs) together with reinforcement and self-supervised learning algorithms.
Reinforcement learning trains an AI system using a single numerical value as a reward for the output it generates. It differs from supervised learning in that there is no pre-decided answer in the input data; instead, the reinforcement agent decides the next step for the given task. It can therefore be argued that evolution is itself a reinforcement learning mechanism operating across generations, as a species “learns” innately from the outcomes its progeny produces.
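The idea that a single scalar reward, with no pre-decided answers, is enough to shape behaviour can be sketched with tabular Q-learning. Everything here is an illustrative assumption, not from the article: a toy five-state chain where the agent, supervised only by a reward at the far end, learns to walk right.

```python
import random

N_STATES = 5          # states 0..4; reaching state 4 yields the reward
ACTIONS = [-1, +1]    # step left or right

def step(state, action):
    """Deterministic toy environment: move, clip to bounds, reward at goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

def train(episodes=300, alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy: mostly exploit, sometimes explore
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            nxt, r = step(s, a)
            # Q-learning update: the scalar reward is the only supervision
            q[(s, a)] += alpha * (r + gamma * max(q[(nxt, b)] for b in ACTIONS) - q[(s, a)])
            s = nxt
    return q

q = train()
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # the learned greedy policy steps right toward the reward
```

No state is ever labelled with a “correct” action; the preference for stepping right emerges purely from propagated reward, which is the contrast with supervised learning the paragraph above draws.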
Animals are not blank slates
We have established that, according to neuroscience, animals have an innate structure that provides a base on which learning can occur. In AI research this corresponds to meta-learning or inductive biases. It underlines the importance of research into “transfer learning” in ANNs, in which connections pre-trained on one task are transferred to speed up learning on a different task.
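Transfer learning can be sketched in a few lines of plain Python. The details are assumptions for illustration only: a hidden feature layer stands in for pre-trained connections (its weights are written out by hand here, in place of an actual pre-training run), is kept frozen, and only a small new output head is trained on a different task (XOR), which those inherited features make linearly separable.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Frozen "pre-trained" feature layer: an AND-like and an OR-like unit.
# Inputs are (x1, x2) plus a constant bias input of 1.
W_FROZEN = [[10.0, 10.0, -15.0],   # ~AND detector
            [10.0, 10.0, -5.0]]    # ~OR detector

def features(x):
    return [sigmoid(sum(w * xi for w, xi in zip(row, x + [1.0])))
            for row in W_FROZEN]

def train_head(data, epochs=2000, lr=1.0):
    """Train only a logistic-regression head on top of the frozen features."""
    v = [0.0, 0.0, 0.0]  # weights for (AND-ish, OR-ish, bias)
    for _ in range(epochs):
        for x, y in data:
            h = features(x) + [1.0]
            p = sigmoid(sum(vi * hi for vi, hi in zip(v, h)))
            g = p - y  # gradient of the logistic loss w.r.t. the logit
            v = [vi - lr * g * hi for vi, hi in zip(v, h)]
    return v

xor_data = [([0.0, 0.0], 0), ([0.0, 1.0], 1), ([1.0, 0.0], 1), ([1.0, 1.0], 0)]
v = train_head(xor_data)
preds = [round(sigmoid(sum(vi * hi for vi, hi in zip(v, features(x) + [1.0]))))
         for x, _ in xor_data]
print(preds)  # XOR learned by training only the small output head
```

XOR is not linearly separable in the raw inputs, so the tiny head could never learn it alone; the inherited features play the role of the innate structure on which new learning occurs.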
CNNs, which were inspired by the visual cortex, suggest there are countless ways the visual world can be represented when trained with unsupervised learning algorithms. CNNs have been able to group an unlabelled set of images by their content, solving what is known as the image-set clustering problem.
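A common recipe for image-set clustering is to pass each image through a CNN and cluster the resulting feature vectors, for instance with k-means. The sketch below makes an assumption for brevity: tiny 2-D vectors stand in for CNN embeddings, while the k-means step itself is the real algorithm.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: alternate nearest-center assignment and mean update."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # assign each point to its nearest center
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            groups[i].append(p)
        # move each center to the mean of its assigned points
        centers = [[sum(c) / len(g) for c in zip(*g)] if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Two "content" clusters standing in for, say, cat and dog embeddings.
points = [[0.1, 0.2], [0.0, 0.1], [0.2, 0.0],
          [0.9, 1.0], [1.0, 0.9], [0.8, 1.1]]
centers, groups = kmeans(points, k=2)
sizes = sorted(len(g) for g in groups)
print(sizes)  # both content groups recovered without any labels
```

No labels appear anywhere in the pipeline; the grouping falls out of the geometry of the feature space, which is exactly the unsupervised classification the paragraph above describes.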
According to AI researchers, self-learning algorithms, apart from mimicking brains, should also encompass the innate mechanisms of living beings. This raises ethical questions regarding the study of animals as well as gives an opportunity to analyse animal behaviour using AI.
Granting general intelligence (common sense) to computers remains one of the biggest endeavours in AI. One of the biggest obstacles to instilling intelligence in AI is the dependency on language: in supervised learning, the input data is coupled with information about that data, which acts like a language in which the data can be explained.
AI on animals
Researchers are already using AI to monitor animal behaviour. The obtained data can be used to further the progress of the development of AGI.
AI researchers have been comparing the technology’s capabilities with animals’ cognitive ability, general intelligence, problem-solving skills, and ability to synthesise information in experiments like the Animal-AI Olympics. In this way, animals serve as a benchmark for measuring the ability of AI to handle complex tasks. Though AI systems like Pluribus, AlphaGo, and AlphaZero have been able to outperform animals and humans at particular tasks, they show very little understanding of other tasks.
Since intelligence includes elements like social cognition and behaviour, it cannot be reduced to a single, narrowly defined task. Hence, researchers argue in favour of the embodiment hypothesis, since a computer alone cannot reproduce intelligence in all its richness.
Humans are not the only beings that infer information through reinforcement and unsupervised learning; animals, too, learn to do things from an early age. With unsupervised learning recently evolving into self-supervised learning, combined with CNNs applied to animal behaviour, letting computers infer from data by themselves seems a plausible goal.