Key Reason Why Researchers Want AI Systems To Learn Like A Child


Artificial intelligence is among the most promising technologies of recent years, generating mass interest and sparking a frenzy of investment and research. Riding high on the successes of machine learning, many researchers are now pushing the limits of what a naïve AI can do. The breakthrough they are chasing is getting machines to learn the way children do, in a bid to build smarter machines.



NYU computer scientist Gary Marcus has consistently argued for building more innate machinery into AI, so that systems can learn the way children do: by interacting with their environment, without relying on millions of examples. Babies and toddlers learn to recognise objects autonomously. Duplicating that intuition and representation in AI systems has been a key area of research among cognitive scientists, roboticists and machine learning researchers.

For example, computer scientists at DeepMind in London have developed interaction networks, a neural approach to relational reasoning: the child's ability to draw logical conclusions that is key to human intelligence. Two recent papers explore the ability of deep neural networks to perform complicated relational reasoning over unstructured data. The foundation for building AI systems with the flexibility and efficiency of human cognition is giving them a similar ability: to reason about entities and their relations from unstructured data.
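The pairwise scheme behind this line of work can be sketched in a few lines. Below is a minimal, illustrative version of the relation-network idea: a network g_theta scores every ordered pair of objects, the scores are summed, and a second network f_phi maps the aggregate to an answer. The tiny random MLPs here are stand-ins for learned networks, and all names and sizes are illustrative assumptions, not DeepMind's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(in_dim, out_dim):
    """A tiny random two-layer MLP, standing in for a learned network."""
    w1 = rng.normal(size=(in_dim, 16))
    w2 = rng.normal(size=(16, out_dim))
    return lambda x: np.maximum(x @ w1, 0.0) @ w2

obj_dim, out_dim = 4, 3
g_theta = mlp(2 * obj_dim, 8)   # scores one pair of objects
f_phi = mlp(8, out_dim)         # maps the aggregated scores to an answer

def relation_network(objects):
    """Sum g_theta over all ordered object pairs, then apply f_phi."""
    pair_sum = sum(
        g_theta(np.concatenate([o_i, o_j]))
        for o_i in objects for o_j in objects
    )
    return f_phi(pair_sum)

objects = rng.normal(size=(5, obj_dim))   # five "objects" from a scene
answer = relation_network(objects)
print(answer.shape)  # (3,)
```

Because the pairwise scores are summed, the output does not depend on the order in which the objects are presented, which is what makes the approach a reasoner over sets of entities rather than sequences.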

According to Demis Hassabis, CEO of DeepMind, the human brain is the only existing proof that general intelligence is possible, and it can inspire researchers on how to build it. Neuroscience is increasingly serving as a source of inspiration for new types of algorithms.

In a similar vein, generative adversarial networks (GANs) have been hailed as an exemplary breakthrough in deep learning. Devised by Ian Goodfellow, GANs are a class of algorithms used in unsupervised machine learning that has been at the forefront of research on generative models. Today, GANs are deployed in computer vision applications for image synthesis, synthetic data generation for visual recognition, and visual domain adaptation.
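The adversarial setup can be illustrated with a toy one-dimensional example: a generator maps noise to samples, a discriminator tries to tell real samples from generated ones, and the two are updated in alternation. The sketch below uses finite-difference gradients in place of backpropagation, with a two-parameter generator and discriminator; everything here is an illustrative assumption, not Goodfellow's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Real data: samples from N(3, 1). The generator G(z) = a*z + c must learn
# to map N(0, 1) noise onto this distribution; D is a logistic classifier.
real = rng.normal(3.0, 1.0, size=256)

def d_out(params_d, x):
    w, b = params_d
    return sigmoid(w * x + b)

def d_loss(params_d, params_g, z):
    a, c = params_g
    fake = a * z + c
    eps = 1e-8
    return (-np.mean(np.log(d_out(params_d, real) + eps))
            - np.mean(np.log(1.0 - d_out(params_d, fake) + eps)))

def g_loss(params_d, params_g, z):
    a, c = params_g
    fake = a * z + c
    return -np.mean(np.log(d_out(params_d, fake) + 1e-8))

def grad(f, params, *args, h=1e-5):
    """Finite-difference gradient, standing in for backpropagation."""
    g = np.zeros_like(params)
    for i in range(len(params)):
        up, down = params.copy(), params.copy()
        up[i] += h
        down[i] -= h
        g[i] = (f(up, *args) - f(down, *args)) / (2 * h)
    return g

d_params = np.array([0.1, 0.0])   # discriminator weight, bias
g_params = np.array([1.0, 0.0])   # generator scale, shift

for step in range(500):           # alternating adversarial updates
    z = rng.normal(size=64)
    d_params -= 0.05 * grad(lambda p, z: d_loss(p, g_params, z), d_params, z)
    g_params -= 0.05 * grad(lambda p, z: g_loss(d_params, p, z), g_params, z)

# The generator's shift parameter drifts toward the real mean of 3.
print(g_params)
```

The point of the sketch is the alternation: the discriminator's loss rewards separating real from fake, while the generator's loss rewards fooling the discriminator, and training is a tug-of-war between the two objectives.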

Why Are Researchers Impatient For This ‘Learning’ To Start?

  • Alan Turing’s 1950 paper not only defined a way to test a machine’s ability to think like a human, it also suggested that the route to intelligence was to design a machine that learned like a child, not an adult. Over the last 15 years, computer scientists and cognitive scientists have been researching how children learn information and concepts from their immediate environment so quickly, and how to design a machine that could perhaps do the same.
  • Given how AI is expected to take on tougher tasks that require flexibility and common sense, computer scientists are ramping up research into building machines that come closer to intuition.
  • With developed economies rushing to embrace robotics, neuroscientists, roboticists and psychologists are researching ways to build machines that can emulate spontaneous development, pick up basic vocabulary and exhibit social behaviour.
  • Human brains are known to be prediction machines: neuroscientists and psychologists believe that human activities such as learning and perception are based on predictive processing, as researchers from Indiana University indicate. There is a similar drive to build AI systems that minimise prediction errors and have basic social abilities baked in.
  • The biggest leap AI researchers can take is to get machines to learn like children and to advance machine reasoning to the point where it can make inferences.
  • AI applications are being improved to allow humans to focus on higher-value tasks and to augment productivity.
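The predictive-processing idea in the list above can be illustrated with a toy model: an agent holds a prediction about a noisy sensory stream and repeatedly nudges that prediction to shrink its prediction error. The scalar signal and learning rate below are illustrative assumptions, not a model from the Indiana University research.

```python
import numpy as np

rng = np.random.default_rng(0)

# A noisy sensory stream with a hidden "true" value the agent cannot see.
true_value = 4.0
observations = true_value + rng.normal(0.0, 0.5, size=200)

prediction, lr = 0.0, 0.1
errors = []
for obs in observations:
    error = obs - prediction          # prediction error
    prediction += lr * error          # update belief to shrink the error
    errors.append(abs(error))

# Early errors are large; later errors shrink toward the noise floor,
# and the prediction settles near the hidden true value.
print(prediction)
```

This is the core loop that predictive-processing accounts attribute to the brain: perception and learning as the continual minimisation of the gap between what was expected and what arrived.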

Long Road To Intuitive Machines

This ongoing research is part of a human-centric AI strategy to replicate human cognition, perception and reasoning. Another area of research gaining credence is getting machines to understand new concepts from very little data, just as children do. Researchers like the University of Toronto’s Ruslan Salakhutdinov have been working to shorten the learning process through a range of experiments. According to researcher Joshua Tenenbaum, children grasp concepts from a single example. “But we are far from building machines which are smart like a child,” he was quoted as saying by Phys.org. In fact, data science labs in renowned universities across the globe are doing significant research into understanding how children learn language and into developing AI algorithms that learn like children.
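Learning a concept from a single example, as Tenenbaum describes, can be sketched with a standard few-shot baseline: nearest-neighbour classification against one stored example per class. This is an illustrative baseline, not the specific method of the researchers cited, and the feature vectors are made up.

```python
import numpy as np

def one_shot_classify(support, query):
    """Classify query by its nearest single labelled example.

    support: dict mapping label -> the one feature vector seen for that class.
    """
    return min(support, key=lambda label: np.linalg.norm(query - support[label]))

# One example per class is all the "training data" the model ever sees.
support = {
    "cat": np.array([1.0, 0.2, 0.1]),
    "dog": np.array([0.1, 1.0, 0.9]),
}
print(one_shot_classify(support, np.array([0.9, 0.3, 0.2])))  # cat
```

The interesting research question is not this classifier itself but the representation it runs on: with good enough features, a single example per class goes a long way, which is one reading of how children manage it.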

Why are we still far from intelligent machines that can learn concepts like children? Alison Gopnik, professor of psychology at the University of California, Berkeley, explains that children seek out and play with the objects that have the most to teach them; in other words, they know how to extract the right data and to process just that data. Machines, on the other hand, have to be fed millions of examples before they can recognise a single image of a cat or a dog accurately. This brings us to deep learning’s data problem, which has been addressed with techniques like transfer learning.
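Transfer learning, mentioned above as one answer to the data problem, can be sketched as follows: reuse a frozen "pretrained" feature extractor and train only a small new head on a handful of labelled examples. The random projection standing in for a pretrained network, and all sizes and learning rates, are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained network's frozen feature extractor: in practice
# this would be, say, a CNN trained on millions of images.
W_pretrained = rng.normal(size=(8, 4))

def extract_features(x):
    return np.maximum(x @ W_pretrained, 0.0)   # frozen: never updated

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Only a small handful of labelled examples exist for the NEW task.
X = rng.normal(size=(20, 8))
y = (X[:, 0] > 0).astype(float)

# Train just the new head (a logistic regression) on top of frozen features.
feats = extract_features(X)
w, b = np.zeros(4), 0.0
for _ in range(300):
    p = sigmoid(feats @ w + b)
    w -= 0.1 * feats.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

# Training accuracy of the small head on the 20 available examples.
acc = np.mean((sigmoid(feats @ w + b) > 0.5) == (y == 1.0))
print(acc)
```

Because only the head's handful of parameters are trained, a few labelled examples can suffice, which is exactly the property that makes transfer learning a partial remedy for deep learning's appetite for data.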
