Machine learning and artificial intelligence technologies have evolved significantly over the years. Concepts such as neural networks, which date back to the 1950s, are now being practically implemented in ML systems, and techniques such as backpropagation, support vector machines, and deep learning have built on these foundations.
With all of these developments, the concept of machine reasoning has also come into existence: the idea that machines can arrive at an appropriate solution to a problem by applying reasoning to previous knowledge. In fact, this is the basis for AI. However, ML systems often do not comprehend the way humans do, even though they analyse large amounts of information. Infusing human-like intuition and reasoning into AI has always been a challenging task. In this article, we discuss the context of reasoning in AI and ML systems, where pre-set logic and inferences prevail in these intelligent configurations.
Reasoning in humans goes beyond the limits of formal logic and the inferences drawn from it. Reasoning, in simple terms, refers to conclusions derived by inferring from information. For example, when we listen to music, we sense the melody and interpret it in formal musical terms (notes, lyrics and so on). There is no clear account of why and how this interpretation happens so quickly; the underlying brain activity may span thousands of logical propositions.
This poses the problem of wholly integrating cognitive tasks into ML systems. Mathematical and statistical functions alone cannot accomplish this mammoth task; a vast amount of reasoning-related knowledge would have to be mapped into those functions to achieve a genuinely human-like outcome.
Reinforcing Reasoning In A Machine
In a research paper, Léon Bottou, an ML researcher known for his work on stochastic gradient algorithms and data compression in ML, argues that significant reasoning in ML can be achieved through "algebraically manipulating previously acquired knowledge in order to answer a question". In the paper, titled From Machine Learning To Machine Reasoning, he elaborates on a concept called 'auxiliary tasks', which means supporting an ML task with another ML task that has similar inputs. He illustrates this in the context of image recognition:
“Consider the task of identifying persons from face images. Despite the increasing availability of collaborative image tagging schemes, it certainly remains expensive to collect and label millions of training images representing the face of each subject with a good variety of positions and contexts. However, it is easy to collect training data for the slightly different task of telling whether two faces in images represent the same person or not:
- Two faces in the same picture are likely to belong to different persons
- Two faces in successive video frames are likely to belong to the same person
These two tasks have much in common: image analysis primitives, feature extraction, part recognisers trained on the auxiliary task can certainly help solve the original task.”
The paper presents a novel design based on this proposition of an auxiliary task. Such auxiliary ML systems have been applied even to areas such as natural language processing, and their performance proved faster and more efficient than traditional ML systems when it comes to reasoning.
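As a toy illustration of the sharing at the heart of an auxiliary task (not Bottou's implementation), the sketch below uses one hypothetical linear embedding for both the cheap auxiliary task (are two faces the same person?) and the expensive target task (which person is this?). All names, dimensions, and the threshold are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared feature extractor: a toy 8-dim "face vector" -> 4-dim embedding.
# In the auxiliary-task framing, this shared part is what transfers from
# the verification task to the identification task.
W = rng.standard_normal((4, 8))

def embed(x):
    return W @ x

def same_person(a, b, threshold=1.0):
    """Auxiliary task: do two face vectors show the same person?"""
    return np.linalg.norm(embed(a) - embed(b)) < threshold

def identify(x, gallery):
    """Target task: nearest labelled embedding wins."""
    dists = {name: np.linalg.norm(embed(x) - embed(g))
             for name, g in gallery.items()}
    return min(dists, key=dists.get)

face = rng.standard_normal(8)
gallery = {"alice": face + 0.01 * rng.standard_normal(8),  # near-duplicate
           "bob": rng.standard_normal(8)}                  # unrelated face
print(identify(face, gallery))
```

The point of the sketch is structural: `embed` is trained (here, fixed) once, and both tasks reuse it, which is why cheap verification pairs can subsidise expensive identification labels.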
Furthermore, Bottou contrasts reasoning systems with the statistical models that go into an ML system: reasoning systems differ in how they derive predictions from data and in the computation they demand. He also cites other ingredients at play, such as causal reasoning and Newtonian mechanics, among others. Bringing all of these together would yield advanced reasoning capabilities and narrow the gap between ML and machine reasoning. The better the reasoning capabilities, the more intelligent the machine is in terms of human cognition.
Recently, researchers at Google's DeepMind came up with a novel architecture called relation networks, which relies on relational reasoning, a basis of human intelligence. They applied these networks to tasks such as visual question answering on the CLEVR dataset, and showed that they can answer questions with reasoning close to humans (with 95 percent accuracy in results).
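The core structure of a relation network can be sketched in a few lines. The functions `g` and `f` below are hypothetical toy stand-ins for the paper's learned MLPs, but the shape follows the published design: apply a relation function `g` to every pair of objects (conditioned on the question), sum the results, and pass the pooled representation to `f`.

```python
import numpy as np

def g(o_i, o_j, question):
    # Relate one pair of objects, conditioned on the question.
    # (A stand-in for the paper's learned MLP over concatenated inputs.)
    return np.tanh(o_i + o_j + question)

def f(pooled):
    # Collapse the pooled pair representations into a scalar answer score.
    return float(pooled.sum())

def relation_network(objects, question):
    pairs = [g(o_i, o_j, question)
             for i, o_i in enumerate(objects)
             for j, o_j in enumerate(objects) if i != j]
    return f(np.sum(pairs, axis=0))

objects = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
question = np.array([0.1, 0.2])
score = relation_network(objects, question)
```

Because the pair representations are summed, shuffling the objects leaves the answer unchanged: the network reasons about relations between objects, not their positions.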
With algorithms such as relation networks, reasoning in machines can be boosted dramatically to solve complex real-world problems.
Towards A Sophisticated AI/ML System
All of the facts mentioned above need to be considered to build a fully functioning AI system that comes close to human reasoning. For reasoning to be wholly present in AI, the system should have four characteristics:
- Learning: The system has to be fed information that it can recognise later for the task, and it must show the capability of learning on its own.
- Knowledge pool (storage): All of the information fed to the AI has to be stored for later reference. This storage acts as the basis for reasoning.
- Process engine: This forms the processing part of the reasoning-related information. The AI system should derive a 'reasoned' solution to the unique problems it encounters.
- Problem solving: The AI should make the best use of its knowledge to solve new and unknown problems. In other words, it should act as a dynamic system altogether.
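The four characteristics can be tied together in a minimal sketch, here as a toy symbolic rule-chainer. All names are hypothetical, and a real system would learn its rules rather than have them hard-coded; the sketch only shows how learning, storage, a process engine, and problem solving interlock.

```python
class ReasoningAgent:
    def __init__(self):
        self.facts = set()   # knowledge pool (storage)
        self.rules = []      # (premise, conclusion) pairs

    def learn(self, fact=None, rule=None):
        # Learning: new facts and rules enter the knowledge pool.
        if fact:
            self.facts.add(fact)
        if rule:
            self.rules.append(rule)

    def infer(self):
        # Process engine: simple forward chaining until no rule fires.
        changed = True
        while changed:
            changed = False
            for premise, conclusion in self.rules:
                if premise in self.facts and conclusion not in self.facts:
                    self.facts.add(conclusion)
                    changed = True

    def solve(self, query):
        # Problem solving: answer a query the agent was never told directly.
        self.infer()
        return query in self.facts

agent = ReasoningAgent()
agent.learn(fact="rain")
agent.learn(rule=("rain", "wet_ground"))
agent.learn(rule=("wet_ground", "slippery"))
print(agent.solve("slippery"))  # True: chained from "rain"
```

The agent was never told the ground is slippery; it derives that conclusion by chaining stored rules, which is the "reasoned solution to unique problems" the list above describes.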
Realistically, it would be wrong to treat ML applications as just a collection of mathematical and statistical techniques. An ML system should also observe and analyse changes in the environment in which it has been implemented, and this is where machine reasoning helps to a great extent. The studies above have explored ways to bring reasoning into ML without adding computational burden or making the systems even more complex.