Artificial Intelligence (AI) has been one of the most discussed topics in recent times, and efforts are made every day to bring it closer to human-level capability. However, the future of AI remains uncertain, since it is hard to determine the direction in which the field is heading.
Gary Marcus, CEO and co-founder of Robust.AI and an expert in AI, has recently published a new paper titled ‘The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence’, which draws attention to a crucial fact about artificial intelligence: AI is not aware of its own operations and only functions according to certain commands within a controlled environment.
The 55-page paper expands on Marcus’ argument against Yoshua Bengio during the 2019 AI debate. In its first chapter, Marcus sheds light on how the technology currently functions and why it cannot be fully trusted.
The reason, Marcus states, is that unlike humans, an AI system is designed to perform one particular task and is not capable of performing another task using the same logic without extensive retraining.
As the paper puts it, one might contrast robust AI with, for example, narrow intelligence: systems that perform a single narrow goal extremely well (e.g. chess playing or identifying dog breeds) but often in ways that are extremely centred around a single task and not robust or transferable to even modestly different circumstances (e.g. to a board of a different size, or from one video game to another with the same logic but different characters and settings).
Cognitive Model Approach
According to Marcus, taking AI to the next level requires robust intelligence, which cannot be achieved until a deeper understanding is developed: one that involves not just the ability to correlate and discern patterns in datasets, but also the capability to approach a given problem from various angles.
Marcus further states that, unlike a human, AI has no knowledge of how the world functions, which is why a cognitive model approach is required alongside deep learning.
The cognitive approach, Marcus continues, is what sets humans apart: it allows a human to collect information from the outside world and then interpret that information for decision-making.
Cognitive models, Marcus believes, allow a human to relate new information to other information available around them. A similar kind of logic is used in video games, where an internal model of the surroundings is updated continually in response to the user’s input.
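The game-style world model described above can be sketched in a few lines. This is purely illustrative (it does not come from Marcus’ paper): an internal state of entities is updated as observations arrive, then queried to make a decision.

```python
# Illustrative sketch of a game-style internal world model:
# observations update an explicit state, and decisions are made
# by reasoning over that state rather than over raw input.
from dataclasses import dataclass, field


@dataclass
class WorldModel:
    # entity name -> position along a simple 1-D track (a toy world)
    entities: dict = field(default_factory=dict)

    def observe(self, name, position):
        """Incorporate a new observation into the internal model."""
        self.entities[name] = position

    def nearest(self, name):
        """Reason over the model: which other entity is closest to `name`?"""
        here = self.entities[name]
        distances = {other: abs(pos - here)
                     for other, pos in self.entities.items() if other != name}
        return min(distances, key=distances.get) if distances else None


model = WorldModel()
model.observe("player", 3)
model.observe("enemy", 10)
model.observe("coin", 5)
print(model.nearest("player"))  # -> coin
```

The point of the sketch is the separation Marcus argues for: the model holds structured state about the world, and the decision is drawn from that state, not from pattern-matching over raw data.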
“If our AI systems do not represent and reason over detailed, structured, internal models of the external world, drawing on substantial knowledge about the world and its dynamics, they will forever resemble GPT-2: they will get some things right, drawing on vast correlative databases, but they won’t understand what’s going on, and we won’t be able to count on them, particularly when real-world circumstances deviate from training data, as they so often do,” wrote Marcus.
Abstractness Is The Key
Talking about Google, Marcus lauded its search engine as one of the most powerful AI systems, one that mixes symbol-manipulation operations with deep learning. However, he points out that it still cannot be called a superintelligent machine.
To create a robust AI, machine learning practitioners should not simply feed the AI with information. Rather, they should try to devise a mechanism that allows the AI to learn new and abstract sets of knowledge.
Factual knowledge can easily be picked up by an AI system, but abstract knowledge is what makes the task challenging, and this, Marcus believes, is what might lead to the inception of a superintelligent machine.
The current state of AI, according to Marcus, largely amounts to memorising how the world works at the cost of consuming more and more data. This process needs to be replaced with reasoning that allows drawing inferences from previous encounters. However, reasoning cannot be attained without structured representations and records of individual entities.
The role of abstractness might even demand that developers manually translate a piece of abstract knowledge into formal logic.
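To make this concrete, here is a minimal, hypothetical example (not drawn from the paper) of hand-translating one piece of abstract knowledge into an explicit rule: the transitivity of the part-of relation, i.e. a part of a part is a part of the whole. The rule is applied over structured facts rather than hoped to emerge from raw data.

```python
# Abstract knowledge "a part of a part is part of the whole",
# hand-translated into an explicit transitivity rule over structured facts.
# Toy facts, chosen for illustration:
part_of = {("wheel", "car"), ("bolt", "wheel"), ("engine", "car")}


def closure(facts):
    """Apply the transitivity rule repeatedly until no new fact is derived."""
    facts = set(facts)
    while True:
        derived = {(a, c)
                   for (a, b) in facts
                   for (b2, c) in facts
                   if b == b2} - facts
        if not derived:
            return facts
        facts |= derived


# The rule lets the system infer a fact it was never told directly:
print(("bolt", "car") in closure(part_of))  # -> True
```

This is the flavour of symbol manipulation Marcus points to: a single manually encoded rule generalises to any set of part-of facts, with no retraining involved.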
An optimistic possibility, Marcus adds, is that reasoning may sort itself out once the prerequisites of hybrid architectures and knowledge representation are better developed; a pessimistic possibility is that researchers may need significant improvements in reasoning per se, at the very least in terms of scalability and the ability to deal with incomplete knowledge.
Marcus firmly believes that to have intelligent machines, we first need to get the architecture and knowledge representation in order.