‘Retrospective learning’ refers to learning under the assumption that the future is an extension of the past. An intelligent system may learn to name certain objects if it is shown pictures of them along with their names, and a model that employs retrospective learning will then be able to recognise and name more pictures of the same objects. However, it will not be able to name previously unencountered objects.
A paper published earlier this year argued that retrospective learning isn’t a good representation of true intelligence. According to the study, which was supported by Microsoft Research and DARPA, learning needs to be future-oriented to solve problems in the real world. Accordingly, NI (natural intelligence) and AI have to take an unknown future into account: their internal models have to adapt to naming new objects and using them in a new context. This is called ‘prospective learning.’
Prospective learning is important because many critical problems are novel experiences that come with little information, negligible probability, and high consequences. Unfortunately, such problems cause AI systems to fail, as when medical diagnosis systems cannot detect diseases that are underrepresented in the samples used to train them. Therefore, the challenge for intelligent systems is to distinguish novel experiences, discern the potentially complex ways in which they connect to past experiences, and then act accordingly.
An intelligent system that employs prospective learning will be able to make good decisions in unique situations by using past data and coming up with active solutions via an internal model of the world.
The capabilities systems need to possess to work successfully include:
Continual learning (CL)
AI systems tend to forget previously learned information while acquiring new information (a phenomenon called catastrophic interference). This is harmful because previously learned abilities are expected to be useful in the future.
Continual learning involves a model learning continuously and autonomously from a stream of data and adapting itself as new data comes in. In other words, an intelligent system will be able to recollect the aspects of the past that it believes will be useful in the future while sequentially acquiring new capabilities. Accordingly, continual learning works only if there is both backward and forward transfer of information.
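One common way to counter catastrophic interference is rehearsal: keep a small buffer of old-task samples and mix them into training batches for the new task. The sketch below (a hypothetical `ReplayBuffer` class, my illustration rather than anything prescribed by the paper) shows the idea:

```python
import random

class ReplayBuffer:
    """Toy rehearsal buffer: retains samples from earlier tasks so that
    training batches for a new task still include old-task data."""

    def __init__(self, capacity=100):
        self.capacity = capacity
        self.samples = []

    def add(self, sample):
        # Evict a random stored sample once the buffer is full.
        if len(self.samples) >= self.capacity:
            self.samples.pop(random.randrange(len(self.samples)))
        self.samples.append(sample)

    def mixed_batch(self, new_samples, k=2):
        # Combine new-task samples with k rehearsed old-task samples.
        old = random.sample(self.samples, min(k, len(self.samples)))
        return list(new_samples) + old
```

Mixing even a few rehearsed samples into each batch keeps old abilities in practice while new ones are acquired, which is the backward transfer described above.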
Constraints
Constraints, such as built-in priors and inductive biases, shrink the hypothesis space so that the intelligence has fewer candidate solutions to search through and needs less data to resolve the current problem (which translates into the generalised resolution of future problems).
These constraints are built into the system of AI and traditionally come in the form of statistical constraints and computational constraints. The former restricts the space of hypotheses to improve statistical efficiency, thereby reducing the amount of data needed to reach a particular goal. The latter seeks to improve computational efficiency by limiting the amount of space and/or time that an intelligent system has to learn and make deductions.
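As a toy illustration of a statistical constraint (my example, not the paper’s): if the hypothesis space is restricted to straight lines y = a·x + b, a closed-form least-squares fit pins the model down from just a handful of samples:

```python
def fit_line(xs, ys):
    """Least-squares fit within the constrained hypothesis space of
    straight lines y = a*x + b. With only two free parameters, very
    little data is needed to generalise."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    b = my - a * mx
    return a, b

# Three samples drawn from y = 2x + 1 suffice to recover the rule ...
a, b = fit_line([0, 1, 2], [1, 3, 5])
# ... and to extrapolate to an unseen input.
prediction = a * 10 + b  # 21.0
```

An unconstrained model (say, a high-degree polynomial) could fit the same three points in infinitely many ways and would need far more data to single out the right one.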
Constraints are necessary because intelligence has finite data, space, and time. The arbitrarily slow convergence theorem and the no-free-lunch theorem further highlight that hope for a general AI with the ability to solve all problems efficiently is misplaced.
Curiosity
Curiosity instigates an intelligent system to take actions whose payoff lies in the future rather than the present. This sort of objective-driven decision-making can be divided into two parts: (1) a goal aimed at maximising rewards; (2) a goal to maximise relevant information. Thus, the AI has to choose between learning about the world and practising behaviour that will be rewarded.
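The tension between the two goals is the classic exploration-exploitation trade-off. A minimal sketch, using epsilon-greedy action selection (a standard reinforcement-learning technique, not something the paper specifies):

```python
import random

def choose_action(q_values, epsilon, rng=random):
    """Epsilon-greedy selection: with probability epsilon take a random
    action (curiosity: gather information about the world); otherwise
    take the best-known action (exploit: maximise expected reward)."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

With epsilon = 0 the agent only exploits what it already knows; raising epsilon trades immediate reward for information that may pay off later.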
Causal estimation
Causal estimation allows the intelligent system to learn the structure of relations, which can help it choose actions to bring about specific outcomes. In other words, it enables the AI to identify how one event (the cause) results in another (the effect).
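The difference between observing and intervening can be shown with a toy structural model (entirely illustrative; the variable names and effect sizes are invented). A hidden confounder Z drives both the treatment X and the outcome Y, so only the intervention do(X = x) isolates the true causal effect:

```python
import random

def average_outcome(do_treat=None, n=10000, seed=0):
    """Toy structural causal model. A confounder Z raises both the chance
    of treatment X and the outcome Y; the true causal effect of X on Y is 2.
    Passing do_treat simulates the intervention do(X = do_treat)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.random() < 0.5                      # hidden confounder
        if do_treat is None:
            x = rng.random() < (0.9 if z else 0.1)  # Z influences X
        else:
            x = do_treat                            # intervention severs Z -> X
        y = 2.0 * x + 3.0 * z                       # Z also influences Y
        total += y
    return total / n

# Contrasting the two interventions recovers the causal effect of X on Y.
effect = average_outcome(do_treat=True) - average_outcome(do_treat=False)
```

Naively comparing treated and untreated units in observational data would mix in Z’s contribution; the interventional contrast recovers the causal effect of X alone (about 2 here).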
Today, ML has the ammo required for retrospective learning, including statistics, algorithms, and mathematics. But for ML to successfully master prospective learning, we need to employ prospective learning ourselves to imagine potential futures we haven’t experienced. This will require a much more expansive group of people working on the problem, and perspectives from disciplines such as biology, ecology, and philosophy.