AI has taken on a major role in research and development and is steadily becoming present in an overwhelming number of everyday scenarios. While we encounter AI-based tools every day (just say "Hey Siri" or "Alexa" if you disagree), we have yet to encounter nearly as many robots.
To close this gap, researchers are on a quest to fuse AI and robotics, creating intelligent machines that can both make decisions and control a physical body. One of the organisations undertaking this task is DeepMind. In a feature with IEEE Spectrum, DeepMind's head of robotics, Raia Hadsell, discusses the consequences of a ten-year gap between computer vision and robotics development and the organisation's attempts to bridge it.
The Dataset Problem
The biggest challenge these companies face is gathering the huge datasets needed to extend AI to new categories of applications: answering profound scientific questions, driving vehicles, performing basic household tasks, working across various sectors and more. A neural network is only as good as the quality and quantity of its training data, and the field's recent successes owe much to enormous datasets.
For instance, DeepMind's AlphaGo, which managed to beat a grandmaster at the ancient board game of Go, was trained on a dataset of hundreds of thousands of human games and on the millions of games it played against itself in simulation.
But AI-robotics collaboration faces exactly this problem: such large datasets are not available for training a robot, because mistakes cannot be erased as easily in a robot as they can in a neural network. Thousands of Go games can be simulated in a few minutes by running them in parallel across many CPUs, but a robot that takes three seconds to pick up a cup can only repeat the task a few times per minute. Similarly, a neural network can misclassify a million images in the early stages of training with no lasting harm, but if a robot learning to walk falls even a few hundred times, it will not be in a condition to continue.
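The scale of this gap is easy to quantify. The sketch below is a back-of-envelope comparison of data-collection rates; the three-second cup grasp is from the article, while the simulation figures are purely illustrative assumptions.

```python
# Illustrative data-collection rates: simulation vs a real robot.
# The simulation numbers below are assumptions, not measured values.
SIM_GAMES_PER_CPU_PER_MIN = 500   # assumed simulated games per CPU per minute
NUM_CPUS = 1000                   # assumed parallel simulation workers

sim_samples_per_minute = SIM_GAMES_PER_CPU_PER_MIN * NUM_CPUS

GRASP_SECONDS = 3                 # one cup-grasp attempt on a real robot
robot_samples_per_minute = 60 // GRASP_SECONDS  # at most 20 attempts/min

ratio = sim_samples_per_minute / robot_samples_per_minute
print(f"simulation collects ~{ratio:,.0f}x more samples per minute")
```

Under these assumptions the simulated setup gathers tens of thousands of times more experience per minute than the physical robot, which is why dataset size is the bottleneck.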
While real-world data is ‘insurmountable’, DeepMind is working to gather the data for robots. DeepMind and other robotics researchers are using a simulation-based technique – sim-to-real – to find a way around the data problem. OpenAI managed to successfully train a robot hand in solving a Rubik’s Cube. Having said this, the simulation technique comes with its major limitations owing to simulations being too perfect and too removed from the complex real world.
The Catastrophic Forgetting Problem
Another profound problem is catastrophic forgetting, where an AI tends to forget its old tasks on learning new ones. This arises from the standard way neural networks are trained for classification: the network's weights are adjusted to tell correct responses from incorrect ones on the current training data, with nothing preserving what earlier training established. Thus, if you first train a neural network to differentiate between dogs and cats and then train it to differentiate between animals and vehicles, it will no longer distinguish dogs from cats; it will only see them as animals. This is a key reason neural networks with flexible, humanlike intelligence remain out of reach, preventing them from adapting to the real world.
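A toy experiment makes the effect concrete. The sketch below (illustrative, not DeepMind's code) trains a single shared weight on task A, then on task B, and shows that the second phase of training overwrites what was learned for the first.

```python
# Toy illustration of catastrophic forgetting: one shared parameter,
# trained on task A and then on task B with plain gradient descent.
def train(w, target, lr=0.1, steps=100):
    """Gradient descent on the squared error (w - target)**2."""
    for _ in range(steps):
        w -= lr * 2 * (w - target)
    return w

w = 0.0
w = train(w, target=2.0)       # task A: learn w close to 2
err_a_before = (w - 2.0) ** 2  # essentially zero

w = train(w, target=-1.0)      # task B: learn w close to -1
err_a_after = (w - 2.0) ** 2   # task A performance has collapsed

print(err_a_before, err_a_after)
```

After task B, the weight sits near task B's optimum and the error on task A has grown from near zero to roughly 9: nothing in plain gradient descent protects old knowledge.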
DeepMind’s Solution Techniques
Elastic Weight Consolidation
One way around the problem is to silo off each skill: train the neural network on one task, save the network's weights to storage, then train it on a new task and save those weights elsewhere.
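In code, the siloing idea amounts to keeping a separate copy of the weights per task and swapping them in as needed. A minimal sketch, with stand-in "training":

```python
# Sketch of the siloing approach (illustrative): one saved weight set
# per task, so no task can overwrite another.
task_weights = {}  # task name -> saved weights

def train_task(name, init):
    # Stand-in for real training: here "training" just shifts weights.
    weights = [v + 1.0 for v in init]
    task_weights[name] = weights       # silo this task's weights
    return weights

shared_init = [0.0, 0.0]
train_task("dogs_vs_cats", shared_init)
train_task("animals_vs_vehicles", shared_init)

# Each task's skills are preserved, but nothing is shared between them.
print(task_weights["dogs_vs_cats"])
```

The drawback is visible in the structure itself: every task gets an isolated weight set, so nothing learned on one task can help with the next.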
Another approach, developed at DeepMind by Hadsell, is "elastic weight consolidation." Upon finishing a task, the neural network assesses which of the synapse-like connections between its neuron-like nodes were important for that task. It then partially freezes their weights, protecting them from change, while the other weights continue to learn as usual.
“Now, when your Pong-playing AI learns to play Pac-Man, those neurons most relevant to Pong will stay mostly in place, and it will continue to do well enough on Pong. It might not keep winning by a score of 20 to zero, but possibly by 18 to 2,” Hadsell explained to IEEE Spectrum.
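The mechanism behind this behaviour is a quadratic penalty that anchors important weights near their old values while new training pulls on them. A minimal one-weight sketch (illustrative, not DeepMind's implementation; the importance value and learning rate are assumptions):

```python
# Elastic weight consolidation on a single weight: minimise the new
# task's loss plus a penalty that anchors the weight to its old value,
# scaled by how important it was for the old task.
def train_ewc(w, target, w_anchor, importance, lam=1.0, lr=0.01, steps=300):
    """Minimise (w - target)**2 + lam * importance * (w - w_anchor)**2."""
    for _ in range(steps):
        grad = 2 * (w - target) + 2 * lam * importance * (w - w_anchor)
        w -= lr * grad
    return w

w_a = 2.0  # weight learned on task A (e.g. Pong)
# High importance: the weight barely moves toward task B's optimum (-1).
w_protected = train_ewc(w_a, target=-1.0, w_anchor=w_a, importance=10.0)
# Zero importance: plain training, and task A is forgotten.
w_free = train_ewc(w_a, target=-1.0, w_anchor=w_a, importance=0.0)

print(w_protected, w_free)  # protected stays near 2, free ends near -1
```

The protected weight settles at a compromise between the two tasks, which is exactly the "18 to 2 instead of 20 to zero" trade-off Hadsell describes.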
Progress and Compress
The catch is that the weights become more and more inelastic with every new task learned, leaving the network increasingly fixed as time goes on, just as with a human being. DeepMind is working on this problem too and believes it is fixable through the "progress and compress" technique, which combines three relatively recent ideas: progressive neural networks, knowledge distillation, and elastic weight consolidation. Here, instead of a single neural network training on one task directly after another, the system works on one task, freezes its connections when finished, moves that network into storage, and creates a new neural network for the next task. Since the past task's training is frozen, it cannot be forgotten, and the AI can still draw on skills learnt from the old connections during new training.
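Structurally, the progressive part of the technique looks like the following sketch (illustrative pseudo-training, not DeepMind's code): finished columns are frozen and stored, and a fresh trainable column is created for each new task while the frozen ones remain available.

```python
# Structural sketch of the progressive-network part of
# "progress and compress": freeze finished columns, start fresh ones.
frozen_columns = []  # trained, immutable networks

def finish_task(column):
    frozen_columns.append(tuple(column))  # freeze (immutable) and store

def new_active_column():
    return [0.0, 0.0]                     # fresh trainable weights

col = new_active_column()
col = [w + 1.0 for w in col]              # stand-in for training on Pong
finish_task(col)

col = new_active_column()                 # now free to train on Pac-Man;
# the frozen Pong column is untouched and its outputs can still be
# fed forward as features for the new task.
print(frozen_columns[0])
```

Freezing makes forgetting impossible by construction; the cost, addressed next, is that columns pile up and skills only flow forward, not backward.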
Additionally, to overcome the lack of backwards transfer, where the model cannot transfer skills from new tasks back to old ones, Hadsell applied "knowledge distillation," a technique from British-Canadian computer scientist Geoffrey Hinton. It takes several neural networks trained on a task and compresses them into a single network that averages their predictions. This reduces the system from several networks to two:
- The active column – the network that learns each new game. It is trained on new tasks in the progress phase, and its connections are then added to the knowledge base.
- The knowledge base – the network that holds all the learning from previous games, averaged out. The active column's connections are distilled into it in the compress phase.
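The compress phase can be sketched as plain regression onto averaged teacher outputs, which is the core of Hinton-style distillation. In the illustrative example below, two "teacher" models are compressed into a one-parameter "student"; the models and numbers are assumptions for demonstration.

```python
# Sketch of knowledge distillation: train a single student to
# reproduce the average prediction of several teachers.
def average_predictions(teachers, x):
    outs = [t(x) for t in teachers]
    return sum(outs) / len(outs)

# Two "teachers": trivial linear models trained on the same task.
teachers = [lambda x: 1.8 * x, lambda x: 2.2 * x]

# Distil into a one-parameter student by regressing on the average.
w, lr = 0.0, 0.05
for _ in range(200):
    for x in [1.0, 2.0, 3.0]:
        target = average_predictions(teachers, x)   # equals 2.0 * x
        w -= lr * 2 * (w * x - target) * x          # squared-error step

print(w)  # converges to 2.0, the average of the two teachers
```

The student ends up a single compact network carrying the combined behaviour, which is what lets the knowledge base stay one network no matter how many tasks have been absorbed.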
Round-up of Techniques
To keep the knowledge base itself from suffering catastrophic forgetting, Hadsell uses elastic weight consolidation again. Because the system has two networks, it avoids the eventual wholesale freezing of connections that plain elastic weight consolidation causes. The arrangement also allows, and encourages, a large knowledge base, since a few protected weights do no harm there, and it permits a small active column without reintroducing catastrophic forgetting.
Fusing AI and robotics is a long-term hope on the road to general intelligence. For DeepMind and Hadsell, the plan is for algorithms and robots to learn and cope with different problems across many spheres.