“Strength means resistance not to bending or to other deformations, but to actual breakage, however the shape may have changed. In the unstiff, living world, putting pressure on a structure normally changes its shape. Push on an ear, and it happily gives. Take your hand away, and it springs back,” wrote the late Steven Vogel, a biomechanics researcher at Duke University, in his article ‘Better Bent Than Broken’. Vogel argued that flexibility, not stiffness, was more in line with nature’s way.
Building on the same idea, scientists are now moving away from rigid metallic robots towards more flexible forms, called soft robots — machines made of highly compliant materials, including nanomaterials. Unlike conventional metallic robots, which are best suited to assembly-line work, soft robots can deform to fit their task, opening up far more versatile applications.
However, this flexibility presents a unique challenge. While rigid robots operate with a limited number of degrees of freedom, a soft robot’s body can deform continuously, giving it an effectively infinite-dimensional configuration space. Mapping this continuum state space with only a finite set of sensors is hard. To overcome this, a team of researchers from the Massachusetts Institute of Technology has developed a neural network architecture that processes onboard sensor information for optimal task performance.
Soft Robots With Better Sensors
Building soft robots that can perform real-world tasks has been a holy grail in robotics. Unlike metallic robots, which have a finite array of joints whose positions can be tracked directly, soft robots offer no such handle on their state. Moreover, existing algorithms cannot perform the control mapping and motion planning such robots require.
Since a soft robot can take almost any shape, it is difficult to design one that can map its own body coordinates. In the past, external cameras have been used to triangulate the robot’s position and feed that information back to the control program.
MIT researchers have now developed a novel neural architecture that jointly optimises sensor placement and the robot’s performance. To do this, the researchers first divided the robot’s body into regions called particles.
The strain rate of each particle is fed as input to the neural network, which learns, through trial and error, the most efficient sequence of movements needed to complete a task. The network also keeps track of which particles are used most often; the least-used particles are removed from its set of inputs for subsequent trials.

By discarding insignificant particles and concentrating on the most important ones, the network can accurately suggest regions on the robot’s body where sensors should be placed for best results.
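This prune-the-least-used-particles loop can be sketched in a few lines. In the toy version below, a random score stands in for the learned importance of each particle (the paper learns this signal during training, and the exact criterion and schedule here are made up for illustration):

```python
import random

def prune_sensors(num_particles=16, keep=4, rounds=3, seed=0):
    """Toy sketch of importance-based sensor-site pruning.

    Every particle starts as a candidate sensor site. Each round, the
    remaining particles are ranked by an importance score (random here,
    learned in the real system) and the least-used half is dropped,
    until only `keep` sites remain.
    """
    rng = random.Random(seed)
    active = list(range(num_particles))
    for _ in range(rounds):
        if len(active) <= keep:
            break
        # Stand-in for the learned importance of each particle.
        scores = {p: rng.random() for p in active}
        active.sort(key=lambda p: scores[p], reverse=True)
        active = active[:max(keep, len(active) // 2)]
    return sorted(active)

sites = prune_sensors()
print(len(sites), sites)  # four surviving sensor sites
```

The surviving indices are the suggested sensor locations; with a learned score instead of a random one, they would concentrate on the regions most informative for the task.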
“Our model relies purely on intrinsic measurements — specifically, strains and strain rates — and is amenable to physical realisation through off-the-shelf sensors. Since many soft robot representations are nodal in nature, we propose a novel architecture which adopts existing work in point-cloud-based learning and probabilistic sparsification. Our method treats sensor design as the dual of learning, combining physical and digital design in a single end-to-end training process,” said the authors in the paper.
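The “point-cloud-based learning” the authors mention typically means applying one shared network to every point and then pooling with a symmetric function, so the output does not depend on the order of the points — the PointNet family of architectures works this way. A minimal sketch, with hand-picked toy weights and particle readings that are purely illustrative:

```python
def encode_point_cloud(points, weights, bias):
    """PointNet-style encoder sketch: apply a shared linear map + ReLU to
    every point, then max-pool across points so the code is order-invariant."""
    def shared_mlp(p):
        # The same weights are applied identically to each point.
        return [max(0.0, sum(w * x for w, x in zip(row, p)) + b)
                for row, b in zip(weights, bias)]
    features = [shared_mlp(p) for p in points]
    # Symmetric (permutation-invariant) aggregation: elementwise max.
    return [max(col) for col in zip(*features)]

# Each "point" here is one particle's (strain, strain_rate) reading.
cloud = [(0.25, 0.75), (0.5, -0.25), (0.125, 0.0)]
W = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy 2 -> 3 feature weights
b = [0.0, 0.0, 0.0]
code = encode_point_cloud(cloud, W, b)
print(code)  # same code for any ordering of the particles
```

The permutation invariance is what makes this style of network a natural fit for a nodal soft-body representation: the particles have no canonical ordering.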
The team measured the performance of their algorithm against a series of expert predictions. For three different soft robot layouts, the team pitted roboticists against their algorithm: the roboticists were asked to manually choose where sensors should be placed for specific tasks, such as grasping objects. The proposed algorithm was then run on the same designs.
Image: The figure shows a sample reconstruction of an elephant with the contribution of individual sensors.
The humans’ predictions differed significantly from the algorithm’s selections. The team said the results demonstrated that their model ‘vastly outperformed’ humans on each of the tasks.
The model offers two main advantages:
- The model’s latent space offers natural, low-dimensional coordinates for representing a soft robot, with interpretable coordinates capturing different aspects of the robot’s motion.
- The latent feature space learnt during training can be used as an observer model to control the 2D Biped.
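The observer idea above can be sketched concretely: a low-dimensional latent code computed from raw sensor readings serves as the state a controller acts on. In the toy version below, a hand-picked linear projection stands in for the learned latent encoder, and a simple proportional law stands in for the controller — every name and number is hypothetical:

```python
def latent_observer(readings, projection):
    """Project raw sensor readings into a low-dimensional latent code.
    Here the projection is a fixed linear map; in the real system it is
    the encoder learnt during training."""
    return [sum(w * r for w, r in zip(row, readings)) for row in projection]

def toy_controller(latent, gains):
    # Simple proportional feedback on the latent coordinates.
    return [-g * z for g, z in zip(gains, latent)]

readings = [0.5, -0.25, 0.125, 0.0]  # strain-rate readings from 4 sensors
P = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 1.0, 0.0]]           # toy 4 -> 2 projection
z = latent_observer(readings, P)
u = toy_controller(z, gains=[2.0, 2.0])
print(z, u)
```

The point of the construction is that the controller never sees the high-dimensional sensor stream directly — only the compact latent coordinates.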
Read the full paper here.