
Putting Sensors Where It Matters: MIT Researchers’ Algorithm Adds More Flex To Soft Robots


“Strength means resistance not to bending or to other deformations but to actual breakage, however the shape may have changed. In the unstiff, living world, putting pressure on a structure normally changes its shape. Push on an ear, and it happily gives. Take your hand away, and it springs back,” wrote the late Steven Vogel, a biomechanics researcher at Duke University, in his article ‘Better Bent Than Broken’. Vogel argued that flexibility, not stiffness, is more in line with nature’s way.

Building on the same concept, scientists are now moving away from rigid metallic robots to more flexible forms, called soft robots — made of highly compliant materials, including nanomaterials. Unlike conventional metallic robots, which are best suited to assembly-line work, soft robots can have more versatile applications thanks to their shape-as-you-want feature.

However, this flexibility presents a unique challenge. While rigid robots operate with a limited number of degrees of freedom, a soft robot’s configuration lives in an infinite-dimensional space. Mapping this continuum state space with only a finite set of sensors is difficult. To overcome this, a team of researchers from the Massachusetts Institute of Technology has developed a neural network architecture that processes onboard sensor information for optimal task performance.

Soft Robots With Better Sensors

Building soft robots that can perform real-world tasks has been a holy grail in robotics. Unlike metallic robots, which have a finite set of joints whose positions can be tracked, soft robots offer no such tractability. Moreover, existing algorithms cannot perform the control mapping and motion planning such robots require.

Since soft robots can take any shape, it is challenging to design one that can map its own body coordinates. In the past, external cameras have been used to triangulate the robot’s position and feed the information back to the control program.

MIT researchers have now developed a novel neural architecture that optimises sensor placement and the robot’s performance. For this, researchers first divided the robot’s body into regions called particles.

Each particle’s rate of strain is fed as input to the neural network, which, through trial and error, learns the most efficient sequence of movements required to complete a task. The network also keeps track of the most often used particles; the lesser-used ones are removed from the set of inputs for the network’s next trials.

By removing insignificant particles and optimising the most important ones, the network can accurately suggest regions on the robot where the sensors could be placed for best results.
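The pruning loop described above can be sketched as follows. This is an illustrative simplification rather than the paper’s implementation: here each particle’s importance score is approximated by the variance of its strain-rate trace, whereas the actual method derives importance from the trained network itself, and the function name `prune_sensors` is hypothetical.

```python
import numpy as np

def prune_sensors(strain_histories, keep_fraction=0.5):
    """Keep only the most informative fraction of candidate sensor
    sites (particles), ranked by an importance score. Here the score
    is simply the variance of each particle's strain-rate trace -- a
    stand-in for the usage statistics the trained network tracks."""
    importance = strain_histories.var(axis=1)
    num_keep = max(1, int(len(importance) * keep_fraction))
    keep_idx = np.argsort(importance)[::-1][:num_keep]  # most important first
    return np.sort(keep_idx)

# Toy data: 8 particles, each a sinusoidal strain-rate trace whose
# amplitude (and hence variance) grows with the particle index.
t = np.linspace(0.0, 10.0, 100)
signals = np.linspace(0.1, 1.0, 8)[:, None] * np.sin(t)[None, :]
kept = prune_sensors(signals, keep_fraction=0.25)
print(kept)  # -> [6 7], the two highest-variance particles
```

Iterating this kind of selection over successive training trials shrinks the candidate set toward the regions where physical sensors pay off most.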

“Our model relies purely on intrinsic measurements — specifically, strains and strain rates — and is amenable to physical realisation through off-the-shelf sensors. Since many soft robot representations are nodal in nature, we propose a novel architecture which adopts existing work in point-cloud-based learning and probabilistic sparsification. Our method treats sensor design as the dual of learning, combining physical and digital design in a single end-to-end training process,” said the authors in the paper.
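The “probabilistic sparsification” the authors mention is commonly realised by attaching a relaxed keep/drop gate to each candidate sensor, so that sensor selection itself becomes differentiable and trainable end to end. Below is a minimal sketch of one standard relaxation (a Gumbel-sigmoid gate); the paper’s exact parameterisation may differ, and `soft_sensor_mask` is a hypothetical name.

```python
import numpy as np

def soft_sensor_mask(logits, temperature=0.5, rng=None):
    """Continuous relaxation of per-sensor keep/drop decisions.
    Each logit is a learned score; adding Gumbel noise and squashing
    through a tempered sigmoid yields a soft mask in (0, 1) that can
    be annealed toward hard 0/1 selections during training."""
    rng = rng or np.random.default_rng(0)
    u = rng.uniform(1e-6, 1 - 1e-6, size=np.shape(logits))
    gumbel = -np.log(-np.log(u))  # Gumbel(0, 1) noise
    return 1.0 / (1.0 + np.exp(-(np.asarray(logits) + gumbel) / temperature))

logits = np.array([3.0, -3.0, 0.5])  # learned per-sensor keep scores
mask = soft_sensor_mask(logits)
print(mask)  # soft keep-probabilities, one per candidate sensor
```

Multiplying each sensor’s reading by its mask value during training lets the network learn which inputs it can afford to drop, which is the “dual of learning” framing the authors describe.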

Performance Evaluation

The team measured the performance of their algorithm against a series of expert predictions. For three different soft robot layouts, the team pitted roboticists against the algorithm: the roboticists were asked to manually choose where sensors should be placed for specific tasks, such as grasping objects. Next, the proposed algorithm was run on the same layouts.

Figure: A sample reconstruction of an elephant, showing the contribution of individual sensors.

The human predictions differed significantly from the algorithm’s selections. The team said the results demonstrated their model ‘vastly outperformed’ humans on each of the tasks.

Wrapping Up

The model offers two main advantages:

  • The model’s latent space offers natural, low-dimensional coordinates for describing a soft robot, with interpretable coordinates representing different aspects of the robot’s motion.
  • The latent feature space learnt during training can be used as an observer model to control a 2D biped.
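The second point, using the learnt latent space as an observer, can be illustrated with a toy linear example. Here a fixed encoder matrix stands in for the trained network, compressing four strain readings into a two-dimensional latent state on which a feedback controller acts; all names, matrices, and values below are hypothetical.

```python
import numpy as np

def encode(sensor_readings, W_enc):
    """Map raw sensor readings to a low-dimensional latent state,
    playing the role of a state observer."""
    return W_enc @ sensor_readings

def control(latent, K):
    """Simple linear feedback computed on the latent state."""
    return -K @ latent

W_enc = np.array([[0.5, 0.5, 0.0, 0.0],
                  [0.0, 0.0, 0.5, 0.5]])  # 4 strain sensors -> 2 latents
K = np.eye(2)                             # feedback gain on the latent state
readings = np.array([0.2, 0.4, -0.1, 0.3])

z = encode(readings, W_enc)   # -> [0.3, 0.1]
u = control(z, K)             # -> [-0.3, -0.1]
print(z, u)
```

The point of the design is that the controller never needs the full continuum state: a compact latent code recovered from a handful of sensors is enough to close the loop.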

Read the full paper here.

PS: The story was written using a keyboard.
Shraddha Goled

I am a technology journalist with AIM. I write stories focused on the AI landscape in India and around the world with a special interest in analysing its long term impact on individuals and societies. Reach out to me at shraddha.goled@analyticsindiamag.com.