How Robots Pick & Roll In Unfamiliar Arenas. Clue: It’s A SLAM Dunk

Until recently, robots have mostly worked in zones where the dimensions of the environment were predefined. Even as robots move out of cages and work in everyday places like warehouses or hospitals, most of them have prior knowledge of their environment and perform a fixed set of tasks.

Now, as robots gain more autonomy and perform tasks in both indoor and outdoor environments, they are required to recce their surroundings to function safely and efficiently. However, navigating new, complex, and dynamic terrain without pre-existing information is no easy feat.

Here, we take a look at Simultaneous Localisation and Mapping (SLAM), the technology that guides autonomous robots in such environments.

What Is SLAM Technology?

SLAM is a computational technique for constructing a virtual map of an agent's surroundings while simultaneously tracking the agent's position within that map in real time. This multi-stage process aligns sensor data using multiple algorithms, often exploiting the parallel processing capabilities of Graphics Processing Units (GPUs).

Without prior knowledge of the robot's location, SLAM can collect spatial information about the agent's environment and build a map to help the robot navigate. This information is collected using different kinds of sensors. A relatively newer variant uses cameras and is called Visual SLAM, or VSLAM.

While older technology like GPS can localise an agent or a human being, robots can't rely on it: GPS cannot be deployed in indoor settings and is not sufficiently accurate outdoors, where navigation calls for inch-perfect precision.

How Does It Work?

SLAM uses localisation methods to place a robot in an environment and creates a map to help the robot navigate. These methods come in two forms – relative position measurement and absolute position measurement.

In relative position measurement, SLAM calculates the robot's position from wheel rotations or from sensors that gauge inertial quantities such as speed or distance travelled. Sensors like wheel odometers and Inertial Measurement Units (IMUs), aka interoceptive sensors, which measure values internal to the robot, are used for these calculations. However, this method has its limitations, as the sensors are prone to errors that accumulate over time – a drift the sketch below illustrates.
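To make the idea concrete, here is a minimal dead-reckoning sketch in Python for a hypothetical differential-drive robot. The wheel radius, wheel base, and encoder resolution are illustrative assumptions, not values from any real platform.

```python
import math

# Illustrative parameters for a hypothetical differential-drive robot.
WHEEL_RADIUS = 0.05    # metres
WHEEL_BASE = 0.30      # metres between the two wheels
TICKS_PER_REV = 1024   # encoder ticks per wheel revolution

def update_pose(x, y, theta, left_ticks, right_ticks):
    """Integrate one pair of wheel-encoder readings into the pose estimate."""
    d_left = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
    d_right = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV
    d_center = (d_left + d_right) / 2           # distance travelled
    d_theta = (d_right - d_left) / WHEEL_BASE   # change in heading
    # Integrate along the midpoint heading for a slightly better estimate.
    x += d_center * math.cos(theta + d_theta / 2)
    y += d_center * math.sin(theta + d_theta / 2)
    return x, y, theta + d_theta

pose = (0.0, 0.0, 0.0)
for left, right in [(120, 125), (118, 122), (121, 119)]:
    pose = update_pose(*pose, left, right)
print(pose)
```

Because every update compounds encoder noise and wheel slip, the estimated pose drifts further from the true pose with each step – which is exactly the limitation described above.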

Absolute position measurement relies on exteroceptive sensors – such as acoustic sensors, lasers, and cameras – which collect information from the robot's environment.

Acoustic sensors emit sonar pulses and measure the time of flight (ToF) of the returning echo to estimate distance; laser sensors apply the same ToF principle using light pulses. However, these sensors are less effective in large-scale environments and open corridors.
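The ToF arithmetic itself is simple: distance is the wave speed multiplied by half the round-trip time, since the pulse travels out and back. A minimal sketch, with illustrative timings:

```python
SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 °C
SPEED_OF_LIGHT = 3.0e8   # m/s, for laser ToF

def tof_distance(round_trip_seconds, wave_speed):
    """The pulse travels to the obstacle and back, so halve the round trip."""
    return wave_speed * round_trip_seconds / 2

# A sonar echo returning after 5.8 ms implies an obstacle about 1 m away;
# a laser pulse covers the same distance in a few nanoseconds.
print(tof_distance(5.8e-3, SPEED_OF_SOUND))   # ~0.99 m
print(tof_distance(6.7e-9, SPEED_OF_LIGHT))   # ~1.0 m
```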

With cameras now being used to capture data, the accuracy of SLAM systems has improved markedly. Monocular cameras offer cheaper and physically smaller solutions. Stereo cameras (read: two cameras) can recover the third dimension, depth, but only to a limited range. Many SLAM systems now use RGB-D cameras, which provide depth information directly through structured light or ToF technology to generate 3D maps.
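For stereo cameras, depth follows from the disparity between the two views: Z = f·B/d, for focal length f, baseline B, and disparity d. The sketch below uses assumed camera parameters and shows why the usable range is limited – as distance grows, disparity shrinks toward a single pixel and depth resolution collapses.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from a rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 12 cm baseline.
# Large disparities give precise nearby depths; a 1 px disparity
# maps to 84 m, where a single-pixel error changes depth enormously.
for d in (70, 7, 1):
    print(d, "px disparity ->", stereo_depth(700, 0.12, d), "m")
```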

Combining the sensor streams – known as sensor fusion – gives a better estimate of the robot's movement. Kalman filter algorithms and particle filter algorithms, which rely on sequential Monte Carlo methods, are used to fuse these sensor inputs.
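As a simplified illustration of the filtering idea, here is a one-dimensional Kalman filter that fuses an odometry-based prediction with an absolute position measurement. The noise variances are made-up values chosen only for demonstration.

```python
def kalman_step(x, p, u, z, q=0.05, r=0.1):
    """One predict-update cycle of a 1-D Kalman filter.

    x, p : current position estimate and its variance
    u    : odometry-reported displacement (prediction input)
    z    : absolute position measurement (e.g. from a range sensor)
    q, r : assumed process and measurement noise variances
    """
    # Predict: move by the odometry estimate; uncertainty grows.
    x_pred = x + u
    p_pred = p + q
    # Update: blend in the measurement, weighted by the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0   # start with high uncertainty
for u, z in [(0.5, 0.48), (0.5, 1.05), (0.5, 1.49)]:
    x, p = kalman_step(x, p, u, z)
    print(round(x, 3), round(p, 4))
```

Unlike raw odometry, the fused estimate's variance shrinks with each measurement instead of growing without bound.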

The maps generated are typically 2-D occupancy grids when the environment contains a limited number of objects. With VSLAM, a set of points can be tracked through successive camera frames to triangulate their 3D positions.
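A toy sketch of the occupancy-grid idea: each range reading marks the cell where it terminates as occupied. Production systems store per-cell log-odds and update them probabilistically; the grid size and cell resolution here are arbitrary assumptions.

```python
import numpy as np

# A 10 m x 10 m map at 10 cm resolution; 0 = free/unknown, 1 = occupied.
GRID_SIZE, CELL = 100, 0.1
grid = np.zeros((GRID_SIZE, GRID_SIZE), dtype=np.uint8)

def mark_hit(robot_xy, bearing_rad, range_m):
    """Mark the cell where a range reading terminates as occupied."""
    hx = robot_xy[0] + range_m * np.cos(bearing_rad)
    hy = robot_xy[1] + range_m * np.sin(bearing_rad)
    i, j = int(hy / CELL), int(hx / CELL)
    if 0 <= i < GRID_SIZE and 0 <= j < GRID_SIZE:
        grid[i, j] = 1

# Three simulated laser returns from a robot at the map centre.
for bearing, rng in [(0.0, 2.0), (np.pi / 4, 1.5), (np.pi / 2, 3.0)]:
    mark_hit((5.0, 5.0), bearing, rng)
print(grid.sum(), "cells marked occupied")
```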

What Are Its Applications?

VSLAM is used in augmented reality tasks to accurately project virtual images onto the physical world.

The technology is also used in a variety of field robots. Rovers and landers on Mars, drones, autonomous ground vehicles, agricultural robots, and more make extensive use of VSLAM.

VSLAM, as it becomes more commercially viable, might replace GPS in most applications.

Wrapping Up

SLAM is one of the major innovations to come out of the field of embedded vision. The technology has been a game-changer in improving the autonomy of robots.

With so many potential applications across sectors, the technology is poised for wide adoption in the coming years.
