
How Robots Pick & Roll In Unfamiliar Arenas. Clue: It’s A SLAM Dunk


Until recently, robots have mostly worked in zones where the dimensions of the environment were predefined. Even as robots move out of cages and work in everyday places like warehouses or hospitals, most of them have prior knowledge of their environment and perform a fixed set of tasks.

Now, as robots gain more autonomy and perform tasks in outdoor and indoor environments, they are required to recce their surroundings to function safely and efficiently. Nevertheless, it is not easy for robots to navigate new, complex, and dynamic terrains without pre-existing information.

Here, we take a look at Simultaneous Localization and Mapping (SLAM), the technology that guides autonomous robots in such environments.

What Is SLAM Technology?

SLAM is a computational technique for constructing a virtual map of an agent's surroundings while simultaneously tracking the agent's position within it in real time. This multi-stage process aligns data from several sensors using algorithms that often exploit the parallel processing capabilities of Graphics Processing Units (GPUs).

Without prior knowledge of the robot's location, SLAM can collect spatial information about the agent's environment and build a map to help the robot navigate. This information is gathered using different kinds of sensors. Newer SLAM systems rely on cameras and are called Visual SLAM, or VSLAM.

While earlier technology like GPS can localise an agent, robots typically cannot rely on it: GPS does not work indoors and is not sufficiently accurate outdoors, where the navigation task calls for inch-perfect precision.

How Does It Work?

SLAM uses localisation methods to place a robot within an environment and creates a map to help the robot navigate. These methods come in two forms: relative position measurement and absolute position measurement.

In relative position measurement, SLAM calculates the robot's position from wheel rotations or from inertial measurements such as speed and distance travelled. Sensors like wheel odometers and Inertial Measurement Units (IMUs), known as interoceptive sensors because they measure values internal to the robot, are used for these calculations. However, this method has its limitations, as these sensors are prone to errors that accumulate over time.
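To make the idea concrete, here is a minimal Python sketch of dead reckoning for a hypothetical differential-drive robot. The wheel radius, encoder resolution, and axle length are illustrative assumptions, not values from any particular platform.

```python
import math

# Illustrative constants for a hypothetical differential-drive robot
WHEEL_RADIUS = 0.05    # wheel radius in metres (assumed)
TICKS_PER_REV = 360    # encoder ticks per wheel revolution (assumed)
AXLE_LENGTH = 0.30     # distance between the two wheels in metres (assumed)

def dead_reckon(pose, left_ticks, right_ticks):
    """Update the pose (x, y, heading) from raw wheel-encoder ticks."""
    x, y, theta = pose
    # Convert encoder ticks into distance rolled by each wheel
    d_left = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
    d_right = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV
    d_centre = (d_left + d_right) / 2            # forward motion of the robot
    d_theta = (d_right - d_left) / AXLE_LENGTH   # change in heading
    # Integrate the motion; small per-step encoder errors accumulate,
    # which is exactly why dead reckoning drifts without external fixes.
    x += d_centre * math.cos(theta + d_theta / 2)
    y += d_centre * math.sin(theta + d_theta / 2)
    return (x, y, theta + d_theta)

pose = (0.0, 0.0, 0.0)
pose = dead_reckon(pose, left_ticks=100, right_ticks=120)  # gentle left turn
print(pose)
```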

Absolute position measurement uses exteroceptive sensors, such as cameras and lasers, which collect information from the robot's environment.

Exteroceptive sensors like acoustic sensors emit sonar waves and measure the time of flight (ToF) of the returning echo to estimate range; laser sensors work on the same ToF principle. However, these sensors are less effective in large-scale environments and open corridors.
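The arithmetic behind a ToF reading is straightforward: the pulse travels to the obstacle and back, so the range is half the round trip at the wave's speed. A small illustrative sketch, with assumed timings:

```python
SPEED_OF_SOUND = 343.0            # m/s in air at about 20 degrees C
SPEED_OF_LIGHT = 299_792_458.0    # m/s

def tof_distance(round_trip_seconds, wave_speed):
    """Range from a time-of-flight echo: the pulse travels out and
    back, so the one-way distance is half the round trip."""
    return wave_speed * round_trip_seconds / 2

# A sonar echo returning after 5.8 ms puts the obstacle about 1 m away
print(tof_distance(0.0058, SPEED_OF_SOUND))    # ~0.99 m
# A laser pulse covers the same metre in roughly 6.7 nanoseconds, which
# is why lidar needs far finer timing electronics than sonar does.
print(tof_distance(6.7e-9, SPEED_OF_LIGHT))    # ~1.00 m
```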

With cameras now being used to capture data, the accuracy of SLAM systems has improved markedly. Monocular cameras offer the cheapest and physically smallest solution. Stereo cameras (two cameras with a known separation) can recover the third dimension, depth, but only to a limited range. Many SLAM systems now use RGB-D cameras, which provide depth information directly through structured light or ToF technology, to generate 3D maps.
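For stereo cameras, depth follows from the standard pinhole relation: depth equals focal length times baseline divided by disparity. The sketch below uses made-up camera parameters to show why stereo range is limited: at long distances the disparity shrinks to a few pixels, and small matching errors swamp the estimate.

```python
def stereo_depth(focal_length_px, baseline_m, disparity_px):
    """Pinhole stereo relation: depth = f * B / disparity. Depth error
    grows quadratically with distance, so stereo is only reliable
    out to a limited range."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: point unmatched or at infinity")
    return focal_length_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 12 cm baseline between cameras
print(stereo_depth(700, 0.12, disparity_px=42))  # 2.0 m, a solid estimate
print(stereo_depth(700, 0.12, disparity_px=2))   # 42 m, tiny disparity, noisy
```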

Combining these sensor streams, known as sensor fusion, gives a better estimate of the robot's movement. Kalman filter algorithms and particle filter algorithms, which rely on sequential Monte Carlo methods, are used to fuse the sensor inputs.
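As a toy illustration of the idea, here is a single scalar Kalman-filter update fusing an uncertain odometry estimate with a sharper range measurement. Real SLAM systems run multi-dimensional variants such as the extended Kalman filter, but the weighting logic is the same; all the numbers below are invented for the example.

```python
def kalman_update(estimate, est_var, measurement, meas_var):
    """One scalar Kalman-filter step: blend the current estimate with a
    new measurement, weighting each by its certainty (inverse variance)."""
    gain = est_var / (est_var + meas_var)          # trust in the measurement
    fused = estimate + gain * (measurement - estimate)
    fused_var = (1 - gain) * est_var               # uncertainty always shrinks
    return fused, fused_var

# Fuse a drifting odometry estimate (high variance) with a more precise
# laser range fix (low variance): the result leans toward the laser.
position, var = 5.0, 0.8
position, var = kalman_update(position, var, measurement=5.4, meas_var=0.2)
print(position, var)   # ~5.32 m, variance down to ~0.16
```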

The maps generated are often 2D occupancy grids, which work well for environments with a limited number of objects. With VSLAM, a set of feature points can be tracked across successive camera frames to triangulate their 3D positions.
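Below is a minimal sketch of how a 2D occupancy grid might be updated from a single range reading, assuming a simplified straight-line ray trace; the grid size and cell resolution are arbitrary choices for illustration.

```python
import numpy as np

CELL_SIZE = 0.1                    # metres per grid cell (assumed)
grid = np.full((20, 20), 0.5)      # 0.5 = unknown, 0 = free, 1 = occupied

def mark_range_reading(grid, robot_cell, hit_cell):
    """Update the grid from one range reading: cells along the beam are
    free, the cell where the beam ended is occupied (simplified ray trace)."""
    (r0, c0), (r1, c1) = robot_cell, hit_cell
    steps = max(abs(r1 - r0), abs(c1 - c0))
    for i in range(steps):
        r = round(r0 + (r1 - r0) * i / steps)
        c = round(c0 + (c1 - c0) * i / steps)
        grid[r, c] = 0.0           # the beam passed through: free space
    grid[r1, c1] = 1.0             # the beam stopped here: obstacle

# A reading from cell (10, 0) hitting an obstacle 1.4 m away (14 cells)
mark_range_reading(grid, robot_cell=(10, 0), hit_cell=(10, 14))
print(grid[10, :16])
```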

What Are Its Applications?

VSLAM is used in augmented reality to accurately project virtual images onto the physical world.

The technique is also used in a variety of field robots: rovers and landers on Mars, drones, autonomous ground vehicles, agricultural robots and more make extensive use of VSLAM.

As VSLAM becomes more commercially viable, it might replace GPS in many applications.

Wrapping Up

SLAM is one of the major innovations to come out of the field of embedded vision. The technology has been a game-changer in improving the autonomy of robots.

With so many potential applications across sectors, the technology is poised for wide adoption in the coming years.

Kashyap Raibagi

Kashyap currently works as a Tech Journalist at Analytics India Magazine (AIM). Reach out at kashyap.raibagi@analyticsindiamag.com