Introduction to the SLAM Algorithm
Nowadays there are self-driving cars that, as their name suggests, can drive without a human driver, and robot vacuum cleaners such as the Roomba that clean a room without assistance. Neither would be possible if these machines could not perceive the environment they operate in. This perception is achieved through a technique called SLAM.
What is SLAM?
SLAM stands for Simultaneous Localization and Mapping. It is widely used in robotics to map a robot's surroundings while simultaneously estimating the robot's position within those surroundings (localization). To perform SLAM, a robot needs at least two kinds of sensors.
First, odometry sensors measure relative changes in the robot's state. In the simplest case, the state is the robot's coordinates, and an odometry reading is the difference between the robot's current position and its position one time step T earlier, where T is the interval at which the sensor reports a change (1 second, 5 minutes, or any other sampling period).
Second, a time-of-flight sensor, most commonly a lidar or a depth camera, helps the robot map its surroundings. It gathers information such as how far objects are from the robot: the distance to walls, chairs, boxes, or any other rigid object.
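To make the odometry idea concrete, here is a minimal sketch of how successive odometry readings can be integrated into a pose estimate. The function name, units, and the pose representation `(x, y, heading)` are illustrative assumptions, not part of any particular SLAM library:

```python
import math

def integrate_odometry(pose, d_dist, d_theta):
    """Update an (x, y, heading) pose from one odometry reading.

    d_dist  -- distance travelled since the last reading (assumed metres)
    d_theta -- change in heading since the last reading (radians)
    """
    x, y, theta = pose
    theta += d_theta
    x += d_dist * math.cos(theta)
    y += d_dist * math.sin(theta)
    return (x, y, theta)

# Drive forward 1 m, turn 90 degrees left, then drive forward 1 m.
pose = (0.0, 0.0, 0.0)
for d_dist, d_theta in [(1.0, 0.0), (0.0, math.pi / 2), (1.0, 0.0)]:
    pose = integrate_odometry(pose, d_dist, d_theta)

print(pose)  # roughly (1.0, 1.0, pi/2)
```

With perfect readings this recovers the true path; in practice each reading carries noise, which is exactly why SLAM treats the pose probabilistically, as discussed next.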
Probability!
Before proceeding to the intuition behind SLAM, one must understand probability. Probability (and statistics) plays a crucial role in robotics because all sensors are noisy, and even if a sensor were perfectly precise and accurate, the world around us is not ideal and there will always be uncertainty. For example, if you were asked to measure the distance between two points with a ruler, you could be certain only up to a centimetre and would be uncertain about further decimal places. You can use more precise instruments, but infinite precision and accuracy are impossible. So we express a sensor reading as a normal distribution described by a mean and a standard deviation. Every sensor is specified with its deviation, and the mean is the value the sensor should ideally output.
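The paragraph above can be sketched in a few lines: we model a distance sensor as drawing readings from a normal distribution around the true value. The true distance and the sigma value below are made-up illustrative numbers, not from any real datasheet:

```python
import random
import statistics

TRUE_DISTANCE = 2.0  # ground-truth distance in metres (unknown to the robot)
SENSOR_SIGMA = 0.05  # standard deviation, as a datasheet might specify

random.seed(42)  # fixed seed so the sketch is reproducible
readings = [random.gauss(TRUE_DISTANCE, SENSOR_SIGMA) for _ in range(1000)]

# Averaging many noisy readings recovers estimates close to the
# underlying mean and deviation of the sensor model.
print(round(statistics.mean(readings), 3))   # close to 2.0
print(round(statistics.stdev(readings), 3))  # close to 0.05
```

This is why a single reading is never trusted on its own: the belief is built from the distribution, not from one sample.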
Terminologies Used in SLAM
- State: In SLAM, the state describes the robot's environment: the people, boxes, obstacles, and other objects around the robot that it can sense. The state can be categorized as static or dynamic.
- A static state remains unchanged over time, such as a wall, a tree, or any other immovable object.
- A dynamic state may change in the future, such as humans, balls, or tables.
- The state also includes properties of the robot itself, such as its speed relative to the map and its orientation; even the accuracy of its sensors can be accounted for in the state.
- Belief: Belief is how the robot perceives its environment, i.e. its internal knowledge of the state. Belief is only the closest estimate of the state: because sensor data is inaccurate, the relative positions of the surroundings that the sensors report are also inaccurate, and so is the perception built from them. In other words, the state is the true representation of the environment, while belief is the robot's estimated perception of it.
The first image represents the true state around the rover; the second image is the map the rover builds using a time-of-flight sensor such as a lidar.
Intuition Behind SLAM
As its name suggests, SLAM requires a robot to simultaneously create a map of the environment and localize itself within that map. In other words, the SLAM algorithm computes the robot's position and orientation while at the same time mapping the environment.
At every timestamp, the beams of a lidar (or any other time-of-flight sensor) scan the environment while, in parallel, the odometry data records how much the robot has moved. Together these let SLAM build a map and report the robot's coordinates with respect to that map.
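One timestep of this loop can be sketched as follows: predict the new pose from odometry, then project each range/bearing reading from the scan into map coordinates. This is a bare-bones illustration under the assumption of noise-free readings; real SLAM systems do both steps probabilistically:

```python
import math

def slam_step(pose, odometry, scan):
    """One timestep: predict the pose from odometry, then place each
    range/bearing reading from the scan into the map frame.

    pose     -- (x, y, theta) before the step
    odometry -- (d_dist, d_theta) reported by the wheel encoders
    scan     -- list of (range, bearing) pairs from the lidar
    """
    x, y, theta = pose
    theta += odometry[1]
    x += odometry[0] * math.cos(theta)
    y += odometry[0] * math.sin(theta)

    # Convert each sensor-relative reading into a map-frame landmark.
    landmarks = [
        (x + r * math.cos(theta + b), y + r * math.sin(theta + b))
        for r, b in scan
    ]
    return (x, y, theta), landmarks

pose = (0.0, 0.0, 0.0)
pose, landmarks = slam_step(pose, (1.0, 0.0), [(2.0, 0.0), (1.0, math.pi / 2)])
print(pose)       # (1.0, 0.0, 0.0)
print(landmarks)  # an obstacle ahead near (3, 0) and one to the left near (1, 1)
```

Notice that any error in the predicted pose is copied straight into the landmark positions, which is exactly the coupling that makes SLAM hard, as the next section shows.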
Problems faced by SLAM
Problem with Localization
Here the robot estimates its pose given that it already has a map of the environment, for example a map of trees.
Estimating the pose by integrating odometry readings over time is known as dead reckoning, and because the readings are noisy, the robot's estimated position gradually drifts away from its true position.
Problem with Mapping
In mapping, the robot knows its pose but does not know the landmarks, i.e. the map of the environment. It builds the map by measuring the distances to different landmarks, but again, because of noise, these estimates will diverge.
Problem with SLAM
Here the errors of both localization and mapping are present, as seen in the picture. Notice the circular dependency: for better localization the robot must have a better map, and for a better map the robot must have a good pose.
Solution
To get a better estimate of both the map and the position, we introduce probability.
SLAM algorithms are recursive: just as a recursive function in computer science calls itself, SLAM feeds its own output back in as input.
It takes the means and variances of the sensor readings, outputs a probabilistic belief of the environment, and uses that output to generate the robot's next belief.
This, in essence, is how robots see the world.
Over time, sensor inaccuracies accumulate and the belief of the environment becomes less certain, but it converges after a while. Some advanced mathematics is used to stabilize the algorithm, which would require an article of its own.
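The recursive predict-then-update cycle can be illustrated with a one-dimensional Gaussian belief (the simplest case of a Kalman filter). All numbers here are made up for the sketch; the point is that motion inflates the variance and each measurement shrinks it again:

```python
def predict(mean, var, motion, motion_var):
    """Motion step: the belief shifts and its uncertainty grows."""
    return mean + motion, var + motion_var

def update(mean, var, measurement, measurement_var):
    """Measurement step: fuse the belief with a sensor reading.
    The combined variance is smaller than either input variance."""
    new_mean = (measurement_var * mean + var * measurement) / (var + measurement_var)
    new_var = (var * measurement_var) / (var + measurement_var)
    return new_mean, new_var

mean, var = 0.0, 1000.0  # start almost completely unsure of the position
for motion, measurement in [(1.0, 1.0), (1.0, 2.1), (1.0, 2.9)]:
    mean, var = predict(mean, var, motion, motion_var=0.5)
    mean, var = update(mean, var, measurement, measurement_var=0.2)

# After a few cycles the variance has collapsed from 1000 to well under 1:
# the belief has converged despite the noisy inputs.
print(round(mean, 2), round(var, 3))
```

Each loop iteration uses the previous belief as input to produce the next one, which is exactly the recursion described above.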
Screenshot of SLAM performed with ROS: the green patch indicates the probability distribution of the robot's pose, and purple represents the probable distance of the walls.
If a SLAM algorithm estimates only the most recent pose rather than the full trajectory across all timestamps, it is called online SLAM; otherwise it is called full SLAM.
Overall, the objective of SLAM is to find the best estimate of the state in which the robot lies.
Sensors used for SLAM
Here are some sensors commonly used for SLAM:
- Encoders are attached to the robot's wheels and count how much the motors have rotated.
- A gyroscope or IMU senses changes in the robot's orientation.
- Lidar is a widely used time-of-flight sensor that uses light to measure the distance between the sensor and the object at which the beam is shone.
- Depth cameras are special cameras that output not only an image but also how far each object is from the camera.
- GPS is also widely used to track the translational motion of the system, and successive positions can be used to infer its heading.