A poetry slam is an event in which artists compete with one another by performing spoken word poetry… Oh, wait, we’re not talking about that kind of slam? Good! Because as much as we love art, we love software and algorithms much more. In this article, we’re delighted to present to you the complex world of SLAM, a highly advanced area of mapping terrain and localizing in it in real time. To some, it might seem as simple as strapping a camera onto a drone, but when you look into it more in depth, like we tend to do, it becomes a much more complex and fascinating technology. Let us tell you more about what SLAM is, which algorithms can be used in SLAM, and what the differences between them are. All with help from our own in-house SLAM specialists!
What does simultaneous localization and mapping do?
If you don’t know too much about autonomous robotics, highly advanced software, and futuristic optimization technologies, when you hear SLAM, you might think it’s the name of the technology itself. Nothing could be further from the truth! Simultaneous localization and mapping is a computational problem that needs to be solved, and you solve it using a variety of different algorithms. So, in a way, SLAM doesn’t do anything on its own; it poses a question that we need to answer. The solutions to the SLAM problem, the algorithms and implementation methods, are what actually “do” something.

So what is the problem that SLAM poses? Well, it’s in the name: simultaneous localization and mapping. What does that mean, and where can we use it? As the name suggests, simultaneous localization and mapping is a process in which we determine the position (x, y, z) and orientation (yaw, pitch, roll) of a robot, camera, or other device in a new area, while simultaneously building a map of that area.

There are a few key areas in which SLAM is utilized. The most prominent ones are self-driving cars and other autonomous vehicles, including underwater, aerial, and even space vehicles. But SLAM is also used in more mundane corners of our daily lives, such as the domestic robots we already own. You must have heard about robotic vacuum cleaners and cleaning robots, such as the iRobot Roomba. You very well might even own one! The better models use SLAM to orient themselves in your home and create a map of it, so that they can clean more efficiently without going over the same spot twice. With a properly mapped area, they’re also able to calculate the most effective cleaning route, saving not only time but also the energy needed to later recharge the autonomous cleaning robot.

That’s not all SLAM can be used for, though. According to the official journal of Shanghai Chest Hospital, SLAM technologies have great potential in image-guided surgery applications.
Robotic surgery and the use of advanced autonomous robots are the future of advanced medicine, allowing for a more optimal, less invasive surgical process. If you’d like to read more about the use of SLAM technologies in surgery and medicine, we recommend this fascinating article written by specialists from the Department of Information Engineering of the Marche Polytechnic University in Italy.
What does SLAM software do?
As we already established, SLAM is one of the methods most commonly used for autonomous vehicles, allowing us to create a map and localize our vehicle within it at the same time. The process of simultaneous localization and mapping is usually very similar in each case. We take our robot, vehicle, or anything else we want to use to create our map and localize, and place it in the environment we want mapped. The robot starts scanning its surroundings the moment it’s placed there, and based on the successive scans it gathers, it orients itself within the map it has built so far. The robot simultaneously moves, localizes itself, and creates a map of its environment.

To make sure that the map being created is correct, the robot constantly compares every new piece of information with what it has already gathered, making sure everything fits together into a cohesive whole. For example, if you were to draw a circle with your eyes closed, you’d probably finish it in a different spot from where the circle started. When the robot is moving around its environment, it also might not end up exactly where it started. Luckily, it’s able to recognize the discrepancy and correct for it (a step known as loop closure), so that the metaphorical circle is completed accurately. Generally speaking, autonomous robots are actually fairly smart and utilize multiple technologies to actively improve their performance.
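The move–sense–localize–map cycle described above can be sketched as a short skeleton. Note that this is illustrative pseudocode only: the `robot` and `map_` objects and all of their method names are hypothetical, invented here to show the control flow, and do not correspond to any real library API.

```python
def slam_loop(robot, map_):
    """Illustrative skeleton of one SLAM cycle (hypothetical API)."""
    while robot.is_running():
        robot.move()                  # execute the next motion command
        scan = robot.sense()          # read the current sensor scan
        pose = map_.match(scan)       # localize: align the scan against the map so far
        map_.integrate(scan, pose)    # mapping: add the new scan at the estimated pose
        if map_.detects_loop(pose):   # loop closure: we've returned to a known place
            map_.correct_drift(pose)  # adjust past poses so the "circle" closes
```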
What is the best SLAM algorithm?
As we already established, SLAM is a problem that needs solving, and we solve it using various algorithms. Some of the most popular approaches to estimating the state, that is, the position and orientation of the robot along with their derivatives (velocity, acceleration), include:
- The particle filter
- The extended Kalman filter
- Covariance intersection
The Kalman filter and the particle filter are the two filters we’ll be focusing on in today’s article.
What is a Kalman filter in simple terms?
Rudolf Emil Kálmán was a revolutionary engineer and mathematician of Hungarian descent, nowadays most renowned for co-inventing the Kalman filter. The Kalman filter is an algorithm that provides estimates of unknown variables given the measurements observed over time. It uses a model of the robot’s motion to predict its next position, which is then combined with measurements of the robot’s position. The combined estimate is more accurate than one based on the prediction or the measurement alone. The Kalman filter has been used in many great endeavors of humankind, including the Apollo program and other NASA space programs. The basic Kalman filter assumes that the system is linear and that the sensor noise has a Gaussian distribution; the extended Kalman filter mentioned above relaxes the linearity assumption by linearizing the system around the current estimate.
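To make the predict-and-update cycle concrete, here is a minimal one-dimensional sketch of the idea in plain Python. The function name and the process/measurement noise values `q` and `r` are our own illustrative choices, not taken from any particular library or from the article’s sources:

```python
def kalman_1d(measurements, motions, q=0.1, r=0.5):
    """Minimal 1-D Kalman filter sketch.

    measurements: noisy position readings z_1..z_n
    motions: commanded moves u_1..u_n (the motion model is x' = x + u)
    q, r: assumed process and measurement noise variances
    Returns the final position estimate and its variance.
    """
    x, p = 0.0, 1.0  # state estimate and its uncertainty (variance)
    for u, z in zip(motions, measurements):
        # Predict: apply the motion model, inflate uncertainty by process noise
        x, p = x + u, p + q
        # Update: blend prediction with measurement, weighted by the Kalman gain
        k = p / (p + r)        # gain -> 1 if prediction uncertain, -> 0 if confident
        x = x + k * (z - x)
        p = (1 - k) * p
    return x, p
```

With consistent motions and measurements, the estimate tracks the true position while the variance `p` shrinks toward a steady value, which is exactly the “prediction combined with measurement” behavior described above.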
What is a particle filter in SLAM?
The particle filter, also known as a sequential Monte Carlo method, works quite differently from the Kalman filter. Both filters are algorithms that recursively update an estimated state of an object based on a sequence of observations. The main difference is that the Kalman filter maintains a single state estimate updated by linear projections, while the particle filter represents the state as a whole population of hypotheses (“particles”) and uses simulation to estimate the likelihoods of many predicted states at once. This lets the particle filter handle nonlinear, non-Gaussian problems where the plain Kalman filter’s assumptions break down. Both filters are viable and helpful when solving the SLAM problem.
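One step of the particle filter can be sketched as predict, weight, resample. Again, this is a toy one-dimensional sketch under assumed noise values (`sensor_noise`, `motion_noise` are illustrative, and the Gaussian likelihood is just one common weighting choice), not a production implementation:

```python
import math
import random

def particle_filter_step(particles, u, z, sensor_noise=0.5, motion_noise=0.2):
    """One predict/weight/resample cycle of a toy 1-D particle filter.

    particles: list of hypothesized positions
    u: commanded motion; z: the position measurement for this step
    """
    # Predict: move every particle, adding noise to model motion uncertainty
    moved = [p + u + random.gauss(0.0, motion_noise) for p in particles]
    # Weight: particles close to the measurement get higher weight (Gaussian likelihood)
    weights = [math.exp(-((p - z) ** 2) / (2 * sensor_noise ** 2)) for p in moved]
    total = sum(weights)
    if total == 0.0:  # all particles implausible; keep them unweighted as a fallback
        return moved
    weights = [w / total for w in weights]
    # Resample: draw a new particle set in proportion to the weights
    return random.choices(moved, weights=weights, k=len(moved))
```

Started from a spread-out cloud of particles and iterated over successive motions and measurements, the cloud collapses around the true position; the mean of the particles then serves as the state estimate.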
There’s so much more to SLAM than just the two filters we mentioned above. We didn’t even mention LIDAR or SLAM benchmarks yet! Stay tuned for part 2, in which we’ll continue the interview with our SLAM specialist, tell you all about particular SLAM methods, and show you interesting projects in which SLAM was used. Is there something you’d like to know about this technology? Let us know and we’ll gladly answer any questions you may have. We’re constantly on the lookout for inspiration and new experiences. So, what are your experiences with SLAM? Have you taken part in an interesting project involving this problem? Let us know; we would love to talk and learn new perspectives. Make sure to follow us to see part two of this article and so much more!