One of the fundamental problems of robotics, localization, can be understood as the robot asking the question ‘Where am I now?’ Given a map of the robot’s surrounding environment and models of the robot’s motion and sensors, the task is to determine where the robot is. Imagine that you are in complete darkness in a familiar room and have only your sense of touch. Touching a familiar object helps you localize in the room. As you leave that familiar object and walk across the floor, your uncertainty grows until your senses and recollection of the room help you localize again. The Kidnapped Robot Problem is similar. Imagine that a robot with a map and behavioral models is placed at an unknown location with an unknown heading. How might you program it to localize, i.e. determine its current state or ‘pose’?
The algorithm runs like this:
1. Initialize a set of randomized particles, in other words hypotheses about the robot’s pose.
2. Gather data about the physical environment, including sensor readings and motion (odometry) information.
3. Assign each particle a weight based on how well it fits the gathered data.
4. Resample a new set of particles from the current set. Particles with higher weights contribute more particles to the next round.
5. Then repeat from step 2.
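The loop above can be sketched for a one-dimensional robot moving along a corridor. The corridor length, the noise parameters, and the simple distance-based weighting function are illustrative assumptions, not part of the algorithm itself:

```python
import random

def mcl_step(particles, control, measurement, motion_noise=0.1, sensor_noise=0.5):
    """One iteration of the MCL loop on 1-D positions (illustrative sketch)."""
    # Steps 2-3: move each particle by the control input plus noise, then
    # weight it by how well it agrees with the position measurement.
    moved = [p + control + random.gauss(0, motion_noise) for p in particles]
    weights = [1.0 / (1.0 + abs(p - measurement) / sensor_noise) for p in moved]
    # Step 4: resample a new population in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

# Step 1: initialize random hypotheses over a hypothetical 0-20 m corridor.
particles = [random.uniform(0, 20) for _ in range(500)]

# Step 5: repeat. Here the robot advances 1 m per step from position 2.0,
# and an idealized sensor reports its true position.
true_pos = 2.0
for _ in range(10):
    true_pos += 1.0
    particles = mcl_step(particles, control=1.0, measurement=true_pos)
```

After a few iterations the particle cloud concentrates around the true position, which is the convergence behavior the algorithm relies on.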
There are two common ways to compute the final hypothesis of the robot’s pose. One is to take the weighted average of all the particles; the other is to take the particle with the greatest weight.
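Both estimators are a few lines each. In this sketch the particles are 1-D positions; the example values are made up to show how an outlier affects the two estimates differently:

```python
def weighted_mean_pose(particles, weights):
    """Estimate the pose as the weight-normalized average of all particles."""
    total = sum(weights)
    return sum(p * w for p, w in zip(particles, weights)) / total

def best_particle_pose(particles, weights):
    """Estimate the pose as the single particle with the greatest weight."""
    return max(zip(particles, weights), key=lambda pw: pw[1])[0]

# Hypothetical particle set: three hypotheses near 2.0 and one outlier at 8.0.
particles = [1.9, 2.0, 2.1, 8.0]
weights = [0.3, 0.4, 0.25, 0.05]

weighted_mean_pose(particles, weights)  # near 2.0, pulled slightly toward 8.0
best_particle_pose(particles, weights)  # exactly 2.0
```

The weighted mean smooths over all hypotheses but can be skewed by outliers or a multi-modal population; the best-particle estimate ignores everything except the single strongest hypothesis.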
One failure mode of MCL is when the population of hypotheses lacks diversity and converges away from the true pose. One can always restart the process, but there are improvements to MCL that strike a balance between explorative diversity of hypotheses and exploitative focus on the best ones. By monitoring the quality of the population of hypotheses, i.e. the particle “weights”, over time, one can inject more or fewer new hypotheses into the population.
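One way this monitoring is often realized in the robotics literature is to track a short-term and a long-term average of the particle weights and inject random particles when the short-term average drops well below the long-term one. The sketch below follows that idea; the smoothing rates `alpha_slow` and `alpha_fast`, the 0-10 m corridor in `random_pose`, and the `state` dictionary are all illustrative assumptions:

```python
import random

def augmented_resample(particles, weights, state, alpha_slow=0.05, alpha_fast=0.5,
                       random_pose=lambda: random.uniform(0, 10)):
    """Resample, injecting random hypotheses when weight quality drops abruptly.

    `state` carries the slow/fast weight averages between calls; `random_pose`
    draws a fresh hypothesis (here a 1-D position in a hypothetical corridor).
    """
    w_avg = sum(weights) / len(weights)
    # Exponential smoothing at two time scales: `w_slow` tracks the long-run
    # weight quality, `w_fast` tracks the recent quality.
    state["w_slow"] += alpha_slow * (w_avg - state["w_slow"])
    state["w_fast"] += alpha_fast * (w_avg - state["w_fast"])
    # If recent weights are much worse than the long-run average, inject
    # random particles with probability max(0, 1 - w_fast / w_slow).
    p_random = max(0.0, 1.0 - state["w_fast"] / state["w_slow"])
    out = []
    for _ in particles:
        if random.random() < p_random:
            out.append(random_pose())        # explorative: fresh hypothesis
        else:
            out.append(random.choices(particles, weights=weights, k=1)[0])
    return out

# The caller keeps the averages alive across iterations, e.g.:
state = {"w_slow": 1.0, "w_fast": 1.0}
```

While the weights stay healthy, `p_random` stays near zero and this behaves like ordinary resampling; a sudden collapse in weights (as after a kidnapping) raises `p_random` and seeds the population with new hypotheses.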