How does a robot localize itself?

One of the fundamental problems of robotics, localization, can be understood as the robot asking the question ‘Where am I now?’ Given a map of the robot’s surrounding environment and models of the robot’s motion and sensors, determine where the robot is. Imagine that you are in complete darkness in a familiar room and have only your sense of touch. Touching a familiar object helps you localize in the room. As you leave that familiar object and walk across the floor, your uncertainty grows until your senses and your recollection of the room help you localize again. The Kidnapped Robot Problem is similar. Imagine that a robot with a map and motion and sensor models is placed at an unknown location with an unknown heading. How might you program it to localize, i.e. determine its current state or ‘pose’?

Monte Carlo Localization (MCL) is an algorithm that begins with a set of random hypotheses about where the robot might be, scattered all over the map and in every heading. As the robot moves and senses, a Darwinian survival-of-the-fittest process tends to multiply the most likely hypotheses and kill off the least likely, gradually evolving a cloud of hypotheses whose center, i.e. average, is the most likely robot position and heading. Each of these hypotheses is called a “particle”, since the technique derives from Particle Filtering.

The algorithm runs like this:

1. Initialize a set of randomized particles, in other words hypotheses about the robot’s location and heading.

2. Gather data about the physical environment including sensor information and motion information.

3. Look at each particle and assign it a weight based on how well it fits the data gathered.

4. Resample a new set of particles from the current set. Particles with higher weights spawn more copies in the next round.

5. Then repeat from step 2.
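The loop above can be sketched in Python for a toy robot on a 1-D corridor. The corridor world, the Gaussian noise values, and the function name are illustrative assumptions for this sketch, not part of the original post:

```python
import math
import random

def mcl_step(particles, motion, measurement, landmarks,
             motion_noise=0.1, sensor_noise=0.5):
    """One iteration of Monte Carlo Localization on a 1-D corridor.

    particles: list of hypothesized positions (floats).
    motion: commanded displacement since the last step.
    measurement: measured distance to the nearest landmark.
    landmarks: known landmark positions from the map.
    """
    moved, weights = [], []
    for p in particles:
        # Step 2: apply the motion model (with noise) to each hypothesis.
        p_new = p + motion + random.gauss(0.0, motion_noise)
        # Step 3: weight the hypothesis by how well its predicted
        # sensor reading matches the actual measurement (Gaussian likelihood).
        predicted = min(abs(p_new - lm) for lm in landmarks)
        w = math.exp(-((predicted - measurement) ** 2)
                     / (2 * sensor_noise ** 2))
        moved.append(p_new)
        weights.append(w)

    # Step 4: resample in proportion to weight (survival of the fittest).
    total = sum(weights) or 1e-12
    probs = [w / total for w in weights]
    return random.choices(moved, weights=probs, k=len(particles))
```

Repeatedly calling `mcl_step` as the robot moves and senses (step 5) collapses the initially uniform particle cloud onto the true position once the measurements rule out the mirror-image hypotheses.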

There are two common ways to compute the final hypothesis of the robot’s pose. One is to take the weighted average of all the particles; the other is to take the single particle with the greatest weight.
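Both estimates are a few lines each; the function name below is illustrative:

```python
def estimate_pose(particles, weights):
    """Two common pose estimates from a weighted 1-D particle set:
    (a) the weighted average of all particles, and
    (b) the single highest-weight particle."""
    total = sum(weights)
    mean = sum(w * p for p, w in zip(particles, weights)) / total
    best = max(zip(particles, weights), key=lambda pw: pw[1])[0]
    return mean, best
```

One caveat for the averaging option: for the heading component a plain average is wrong across the ±π wrap-around (the average of 359° and 1° should be 0°, not 180°), so in practice one averages the sines and cosines of the headings instead.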

One failure mode of MCL occurs when the population of hypotheses loses diversity and misses the true pose. One can always restart the process, but there are improvements to MCL that strike a balance between explorative diversity of hypotheses and exploitative focus on the best ones. By monitoring the quality of the population of hypotheses, i.e. the particle “weights”, over time, one can inject more or fewer fresh random hypotheses into the population.
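One standard version of this monitoring is the “Augmented MCL” heuristic: track a short-term and a long-term running average of the mean particle weight, and when recent weights fall below the long-term trend, replace a corresponding fraction of particles with fresh random ones. The smoothing parameters below are illustrative assumptions:

```python
def injection_fraction(avg_weight, state, alpha_slow=0.05, alpha_fast=0.5):
    """Augmented-MCL heuristic: compare short-term (fast) and long-term
    (slow) exponential averages of the mean particle weight. Returns the
    fraction of particles to replace with fresh random hypotheses.
    `state` is a dict carrying the running averages between calls."""
    state["w_slow"] += alpha_slow * (avg_weight - state["w_slow"])
    state["w_fast"] += alpha_fast * (avg_weight - state["w_fast"])
    # If recent weights drop relative to the long-term trend, inject.
    return max(0.0, 1.0 - state["w_fast"] / state["w_slow"])
```

While the weights stay steady the fraction is zero; a sudden drop in average weight (e.g. the cloud has converged on the wrong spot, or the robot was kidnapped again) makes the fast average undershoot the slow one, triggering injection of new random particles.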

 

Zuozhi Yang
