VISNAV: Vision-Based Navigation for Mobile Robots

Nick Pears, Bojian Liang, Zezhi Chen


This page describes the vision-based mobile robot navigation research done under an 18-month EPSRC grant, which completed in March 2002. Visual navigation is a challenging application domain of Computer Vision because the robot must infer three-dimensional structure from two-dimensional images, and because the scene structure and lighting conditions can vary greatly as the robot moves around its environment.

Our approach has been to develop methods to detect and segment the ground plane from the rest of the scene using monocular, uncalibrated vision. Links to publications and video demonstrations are given below.
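The core idea behind this kind of ground-plane segmentation can be sketched in code. Under a planar-ground assumption, image features on the floor move between two frames according to a single homography, while features on obstacles do not. The sketch below (a hypothetical illustration, not the project's actual implementation; all function names and the error threshold are assumptions) fits a homography robustly to tracked point correspondences and labels as "ground" those points whose transfer error is small:

```python
# Illustrative sketch of homography-based ground-plane segmentation.
# Assumption: matched 2D feature points between two frames; ground points
# obey one homography, obstacle points do not. Not the authors' code.
import numpy as np

def fit_homography(p1, p2):
    """Estimate a 3x3 homography mapping p1 -> p2 (Nx2 arrays) via DLT."""
    A = []
    for (x, y), (u, v) in zip(p1, p2):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # Solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def transfer_error(H, p1, p2):
    """Per-point distance between H*p1 (dehomogenised) and p2."""
    ph = np.hstack([p1, np.ones((len(p1), 1))]) @ H.T
    proj = ph[:, :2] / ph[:, 2:3]
    return np.linalg.norm(proj - p2, axis=1)

def segment_ground(p1, p2, iters=200, thresh=2.0, seed=0):
    """RANSAC fit of the dominant homography; returns (H, inlier mask).

    Inliers (transfer error below `thresh` pixels, an assumed value)
    are taken to lie on the ground plane; outliers are obstacle points.
    """
    rng = np.random.default_rng(seed)
    n = len(p1)
    best_mask = np.zeros(n, dtype=bool)
    for _ in range(iters):
        idx = rng.choice(n, 4, replace=False)  # minimal 4-point sample
        H = fit_homography(p1[idx], p2[idx])
        mask = transfer_error(H, p1, p2) < thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    # Refit on all inliers of the best consensus set.
    H = fit_homography(p1[best_mask], p2[best_mask])
    return H, transfer_error(H, p1, p2) < thresh
```

Because the fit is robust, a flat piece of paper on the floor would be classified as drivable (its features obey the ground homography), while a box would not, matching the behaviour shown in the videos below. A dense per-pixel segmentation would require more machinery than this sparse-feature sketch.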

Video demonstrations

The image below shows our laboratory scene. Notice that two items have been placed in front of the robot. The small box on the right is a true obstacle, whereas the circular piece of paper on the ground can be driven over by the robot.

A short (17-frame) MPEG (309K) shows the segmented drivable region, i.e. what the robot believes is part of the ground plane and hence can be driven over. The method works for any surface type, irrespective of the features present in the scene. It correctly excludes the obstacle from the segmentation but retains the flat piece of paper, which lies flush with the ground. The horizon line computed by the robot is also shown in the movie.

MPEG results showing the ground plane segmentation system working in a corridor.


MPEG results showing the ground plane segmentation system working with a box shaped obstacle.


Publications

  1. IGR report (40K postscript).
  2. ICIG 2002 paper (1722K gzip postscript).
  3. ICRA 2002 paper (2305K pdf).
  4. IROS 2001 paper (659K pdf).
  5. IARP 2001 paper (950K pdf).
