This page describes the vision-based mobile robot navigation research done under an 18-month EPSRC grant, which completed in March 2002. Visual navigation is a challenging application domain for Computer Vision because the robot must infer three-dimensional structure from two-dimensional images, and because the scene structure and lighting conditions can vary greatly as the robot moves around its environment.
Our approach has been to develop methods to detect and segment the ground plane from the rest of the scene using monocular, uncalibrated vision. Links to publications and video demonstrations are given below.
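One common way to realise ground-plane segmentation with uncalibrated monocular vision is to exploit the fact that, between two robot positions, points on the ground plane map between images via a single homography, while points above the plane (obstacles) show extra parallax. The sketch below illustrates this idea on synthetic data; the homography values, the `classify_ground` helper, and the tolerance are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def classify_ground(pts1, pts2, H, tol=2.0):
    """Classify correspondences as ground-plane points.

    A point pair is labelled 'ground' if transferring pts1 through the
    ground-plane homography H lands within tol pixels of pts2. Off-plane
    points violate the homography and are rejected as obstacles.
    """
    ones = np.ones((pts1.shape[0], 1))
    transferred = np.hstack([pts1, ones]) @ H.T          # apply H in homogeneous coords
    projected = transferred[:, :2] / transferred[:, 2:3]  # back to pixel coords
    err = np.linalg.norm(projected - pts2, axis=1)        # transfer (reprojection) error
    return err < tol

# Synthetic demo: an assumed ground-plane homography between two views.
H = np.array([[1.0, 0.02,  5.0],
              [0.0, 1.10, -3.0],
              [0.0, 0.001, 1.0]])

ground   = np.random.rand(20, 2) * 100.0  # 20 points lying on the ground plane
obstacle = np.random.rand(5, 2) * 100.0   # 5 points on a raised obstacle
pts1 = np.vstack([ground, obstacle])

# Ground points obey H exactly; obstacle points pick up extra parallax.
h = np.hstack([pts1, np.ones((25, 1))]) @ H.T
pts2 = h[:, :2] / h[:, 2:3]
pts2[20:] += 10.0  # simulated parallax offset for the off-plane points

mask = classify_ground(pts1, pts2, H)
print(mask[:20].all(), mask[20:].any())  # ground accepted, obstacle rejected
```

In a real system the homography would be estimated robustly (e.g. with RANSAC) from tracked image features rather than given, and the per-pixel decision would produce the dense drivable-region segmentation shown in the videos below.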
The image below shows our laboratory scene. Notice that two items have been placed in front of the robot. The small box on the right is a true obstacle, whereas the circular piece of paper on the ground can be driven over by the robot.
A short MPEG (17 frames, 309K) shows the segmented drivable region, i.e. what the robot believes is part of the ground plane and can be driven over. The method works for any surface type, irrespective of the features present in the scene. It correctly excludes the obstacle from the segmentation but keeps the flat piece of paper, which lies flush with the ground. The horizon line computed by the robot is also shown in the movie.