In our 3D face projects, we use 3D cameras (setup instructions), based on stereo and structured light, to create a 3D 'point cloud' of the face, which is then meshed. Simultaneously, a standard 2D image is captured and automatically registered with the 3D data, as shown below, although we do not use the 2D data in our face recognition experiments. Part of our work has been to collect the UoY 3D Face Dataset.
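For illustration only, the 2D-3D registration can be sketched as a pinhole projection of the point cloud into the calibrated 2D image. This is a minimal Python sketch, not the capture system's actual code; the intrinsics K, extrinsics R and t, and the synthetic data below are all hypothetical placeholders.

```python
import numpy as np

# Hypothetical calibration of the 2D camera relative to the 3D sensor frame:
# K is the 3x3 intrinsic matrix, R and t the rotation and translation (extrinsics).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.zeros(3)

def register_colour(points_3d, image):
    """Project 3D points into the 2D image and sample a colour for each point."""
    cam = (R @ points_3d.T).T + t        # transform points into the camera frame
    pix = (K @ cam.T).T                  # apply the pinhole projection
    pix = pix[:, :2] / pix[:, 2:3]       # perspective divide -> pixel coordinates
    u = np.clip(pix[:, 0].round().astype(int), 0, image.shape[1] - 1)
    v = np.clip(pix[:, 1].round().astype(int), 0, image.shape[0] - 1)
    return image[v, u]                   # one colour sample per 3D point

# Toy usage: random points in front of the camera, a synthetic RGB image.
points = np.random.rand(1000, 3) * [0.2, 0.2, 0.2] + [0.0, 0.0, 1.0]
image = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
colours = register_colour(points, image)   # shape (1000, 3): RGB per 3D point
```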
We have developed a 3D facial recognition system that (i) normalises 3D facial pose, (ii) generates a frontal depth map of the face centred on the nose tip, (iii) extracts surface features, such as horizontal and vertical gradients, curvatures and Sobel edges, and (iv) performs linear discriminant analysis (LDA) on both the raw depth map and the surface features to determine the best features, and the best subspace components of those features, for 3D face verification with a subspace-based nearest-neighbour classifier.
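A minimal sketch of this kind of pipeline is given below, assuming NumPy, SciPy and scikit-learn. It is illustrative only, not the system's actual code: the 32x32 synthetic depth maps, the choice of features, the number of LDA components and the verification threshold are all hypothetical placeholders.

```python
import numpy as np
from scipy import ndimage
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def surface_features(depth_map):
    """Stack a frontal depth map with simple surface features (gradients, Sobel edges)."""
    gy, gx = np.gradient(depth_map)                     # vertical / horizontal gradients
    sob = np.hypot(ndimage.sobel(depth_map, axis=0),
                   ndimage.sobel(depth_map, axis=1))    # Sobel edge magnitude
    return np.concatenate([depth_map.ravel(), gx.ravel(), gy.ravel(), sob.ravel()])

# Toy gallery: 5 subjects x 4 depth maps each (32x32), purely synthetic for illustration.
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(5), 4)
depth_maps = rng.normal(size=(20, 32, 32)) + labels[:, None, None]

X = np.stack([surface_features(d) for d in depth_maps])

# LDA projects the high-dimensional features into a low-dimensional discriminative subspace.
lda = LinearDiscriminantAnalysis(n_components=4)
X_sub = lda.fit_transform(X, labels)

def verify(probe_depth_map, claimed_id, threshold=2.0):
    """Accept the claim if the probe's nearest gallery sample of that identity is close enough."""
    p = lda.transform(surface_features(probe_depth_map)[None, :])
    gallery = X_sub[labels == claimed_id]
    return np.min(np.linalg.norm(gallery - p, axis=1)) < threshold

print(verify(depth_maps[0], claimed_id=0))   # probe matches its own gallery -> True
```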