Statistical 3D Face Shape Estimation from Occluding Contours

Abstract

This paper addresses the problem of 3D face shape approximation from occluding contours, i.e., the boundaries between the facial region and the background. To this end, a linear regression process that models the relationship between a set of 2D occluding contours and a set of 3D vertices is applied to the corresponding training sets using Partial Least Squares. The result of this step is a regression matrix capable of estimating new 3D face point clouds from the out-of-training 2D Cartesian pixel positions of the selected contours. Our approach benefits from the highly correlated spaces spanned by the 3D vertices around the occluding boundaries of a face and their corresponding 2D pixel projections. As a result, the proposed method resembles dense surface shape recovery from missing data. Our technique is evaluated over four scenarios designed to investigate both the influence of the contours included in the training set and the number of contours considered. Qualitative and quantitative experiments demonstrate that using contours outperforms the state of the art on the database used in this article, and that even a limited number of contours provides a useful approximation to the 3D face surface.
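To illustrate the general idea of regressing 3D shape from 2D contour coordinates with Partial Least Squares, the sketch below uses scikit-learn's PLSRegression on randomly generated placeholder data. The array shapes, component count, and variable names are illustrative assumptions, not the paper's actual data, preprocessing, or implementation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Placeholder training data (random, for illustration only):
#   X: flattened 2D contour pixel coordinates, one row per training face
#   Y: flattened 3D vertex coordinates of the corresponding face surfaces
n_faces, n_contour_points, n_vertices = 200, 150, 1000
rng = np.random.default_rng(0)
X = rng.standard_normal((n_faces, n_contour_points * 2))  # (x, y) per contour point
Y = rng.standard_normal((n_faces, n_vertices * 3))        # (x, y, z) per vertex

# Fit a PLS regression mapping contour coordinates to 3D shape.
pls = PLSRegression(n_components=30)
pls.fit(X, Y)

# Estimate the 3D point cloud of an out-of-training face from its contours.
new_contours = rng.standard_normal((1, n_contour_points * 2))
estimated_shape = pls.predict(new_contours).reshape(n_vertices, 3)
print(estimated_shape.shape)  # (1000, 3)
```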

Publication
In Computer Vision and Image Understanding
Will Smith
Professor in Computer Vision