The York Ear Model (YEM) and the York 3D Ear Dataset
Hang Dai, Nick Pears, Will Smith, Department of Computer Science, University of York, UK
The York Ear Model (YEM) is a 3D Morphable Model (3DMM) of the human ear. The model construction process consists of initialisation using a 3D data set, and augmentation using a landmarked 2D data set. In particular, the following datasets are used:
Initialisation: we use the SYMARE dataset [1], which contains 10 pairs of 3D ear scans.
Augmentation: we use ibug's In-the-wild Ear Database (Collection A - annotated). This dataset [2] contains 605 ear images, each annotated with 55 landmarks; we use 500 of these images.
For further details on the model construction process, please see our paper:
A Data-augmented 3D Morphable Model of the Ear
H. Dai, N. E. Pears and W. Smith
Proc. 2018 IEEE Int. Conf. Automatic Face and Gesture Recognition, pp 404-408
[DOI]
The 3D Morphable Model (3DMM)
In the image below, the mean shape and the first five principal
components are shown for +3SD (top row) and -3SD (bottom row).
Animation of principal components of shape variation
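The shapes above are generated in the standard 3DMM way: a sample is the mean shape plus a weighted sum of principal components, with each weight expressed in standard deviations. The sketch below illustrates this with a toy random "model"; the variable names and dimensions are hypothetical and do not reflect the YEM file format, which is documented in the released Matlab scripts.

```python
import numpy as np

# Fabricated toy model for illustration only; the real YEM package
# supplies the mean shape, PCA basis, and variances in .mat files.
rng = np.random.default_rng(0)
n_vertices = 100                # toy vertex count (the real ear mesh is larger)
n_components = 5                # first five principal components, as in the figure
mean_shape = rng.standard_normal(3 * n_vertices)            # stacked (x, y, z)
# Orthonormal basis: one column per principal component.
basis = np.linalg.qr(rng.standard_normal((3 * n_vertices, n_components)))[0]
std_devs = np.array([5.0, 4.0, 3.0, 2.0, 1.0])              # per-component SDs

def synthesize(coeffs_in_sd):
    """Shape = mean + sum_i (c_i * sd_i) * basis_i, with c_i in units of SD."""
    coeffs = np.asarray(coeffs_in_sd, dtype=float)
    return mean_shape + basis @ (coeffs * std_devs)

# First principal component at +3SD and -3SD, matching the rows of the figure:
plus3 = synthesize([+3, 0, 0, 0, 0])
minus3 = synthesize([-3, 0, 0, 0, 0])
verts_plus3 = plus3.reshape(n_vertices, 3)   # back to an (N, 3) vertex array
```

Note that the +3SD and -3SD shapes are symmetric about the mean, which is why averaging the two rows of the figure would recover the mean shape.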
The 3D Dataset
500 3D ear images are available in both Matlab format (.mat) and PLY format (.ply).
The first 4 images in the dataset are shown below.
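The released package includes Matlab scripts for viewing the data, but the .ply files can also be read in other environments. As a rough sketch, a minimal reader for ASCII PLY vertex data might look like the following; this is an illustration, not part of the released package, and assumes ASCII-format files (for binary PLY or full property support, use a dedicated library such as `plyfile` or Open3D).

```python
import numpy as np

def read_ascii_ply_vertices(path):
    """Minimal reader for ASCII PLY files: returns an (N, 3) array of
    vertex x, y, z coordinates. Ignores faces and extra properties."""
    with open(path) as f:
        assert f.readline().strip() == "ply", "not a PLY file"
        n_vertices = 0
        line = ""
        while line != "end_header":
            line = f.readline().strip()
            if line.startswith("element vertex"):
                n_vertices = int(line.split()[-1])
        # Vertex lines follow the header; take the first three floats (x, y, z).
        verts = [list(map(float, f.readline().split()[:3]))
                 for _ in range(n_vertices)]
    return np.array(verts)
```

For example, `read_ascii_ply_vertices("ear_001.ply")` (a hypothetical filename) would return the mesh vertices ready for plotting or further processing.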
Public release of the model and associated 3D dataset (Release date: 27th July 2018)
The ear model and associated 3D dataset are freely available for research and education purposes.
To obtain access to the model and dataset, you need to complete and sign the
user agreement form.
This agreement should be completed by a full-time academic staff member (not a student).
The form should be signed, and emailed to Nick Pears (nick.pears@york.ac.uk).
We will verify your request and contact you on how to download the model package.
To do this, we will share a compressed tarball (280MB, .tar.gz) via Google Drive, so you will need a Google account.
The download contains Matlab scripts for viewing both the model and the dataset.
Note that the agreement requires that:
The model and/or dataset are used for non-commercial research and education purposes only.
You agree not to copy, sell, trade, or exploit the model and/or dataset, or parts thereof, for any commercial purposes.
In any published research using the models and/or dataset, you cite the following paper:
A Data-augmented 3D Morphable Model of the Ear
H. Dai, N. E. Pears and W. Smith
Proc. 2018 IEEE Int. Conf. Automatic Face and Gesture Recognition, pp 404-408
References
[1] C. T. Jin, P. Guillon, N. Epain, R. Zolfaghari, A. van Schaik, A. I. Tew, C. Hetherington and J. Thorpe, Creating the Sydney York Morphological and Acoustic Recordings of Ears database, IEEE Transactions on Multimedia, vol. 16, no. 1, pp. 37-46, Jan. 2014, ISSN: 1520-9210. DOI: 10.1109/TMM.2013.2282134
[2] Y. Zhou and S. Zafeiriou, Deformable Models of Ears in-the-wild for Alignment and Recognition, Proc. 2017 IEEE Int. Conf. Automatic Face and Gesture Recognition (FG 2017), 2017, pp. 626-633
Project sponsors
We thank
Google, via their Faculty Research Awards Programme: Headspace Online (2017-18), with sponsor Forrester Cole.