Date: September 2020.
Source: 3D Imaging, Analysis and Applications. Springer, Cham.
Abstract: With prior knowledge and experience, people can easily observe rich shape and texture variation for a certain type of object, such as human faces, cats or chairs, in both 2D and 3D images. This ability helps us recognise the same person, distinguish different kinds of creatures and sketch unseen samples of the same object class. The process of capturing prior knowledge relating to the normal variations in an object class is mathematically interpreted as statistical modelling. The outcome of such a modelling process is a morphable model: a compact description of an object class that captures the shape variance of its training set and can thus act as a useful prior in many Computer Vision applications. Here, we are particularly concerned with 3D shape, and so we refer to the concept of a 3D Morphable Model (3DMM). However, in many applications, it is also important to capture and model the associated texture, where that texture is registered (i.e. aligned) with the shape data. Typically, a 3DMM is a vector-space representation of objects that captures the variation of both shape and texture. Any convex combination of the vectors of a set of object class examples generates a real and valid example in this vector space. Morphable models have many applications in creative media, medical image analysis and biometrics, as they provide a useful encoding and prior statistical distribution of both shape and texture. In this chapter, we introduce 3DMMs, review their historical context and recent literature, and describe both classical and modern 3DMM construction pipelines that exploit deep learning. We also review the publicly available models that such pipelines generate. Finally, we illustrate the power of 3DMMs via case studies and examples. Throughout the chapter, our exemplar models are 3DMMs of the human face and head, as these are widely employed in the Computer Vision and Graphics literature.
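The convex-combination property mentioned in the abstract can be sketched in a few lines of NumPy. This is a toy illustration with random stand-in data, not the chapter's actual construction pipeline: each registered example shape is flattened to a 3N-dimensional vector, and a new valid shape is generated as a weighted mixture whose weights are non-negative and sum to one.

```python
import numpy as np

# Toy data: three registered example shapes, each with N vertices,
# flattened to a 3N-dimensional vector (x, y, z per vertex).
# Real 3DMMs use dense, correspondence-registered scans instead.
N = 4  # number of vertices per shape (toy size)
rng = np.random.default_rng(0)
shapes = rng.standard_normal((3, 3 * N))  # one example shape per row

# Convex-combination weights: non-negative and summing to 1.
w = np.array([0.5, 0.3, 0.2])
assert (w >= 0).all() and np.isclose(w.sum(), 1.0)

# The mixture lies in the convex hull of the examples, so it is
# itself a valid shape vector of the same dimensionality.
new_shape = w @ shapes
print(new_shape.shape)  # (12,)
```

The same weighted mixture applies to registered texture vectors, which is why the abstract stresses that texture must be aligned with the shape data: the combination is only meaningful when corresponding entries describe corresponding points on the object.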

Article: 3D Morphable Models: The Face, Ear and Head.
Authors: H Dai, N Pears, P Huber, WAP Smith, Department of Computer Science, University of York, UK.