Capture, Learning, and Synthesis of 3D Speaking Styles. D Cudeiro, T Bolkart, C Laidlaw, A Ranjan, MJ Black.

Date: June 2019. Source: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA. Proceedings Page(s): 10093-10103. Abstract: Audio-driven 3D facial animation has been widely explored, but achieving realistic, human-like performance is still unsolved. This is due to the lack of available 3D datasets, models, and standard evaluation metrics. To address…

Learning to Regress 3D Face Shape and Expression from an Image without 3D Supervision. S Sanyal, T Bolkart, H Feng, MJ Black.

Date: June 2019. Source: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA. Proceedings Page(s): 7755-7764. Abstract: The estimation of 3D face shape from a single image must be robust to variations in lighting, head pose, expression, facial hair, makeup, and occlusions. Robustness requires a large training set of in-the-wild…

Dense 3D Face Decoding Over 2500 FPS: Joint Texture and Shape Convolutional Mesh Decoders. Y Zhou, J Deng, I Kotsia, S Zafeiriou.

Date: June 2019. Source: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA. Abstract: 3D Morphable Models (3DMMs) are statistical models that represent facial texture and shape variations using a set of linear bases, in particular Principal Component Analysis (PCA). 3DMMs were used as statistical priors for reconstructing 3D…
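To illustrate the linear-basis idea mentioned in the abstract, the sketch below shows a generic PCA-style 3DMM reconstruction: a face shape is the mean mesh plus a weighted combination of principal components. The array sizes and random placeholders are purely illustrative, not taken from this paper's model.

```python
import numpy as np

def reconstruct_face(mean_shape, shape_basis, shape_params):
    """Linear 3DMM: a face is the mean shape plus a weighted sum of PCA bases."""
    # mean_shape:  (3N,)   flattened mean mesh (x1, y1, z1, ..., xN, yN, zN)
    # shape_basis: (3N, K) principal components of shape variation
    # shape_params:(K,)    per-face coefficients
    return mean_shape + shape_basis @ shape_params

# Toy placeholders standing in for a learned model (sizes are illustrative only).
rng = np.random.default_rng(0)
num_vertices, num_components = 5023, 50
mean_shape = rng.normal(size=3 * num_vertices)
shape_basis = rng.normal(size=(3 * num_vertices, num_components))
shape_params = rng.normal(size=num_components)

vertices = reconstruct_face(mean_shape, shape_basis, shape_params).reshape(-1, 3)
print(vertices.shape)  # (5023, 3) vertex positions
```

A texture model works analogously, with a mean texture and a separate PCA basis over per-vertex colors.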

Expressive Body Capture: 3D Hands, Face, and Body from a Single Image. G Pavlakos, V Choutas, N Ghorbani, T Bolkart, AA Osman, D Tzionas, MJ Black.

Date: June 2019. Source: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA. Proceedings Page(s): 10967-10977. Abstract: To facilitate the analysis of human actions, interactions and emotions, we compute a 3D model of human body pose, hand pose, and facial expression from a single monocular image. To achieve this, we…
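This paper introduced the SMPL-X body model, which the authors released together with a `smplx` Python package. As a hedged sketch only, loading the model and producing a mesh might look like the following; the model folder path, parameter sizes, and zero-valued inputs are placeholders, not values from the paper.

```python
import torch
import smplx  # pip install smplx; model files must be downloaded separately

# Create an SMPL-X model (assumes the downloaded model files live under 'models/').
model = smplx.create(
    'models',
    model_type='smplx',
    gender='neutral',
    num_betas=10,              # body shape coefficients
    num_expression_coeffs=10,  # facial expression coefficients
)

betas = torch.zeros(1, 10)       # neutral body shape
expression = torch.zeros(1, 10)  # neutral facial expression
output = model(betas=betas, expression=expression, return_verts=True)

print(output.vertices.shape)  # vertices of the generated SMPL-X mesh
print(output.joints.shape)    # body, hand, and face joint locations
```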

3DFaceGAN: Adversarial Nets for 3D Face Representation, Generation, and Translation. S Moschoglou, S Ploumpis, MA Nicolaou et al.

Date: May 2019. Source: International Journal of Computer Vision (2020). https://doi.org/10.1007/s11263-020-01329-8. Abstract: Over the past few years, Generative Adversarial Networks (GANs) have garnered increased interest among researchers in Computer Vision, with applications including, but not limited to, image generation, translation, imputation, and super-resolution. Nevertheless, no GAN-based method has been proposed in the literature that can…

MeshMonk: Open-source large-scale intensive 3D phenotyping. JD White, A Ortega-Castrillón, H Matthews et al.

Date: April 2019. Source: Scientific Reports 9, 6085. https://doi.org/10.1038/s41598-019-42533-y. Abstract: Dense surface registration, commonly used in computer science, could aid the biological sciences in accurate and comprehensive quantification of biological phenotypes. However, few toolboxes exist that are openly available, non-expert friendly, and validated in a way relevant to biologists. Here, we report a customizable toolbox…

AMASS: Archive of Motion Capture as Surface Shapes. N Mahmood, N Ghorbani, NF Troje, G Pons-Moll, MJ Black.

Date: April 2019. Source: Cornell University Library – arXiv.org, Computer Vision and Pattern Recognition. Abstract: Large datasets are the cornerstone of recent advances in computer vision using deep learning. In contrast, existing human motion capture (mocap) datasets are small and the motions limited, hampering progress on learning models of human motion. While there are many…

UV-GAN: Adversarial Facial UV Map Completion for Pose-invariant Face Recognition. J Deng, S Cheng, N Xue, Y Zhou, S Zafeiriou.

UVDB is a dataset developed for training the proposed UV-GAN. It was built from three different sources; the first subset contains 3,564 subjects scanned with a 3dMD device (3,564 unique identities with six expressions, 21,384 unique UV maps in total).

Face to face – in realistic 3D. UK experts develop super-realistic animation system.

Date: March 2005. Source: Science X, phys.org. Summary: Computing experts at Cardiff University, UK, are developing a super-realistic animation system that simulates the movements of a face based on speech. The team in the School of Computer Science has developed highly advanced software that continually learns the facial dynamics associated with a speaker and…