Date: October 2019.
Source: ACM Transactions on Multimedia Computing, Communications, and Applications, Article No. 87. https://doi.org/10.1145/3337067
Abstract: Artificial data synthesis is currently a well-studied topic with useful applications in data science, computer vision, graphics, and many other fields. Generating realistic data is especially challenging, since human perception is highly sensitive to non-realistic appearance. In recent years, new levels of realism have been achieved by advances in GAN training procedures and architectures. These successful models, however, are tuned mostly for use with regularly sampled data such as images, audio, and video. Despite the successful application of these architectures to such media, applying the same tools to geometric data poses a far greater challenge. Geometric deep learning remains an actively debated topic within the academic community, as geometric objects lack an intrinsic parametrization, which prohibits the direct use of convolutional filters, a main building block of today's machine learning systems. In this article, we propose a new method for generating realistic human facial geometries coupled with overlaid textures. We circumvent the parametrization issue by utilizing a specialized non-rigid alignment procedure and imposing a global mapping from our data to the unit rectangle. This mapping enables the representation of our geometric data as regularly sampled 2D images. We further discuss how to design such a mapping to control distortion and conserve area within the target image. By representing geometric textures and geometries as images, we are able to use advanced GAN methodologies to generate new plausible textures and geometries. We address the often-neglected relationship between texture and geometry and propose different methods for fitting generated geometries to generated textures. In addition, we widen the scope of our discussion and offer a new method for training GAN models on partially corrupted data.
Finally, we provide empirical evidence demonstrating our generative model's ability to produce examples of new facial identities, independent from the training data, while maintaining a high level of realism—two traits that are often at odds.
… Our training data formulation process starts by acquiring high-resolution digital facial scans. Using a 3dMD scanner, roughly 1000 different subjects were scanned, each making five distinct facial expressions, including a neutral expression.
Acknowledgments: We would also like to thank the Intel RealSense group for sharing their data and computational resources with us.
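The abstract's key technical idea is mapping aligned facial geometry onto the unit rectangle so that it can be resampled as a regular 2D "geometry image" suitable for image-based GANs. The sketch below illustrates that resampling step only, with synthetic placeholder data: the `uv` coordinates stand in for the paper's global unit-rectangle mapping, and `xyz` for aligned vertex positions; neither reflects the authors' actual alignment procedure.

```python
# Minimal sketch: resampling a UV-parametrized mesh into a regularly
# sampled geometry image. All data here is synthetic for illustration.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# Toy "mesh": 500 vertices, each with a UV coordinate in the unit
# rectangle (the global mapping) and a 3D position (the geometry).
uv = rng.random((500, 2))                               # (N, 2) in [0, 1]^2
xyz = np.column_stack([uv, np.sin(np.pi * uv[:, 0])])   # (N, 3) fake surface

# Regular 64x64 sampling grid over the unit rectangle.
H = W = 64
gu, gv = np.meshgrid(np.linspace(0, 1, W), np.linspace(0, 1, H))

# Interpolate each coordinate channel onto the grid, yielding an
# (H, W, 3) image that standard image-based GANs can consume.
channels = [griddata(uv, xyz[:, c], (gu, gv), method="linear")
            for c in range(3)]
geometry_image = np.stack(channels, axis=-1)

print(geometry_image.shape)  # (64, 64, 3); cells outside the convex
                             # hull of the UV samples are NaN
```

Textures can be resampled onto the same grid with identical code, which is what makes jointly modeling texture and geometry as aligned image channels possible.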
Article: Synthesizing Facial Photometries and Corresponding Geometries Using Generative Adversarial Networks.
Authors: Gil Shamai, Ron Slossberg, Ron Kimmel. Technion – Israel Institute of Technology, Haifa, Israel.