Improving facial analysis and performance driven animation through disentangling identity and expression
Affiliation: 1. Département de génie informatique et génie logiciel, École Polytechnique Montréal, Montréal, Québec H3T 1J4, Canada; 2. Département d'informatique et de recherche opérationnelle, Université de Montréal, Montréal, Québec H3C 3J7, Canada
Abstract: We present techniques for improving performance-driven facial animation, emotion recognition, and facial key-point (landmark) prediction using learned identity-invariant representations. Established approaches to these problems can work well if sufficient examples and labels for a particular identity are available and the factors of variation are highly controlled. However, labeled examples of facial expressions, emotions, and key-points for new individuals are difficult and costly to obtain. In this paper we improve the ability of these techniques to generalize to new and unseen individuals by explicitly modeling previously seen variations related to identity and expression. We use a weakly-supervised approach in which identity labels are used to learn the factors of variation linked to identity separately from those related to expression. We show how probabilistic modeling of these sources of variation allows one to learn identity-invariant representations of expression, which can then be used to identity-normalize various procedures for facial expression analysis and animation control. We also show how to extend the widely used active appearance models and constrained local models by replacing their underlying point distribution models, typically constructed using principal component analysis, with identity–expression factorized representations. We present a wide variety of experiments in which we consistently improve performance on emotion recognition, markerless performance-driven facial animation, and facial key-point tracking.
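To make the factorized point-distribution-model idea concrete, the sketch below shows the core algebra the abstract describes: a landmark shape is modeled as a mean plus separate identity and expression components, and identity normalization amounts to discarding the identity coefficients after fitting. This is a minimal illustration, not the paper's implementation; the bases `W_id` and `W_expr` are random orthonormal stand-ins for what the paper would learn with weakly-supervised identity labels, and all dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

n_points = 5              # landmarks; (x, y) pairs flattened to a 10-dim vector
d = 2 * n_points
k_id, k_expr = 3, 2       # latent dimensions for identity and expression

# Hypothetical learned bases. In the paper these would come from
# weakly-supervised training; here they are random orthonormal columns.
basis = np.linalg.qr(rng.normal(size=(d, k_id + k_expr)))[0]
W_id, W_expr = basis[:, :k_id], basis[:, k_id:]
mean_shape = rng.normal(size=d)

# Factorized point distribution model:
#   shape = mean + W_id @ z_id + W_expr @ z_expr
z_id = rng.normal(size=k_id)
z_expr = rng.normal(size=k_expr)
shape = mean_shape + W_id @ z_id + W_expr @ z_expr

# Fitting an observed shape: recover both coefficient sets jointly
# by least squares against the stacked basis.
W = np.hstack([W_id, W_expr])
z_hat, *_ = np.linalg.lstsq(W, shape - mean_shape, rcond=None)

# Identity normalization: drop the identity part, keep only expression.
normalized = mean_shape + W_expr @ z_hat[k_id:]
```

Because the stacked basis has orthonormal columns here, the least-squares fit recovers the generating coefficients exactly; with learned, non-orthogonal bases the same fit still separates the two factors as long as the columns are linearly independent.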
Indexed in ScienceDirect and other databases.