Similar Literature
20 similar documents found (search time: 31 ms)
1.
This paper provides a comprehensive survey of techniques for human facial modeling and animation. The survey is carried out from two perspectives: facial modeling, which concerns how to produce 3D face models, and facial animation, which concerns how to synthesize dynamic facial expressions. To generate an individual face model, one can either individualize a generic model or combine face models from an existing face collection. With respect to facial animation, the techniques are further categorized into simulation-based, performance-driven and shape-blend-based approaches. The strengths and weaknesses of the techniques within each category are discussed, along with their applications. In addition, a brief historical review of how the techniques have evolved is provided. Limitations and future trends are discussed, and conclusions are drawn at the end of the paper.

2.
We present four techniques for modeling and animating faces starting from a set of morph targets. The first technique obtains parameters to control individual facial components and learns the mapping from one type of parameter to another using machine-learning techniques. The second fuses visible speech and facial expressions in the lower part of the face. The third combines coarticulation rules with kernel-smoothing techniques. Finally, a new 3D tongue model with flexible and intuitive skeleton controls is presented. Results on eight animated character models demonstrate that these techniques are powerful and effective.
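The abstract gives no implementation details; purely as a hedged illustration of the kernel-smoothing idea in the third technique, the sketch below smooths a per-frame viseme parameter track with a Gaussian kernel (the function name, kernel choice and width are assumptions, not taken from the paper).

```python
import numpy as np

def gaussian_kernel_smooth(values, sigma=2.0):
    """Smooth a 1D sequence of animation parameters with a Gaussian kernel.

    values: per-frame parameter track (e.g. a jaw-open weight for a viseme).
    sigma:  kernel width in frames; larger values blend neighbouring frames
            more strongly (a crude stand-in for coarticulation effects).
    """
    values = np.asarray(values, dtype=float)
    frames = np.arange(len(values))
    smoothed = np.empty_like(values)
    for t in range(len(values)):
        # Nadaraya-Watson style weighted average of nearby frames.
        w = np.exp(-0.5 * ((frames - t) / sigma) ** 2)
        smoothed[t] = np.sum(w * values) / np.sum(w)
    return smoothed

# Example: a step-like viseme activation becomes a smooth transition.
track = np.array([0, 0, 0, 1, 1, 1, 0, 0, 0], dtype=float)
print(gaussian_kernel_smooth(track, sigma=1.5))
```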

3.
Implementation and Application of Realistic Virtual Faces   Total citations: 2 (self-citations: 0, by others: 2)
We implemented an interactive tool for face modeling and animation. From a person's front and side photographs, the user can easily construct a 3D model of the head, generate expressions and animation for that specific face on the model, and synchronize lip shapes with speech. Building on these techniques, we implemented a virtual-face animation component that can be embedded in Windows applications to give users a more novel and friendly interface.

4.
Based on the physiological structure of the human face, the face is modeled with a patch-based modeling technique, and a parametric muscle model built on "face-shape features" and "facial-feature features" is proposed. Different expressions are formed by controlling the attribute values of the control vertices on the model, and a real-time interactive interface is implemented with the MAYA MEL language. This makes the study of facial expressions more convenient and flexible, and makes the modeling and its animation more realistic and practical.

5.
Based on 3D scan data, this paper extracts the 3D motion of facial feature points of a specific person and converts it into FAP training data. Independent component analysis (ICA) is then applied to the acquired data to obtain generic facial animation patterns, and the ICA parameter space is finally used to generate facial expressions for arbitrary specific persons. Experimental results show that ICA provides a more compact and accurate generic representation of facial animation than PCA: with the same number of components, the reconstruction error of ICA is smaller than that of PCA. The expression parameters capture the independence and correlation of different parts of the animated face, improving the realism of facial animation across different expressions.
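As a hedged sketch of how such an ICA-versus-PCA comparison could be set up (the FAP data here is synthetic, and the dimensions, component count and use of scikit-learn are assumptions; the actual reconstruction errors depend entirely on the data), one could proceed as follows:

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
# Stand-in for FAP training data: rows are animation frames, columns are FAP values.
X = rng.standard_normal((500, 68)) @ rng.standard_normal((68, 68))

def reconstruction_mse(model, data):
    """Project the data into the component space and back, then measure the error."""
    coded = model.fit_transform(data)
    decoded = model.inverse_transform(coded)
    return np.mean((data - decoded) ** 2)

k = 10  # number of retained components, kept equal for both methods
print("PCA reconstruction MSE:", reconstruction_mse(PCA(n_components=k), X))
print("ICA reconstruction MSE:", reconstruction_mse(
    FastICA(n_components=k, whiten="unit-variance", random_state=0), X))
```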

6.

Manual creation of facial expressions can be supported by a set of easy-to-apply rules that relate adjectives to combinations of facial design elements. This article analyses viewers' cognition of artificial facial expressions in an objective and scientific way. We chose four adjectives – ‘satisfied’, ‘sarcastic’, ‘disdainful’, and ‘nervous’ – as the experimental subjects. The key manipulable facial-expression factors (eyebrows, eyes, pupils, mouth and head rotation) were permuted and combined to create 81 stimuli of different facial expressions on a 3-D face model, which were used in a survey. We then used Quantification Theory Type I to find the combinations that participants most agreed on as representing these adjectives. The conclusions of this research are that: (1) there are differences in the facial features used when creating artificial characters' expressions versus recognising real humans' expressions; (2) surveys and statistics can scientifically analyse viewers' cognition of facial expressions as their form changes; and (3) the results of this research can improve designers' efficiency in working with subtler facial expressions.

7.
We present a lightweight non-parametric method for generating wrinkles for 3D facial modeling and animation. The key lightweight feature of the method is that it can generate plausible wrinkles using a single low-cost Kinect camera and one high-quality, detailed 3D face model as the example. Our method works in two stages: (1) offline personalized wrinkled blendshape construction, in which user-specific expressions are recorded with the RGB-Depth camera and the wrinkles are generated through example-based synthesis of geometric details; and (2) online 3D facial performance capture, in which the reconstructed expressions are used as blendshapes to capture facial animation in real time. Experiments on a variety of facial performance videos show that our method produces plausible results and approximates the wrinkles accurately. Furthermore, the technique is low-cost and convenient for common users.
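The online stage of this kind of pipeline ultimately evaluates a linear blendshape model; the following minimal sketch (vertex counts, shapes and weights are invented for illustration, and the tracking that produces the weights is omitted) shows the standard delta-blendshape formula into which the reconstructed expressions would be plugged.

```python
import numpy as np

def blend(neutral, blendshapes, weights):
    """Delta blendshape formula: result = neutral + sum_i w_i * (B_i - neutral).

    neutral:     (V, 3) neutral face vertices
    blendshapes: (K, V, 3) expression meshes (e.g. the wrinkled reconstructions)
    weights:     (K,) per-expression activation weights, typically in [0, 1]
    """
    deltas = blendshapes - neutral[None, :, :]
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy example: 4 vertices, 2 expressions.
neutral = np.zeros((4, 3))
shapes = np.stack([np.full((4, 3), 0.1), np.full((4, 3), -0.2)])
print(blend(neutral, shapes, np.array([0.5, 0.25])))
```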

8.
Popular techniques for modeling facial expressions usually rely on shape blending of a series of pre-defined facial models, on feature parameters, or on an anatomy-based facial model. This requires extensive user interaction to construct the pre-defined facial models, the deformation functions, or the anatomy-based facial model. Moreover, existing anatomy-based facial modeling techniques target human facial models and may not be directly applicable to non-human character models. This paper presents an intuitive technique for designing facial expressions using a physics-based deformation approach. The technique requires neither deformation functions associated with facial feature parameters nor a detailed anatomical model of the head. By adjusting the contraction or relaxation of a set of facial muscles, different facial expressions can be obtained. Facial muscles and skin are assumed to be linearly elastic. The boundary element method (BEM) is adopted for evaluating the deformation of the facial skin, which avoids the volumetric elements required by the finite element method (FEM) and the complexity of setting up mass–spring models. Given a polygon mesh of a facial model, a closed volume of the facial mesh is obtained by offsetting the polygon mesh according to a user-defined depth map. Each facial muscle is approximated by a series of muscle polygons on the mesh surface, and deformation of the facial mesh is attained by stretching or compressing these muscle polygons. By pre-computing the inverse of the stiffness matrix, interactive editing of facial expressions can be achieved.
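The interactivity claim at the end rests on the fact that, once a stiffness matrix has been assembled and its inverse precomputed, each new muscle configuration costs only a matrix-vector product. The sketch below illustrates just that step; the BEM assembly itself is omitted, and the matrix, muscle count and load vectors are stand-ins rather than the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_dof = 300  # degrees of freedom on the facial mesh (stand-in value)

# Stand-in for the assembled stiffness matrix, made symmetric positive definite.
A = rng.standard_normal((n_dof, n_dof))
K = A @ A.T + n_dof * np.eye(n_dof)

# Done once, offline: precompute the inverse (a factorization would also work).
K_inv = np.linalg.inv(K)

def deform(muscle_activations, muscle_load_basis):
    """Map muscle contraction/relaxation values to skin displacements.

    muscle_load_basis: (n_muscles, n_dof) load produced by each muscle polygon
                       at unit activation (linear, matching the linearly
                       elastic assumption in the abstract).
    """
    f = muscle_activations @ muscle_load_basis  # total load vector
    return K_inv @ f                            # cheap enough for interactive editing

loads = rng.standard_normal((5, n_dof))         # five hypothetical muscles
print(deform(np.array([0.3, 0.0, -0.1, 0.7, 0.2]), loads)[:5])
```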

9.
Igor S. Pandzic, Graphical Models, 2003, 65(6):385-404
We propose a method for automatically copying facial motion from one 3D face model to another while preserving compliance of the motion with the MPEG-4 Face and Body Animation (FBA) standard. Despite the enormous progress in the field of facial animation, producing a new animatable face from scratch is still a tremendous task for an artist. Although many methods exist to animate a face automatically based on procedural methods, these methods still need to be initialized by defining facial regions or similar, and they lack flexibility because the artist can only obtain the facial motion that a particular algorithm offers. A very common approach is therefore interpolation between key facial expressions, usually called morph targets, containing either speech elements (visemes) or emotional expressions. Following the same approach, the MPEG-4 Facial Animation specification offers a method for interpolating facial motion from key positions, called Facial Animation Tables, which are essentially morph targets corresponding to all possible motions specified in MPEG-4. The problem with this approach is that the artist needs to create a new set of morph targets for each new face model; in the case of MPEG-4 there are 86 morph targets, which is a lot of work to create manually. Our method solves this problem by cloning the morph targets, i.e. by automatically copying the motion of vertices, as well as geometry transforms, from the source face to the target face while maintaining the regional correspondences and the correct scale of motion. It only requires the user to identify a subset of the MPEG-4 Feature Points in the source and target faces. The scale of the movement is normalized with respect to the MPEG-4 normalization units (FAPUs), meaning that the MPEG-4 FBA compliance of the copied motion is preserved. Our method is therefore suitable not only for cloning free facial expressions, but also MPEG-4-compatible facial motion, in particular the Facial Animation Tables. We believe that Facial Motion Cloning offers dramatic time savings to artists producing morph targets for facial animation or MPEG-4 Facial Animation Tables.
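The scale-normalization step described above can be shown in a few lines: a displacement measured on the source face is divided by the source face's FAPU and re-multiplied by the target face's FAPU, so the cloned motion keeps the same FAP amplitude and stays MPEG-4 FBA compliant. The FAPU values and displacement below are invented for illustration.

```python
def clone_displacement(src_disp_mm, src_fapu_mm, dst_fapu_mm):
    """Re-scale one source-face displacement for the target face.

    src_disp_mm : displacement of a source vertex along one axis
    src_fapu_mm : the relevant MPEG-4 normalization unit measured on the source
                  face (e.g. MNS, the mouth-nose separation)
    dst_fapu_mm : the same FAPU measured on the target face
    """
    fap_amplitude = src_disp_mm / src_fapu_mm  # expression intensity in FAPU units
    return fap_amplitude * dst_fapu_mm         # same amplitude on the target geometry

# A 6 mm jaw movement on a source face whose MNS FAPU is 30 mm becomes a
# 4.8 mm movement on a smaller target face whose MNS FAPU is 24 mm.
print(clone_displacement(6.0, 30.0, 24.0))
```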

10.
Synthesizing a 3D Face from Front and Side Photographs   Total citations: 6 (self-citations: 1, by others: 5)
We implemented an interactive tool for face modeling and animation. From a person's front and side photographs, the user can construct a 3D model of the head and, based on this model, produce specific expressions and simple animations. The paper describes in detail the techniques applied in building the system: the geometric representation of the face, deformation from a generic face to a specific face, elastic meshes, a muscle model, full-view texture mapping, and expression extraction.

11.
1. Introduction. Face modeling and animation is one of the most challenging topics in computer graphics. First, the geometry of the human face is extremely complex: its surface not only has countless fine wrinkles but also exhibits subtle variations in color and texture, so building an accurate face model and generating photorealistic faces is very difficult. Second, facial motion results from the combined action of bone, muscle, subcutaneous tissue and skin, and its mechanism is very complex, so generating realistic facial animation is very difficult. In addition, we humans are born with an ability to recognize and …

12.
In human–computer interaction, digital entertainment and related fields, traditional expression-synthesis techniques have difficulty generating realistic, personalized facial expression animation in a stable way. We therefore propose a 3D face modeling and expression animation system based on a single image. The system automatically detects the position of the face in the image, automatically locates key facial landmarks, reconstructs a personalized 3D face model from these landmarks and a morphable model, extends the reconstructed model into a complete head mesh, and drives the reconstructed face model with a sparse-landmark-controlled animation-data mapping method to generate dynamic expression animation. Experimental results show that the method is stable and highly automated, and that the resulting face models and expression animations are quite realistic.
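The abstract does not spell out the sparse-landmark-controlled mapping; one common way to propagate sparse landmark displacements to a full mesh is radial-basis-function interpolation, sketched below purely as an assumed illustration (the Gaussian kernel, its width and the toy data are not from the paper).

```python
import numpy as np

def rbf_coefficients(landmarks, displacements, sigma=0.1):
    """Fit per-landmark coefficients so the field interpolates the landmark motion."""
    d2 = np.sum((landmarks[:, None, :] - landmarks[None, :, :]) ** 2, axis=-1)
    Phi = np.exp(-d2 / (2.0 * sigma ** 2))        # (L, L) Gaussian kernel matrix
    return np.linalg.solve(Phi, displacements)    # (L, 3) coefficients

def apply_rbf(vertices, landmarks, coeffs, sigma=0.1):
    """Displace every mesh vertex by the interpolated landmark motion."""
    d2 = np.sum((vertices[:, None, :] - landmarks[None, :, :]) ** 2, axis=-1)
    Phi = np.exp(-d2 / (2.0 * sigma ** 2))        # (V, L) kernel evaluations
    return vertices + Phi @ coeffs

# Toy data: three landmarks driving a five-vertex patch.
rng = np.random.default_rng(2)
landmarks = rng.random((3, 3))
displacements = 0.05 * rng.standard_normal((3, 3))
vertices = rng.random((5, 3))
coeffs = rbf_coefficients(landmarks, displacements)
print(apply_rbf(vertices, landmarks, coeffs))
```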

13.
A real-time speech-driven synthetic talking face provides an effective multimodal communication interface in distributed collaboration environments. Nonverbal gestures such as facial expressions are important to human communication and should be considered by speech-driven face animation systems. In this paper, we present a framework that systematically addresses facial deformation modeling, automatic facial motion analysis, and real-time speech-driven face animation with expression using neural networks. Based on this framework, we learn a quantitative visual representation of the facial deformations, called the motion units (MUs). A facial deformation can be approximated by a linear combination of the MUs weighted by MU parameters (MUPs). We develop an MU-based facial motion tracking algorithm which is used to collect an audio-visual training database. Then, we construct a real-time audio-to-MUP mapping by training a set of neural networks using the collected audio-visual training database. The quantitative evaluation of the mapping shows the effectiveness of the proposed approach. Using the proposed method, we develop the functionality of real-time speech-driven face animation with expressions for the iFACE system. Experimental results show that the synthetic expressive talking face of the iFACE system is comparable with a real face in terms of the effectiveness of their influences on bimodal human emotion perception.
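At a very coarse level, the audio-to-MUP mapping can be sketched with any small regressor; the feature dimensions, network size and synthetic data below are placeholders rather than the paper's configuration, and the deformation itself is just the MU linear combination stated in the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

# Placeholder audio-visual training data: 13-dim acoustic features per frame
# mapped to 7 motion unit parameters (MUPs).
audio = rng.standard_normal((2000, 13))
mups = audio @ rng.standard_normal((13, 7)) + 0.01 * rng.standard_normal((2000, 7))

# One small feed-forward network as a rough analogue of the trained mapping.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(audio, mups)

# At run time each audio frame yields MUPs; the facial deformation is then the
# linear combination  neutral + sum_k MUP_k * MU_k  described in the abstract.
frame = rng.standard_normal((1, 13))
print(net.predict(frame))
```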

14.
We present a novel performance-driven approach to animating cartoon faces starting from pure 2D drawings. A 3D approximate facial model, automatically built from front- and side-view master frames of the character drawings, enables the animated cartoon faces to be viewed from angles different from that of the input video. The expressive mappings are built by an artificial neural network (ANN) trained on examples of the real face in the video and the cartoon facial drawings in the facial expression graph for a specific character. The learned mapping model enables the resulting facial animation to properly convey the desired expressiveness, instead of merely reproducing the facial actions of the input video sequence. Furthermore, a lit sphere, capturing the lighting in painted artwork of faces, is used to color the cartoon faces in terms of the 3D approximate facial model, reinforcing the hand-drawn appearance of the resulting facial animation. We conducted a series of comparative experiments to test the effectiveness of our method by recreating facial expressions from commercial animation. The comparison results clearly demonstrate the superiority of our method both in generating high-quality cartoon-style facial expressions and in speeding up the animation production of cartoon faces. Copyright © 2011 John Wiley & Sons, Ltd.

15.
Face alive icon     
In this paper, we propose a methodology to synthesize facial expressions from photographs for devices with limited processing power, network bandwidth and display area, referred to as the “LLL” environment. The facial images are reduced to small-sized face alive icons (FAIs). Expressions are decomposed into expression-unrelated facial features and expression-related expressional features, so that common features can be identified and reused across expressions using a discrete model constructed from statistical analysis of a training dataset. Semantic synthesis rules are introduced to reveal the inner relations of expressions. Verified by an experimental prototype system and a usability study, the approach can produce acceptable facial expression images using far less computing, network and storage resources than traditional approaches.

16.
童晶  关华勇 《计算机应用》2007,27(4):1013-1016
Targeting film and animation production, this paper proposes a fast algorithm for building realistic 3D face models with an LS_5000 3D laser scanner. Given only the 3D scanned point cloud of a real actor's face, uncalibrated photographs and a small amount of manual interaction, the algorithm generates a realistic 3D face model of the virtual actor, including the geometric model, the texture model and an animation-oriented deformable model. Experimental results show that the output models are compact and well structured, can be used directly in actual film and animation production, and improve the efficiency of face modeling.

17.
Expressive facial animations are essential to enhance the realism and credibility of virtual characters. Parameter-based animation methods offer precise control over facial configurations, while performance-based animation benefits from the naturalness of captured human motion. In this paper, we propose an animation system that combines the advantages of both approaches. By analyzing a database of facial motion, we create the human appearance space, which provides a coherent and continuous parameterization of human facial movements while encapsulating the coherence of real facial deformations. We present a method to optimally construct an analogous appearance face for a synthetic character. The link between the two appearance spaces makes it possible to retarget facial animation onto a synthetic face from a video source. Moreover, the topological characteristics of the appearance space allow us to detect the principal variation patterns of a face and automatically reorganize them on a low-dimensional control space. The control space acts as an interactive user interface to manipulate the facial expressions of any synthetic face, making it simple and intuitive to generate still facial configurations for keyframe animation as well as complete temporal sequences of facial movements. The resulting animations combine the flexibility of a parameter-based system with the realism of real human motion. Copyright © 2010 John Wiley & Sons, Ltd.

18.
The use of avatars with emotionally expressive faces is potentially highly beneficial to communication in collaborative virtual environments (CVEs), especially when used in a distance learning context. However, little is known about how, or indeed whether, emotions can effectively be transmitted through the medium of a CVE. Given this, an avatar head model with limited but human-like expressive abilities was built, designed to enrich CVE communication. Based on the facial action coding system (FACS), the head was designed to express, in a readily recognisable manner, the six universal emotions. An experiment was conducted to investigate the efficacy of the model. Results indicate that the approach of applying the FACS model to virtual face representations is not guaranteed to work for all expressions of a particular emotion category. However, given appropriate use of the model, emotions can effectively be visualised with a limited number of facial features. A set of exemplar facial expressions is presented.

19.
20.
Image-based animation of facial expressions   Total citations: 1 (self-citations: 0, by others: 1)
We present a novel technique for creating realistic facial animations given a small number of real images and a few parameters for the in-between images. The scheme can also be used for reconstructing facial movies, where the parameters can be automatically extracted from the images. The in-between images are produced without ever generating a three-dimensional model of the face. Since facial motion due to expressions is not mathematically well defined, our approach is based on exploiting image patterns in facial motion. These patterns were revealed by an empirical study which analyzed and compared image motion patterns in facial expressions. The major contribution of this work is showing how parameterized “ideal” motion templates can generate facial movies for different people and different expressions, where the parameters are extracted automatically from the image sequence. To test the quality of the algorithm, image sequences (one of which was taken from a TV news broadcast) were reconstructed, yielding movies hardly distinguishable from the originals. Published online: 2 October 2002. Correspondence to: A. Tal. This work was supported in part by the Israeli Ministry of Industry and Trade and the MOST Consortium.
