Similar Documents
20 similar documents found (search time: 15 ms)
1.
A new approach to generating feature-point-driven facial animation is presented. In the proposed approach, a hypothetical face is used to control the animation of a face model. The hypothetical face is constructed by connecting predefined facial feature points into a net so that each facet of the net is represented by a Coons surface. Deformation of the face model is controlled by changing the shape of the hypothetical face, which is performed by changing the locations of the feature points and their tangents. Experimental results show that this hypothetical-face-based method can generate facial expressions that are visually almost identical to those of a real face.
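
A worked sketch of the surface construction named in the abstract may help: a bilinearly blended Coons patch interpolates four boundary curves, so moving the feature points (and hence the boundary curves) deforms the facet. The code below is a minimal NumPy illustration, not the authors' implementation; the straight boundary edges and the sampling resolution are assumptions made for the example.

```python
import numpy as np

def coons_patch(c0, c1, d0, d1, u, v):
    """Evaluate a bilinearly blended Coons patch at parameters (u, v).

    c0(u), c1(u) are the bottom/top boundary curves; d0(v), d1(v) the
    left/right ones. The curves must meet at shared corner points.
    """
    loft_uv = (1 - v) * c0(u) + v * c1(u)          # blend bottom -> top
    loft_vu = (1 - u) * d0(v) + u * d1(v)          # blend left -> right
    corners = ((1 - u) * (1 - v) * c0(0.0) + u * (1 - v) * c0(1.0)
               + (1 - u) * v * c1(0.0) + u * v * c1(1.0))
    return loft_uv + loft_vu - corners             # Coons formula

# One hypothetical facet of the feature-point net: four corner feature
# points joined by straight boundary edges (an assumption for the example).
p00, p10, p01, p11 = (np.array(p, dtype=float) for p in
                      ([0, 0, 0], [1, 0, 0.2], [0, 1, 0.1], [1, 1, 0]))
c0 = lambda u: (1 - u) * p00 + u * p10
c1 = lambda u: (1 - u) * p01 + u * p11
d0 = lambda v: (1 - v) * p00 + v * p01
d1 = lambda v: (1 - v) * p10 + v * p11

facet = np.array([[coons_patch(c0, c1, d0, d1, u, v)
                   for u in np.linspace(0, 1, 5)]
                  for v in np.linspace(0, 1, 5)])
print(facet.shape)  # (5, 5, 3): sampled points of the facet surface
```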

2.
This paper provides a comprehensive survey of techniques for human facial modeling and animation. The survey is carried out from two perspectives: facial modeling, which concerns how to produce 3D face models, and facial animation, which concerns how to synthesize dynamic facial expressions. To generate an individual face model, we can either individualize a generic model or combine face models from an existing face collection. With respect to facial animation, we further categorize the techniques into simulation-based, performance-driven and shape-blend-based approaches. The strengths and weaknesses of the techniques within each category are discussed, along with their applications. In addition, a brief historical review of the evolution of these techniques is provided. Limitations and future trends are discussed. Conclusions are drawn at the end of the paper.

3.
This paper describes a technique for the automatic adaptation of a canonical facial model to data obtained by a 3D laser scanner. The facial model is a B-spline surface with 13×16 control points. We introduce a technique that fits this canonical model to the scanned data while taking the requirements of facial expression animation into account. The animation of facial expressions is based on the facial action coding system (FACS). Using B-splines in combination with FACS, we automatically create the impression of a moving skin. To increase the realism of the animation we map textural information onto the B-spline surface.
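
As a rough illustration of the canonical model's parameterization, the sketch below evaluates a bicubic B-spline surface defined over a 13×16 control grid with SciPy. It is not the paper's fitting procedure: the control values are random placeholders, only a single height channel is shown rather than full 3D control points, and the clamped uniform knot vectors are an assumption.

```python
import numpy as np
from scipy.interpolate import bisplev

def clamped_knots(n_ctrl, degree):
    """Clamped uniform knot vector for n_ctrl control points of given degree."""
    interior = np.linspace(0.0, 1.0, n_ctrl - degree + 1)[1:-1]
    return np.concatenate([np.zeros(degree + 1), interior, np.ones(degree + 1)])

# 13x16 control grid as in the canonical model; random placeholder values
# stand in for control points fitted to the scan (one coordinate channel only).
rng = np.random.default_rng(0)
nu, nv, degree = 13, 16, 3
ctrl = rng.standard_normal((nu, nv))

tck = (clamped_knots(nu, degree), clamped_knots(nv, degree),
       ctrl.ravel(), degree, degree)
u = np.linspace(0.0, 1.0, 40)
v = np.linspace(0.0, 1.0, 50)
surface = bisplev(u, v, tck)      # 40x50 samples of the spline surface
print(surface.shape)
```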

4.
Facial expression image warping based on MPEG-4
To generate natural and realistic facial expressions in real time, a facial expression image warping method based on the MPEG-4 facial animation framework is proposed. The method first uses a face alignment tool to extract 88 feature points from a face photograph. On this basis, a standard face mesh is calibrated and deformed to produce a person-specific triangle mesh. The relevant facial key feature points and their nearby associated feature points are then moved according to the facial animation parameters (FAPs), while ensuring that the topology of the face triangle mesh remains unchanged under the action of multiple FAPs. Finally, the facial texture of every deformed triangle region is filled in by an affine transform, yielding the facial expression image defined by the FAPs. The input of the method is a neutral face photograph and a set of facial animation parameters; the output is the corresponding facial expression image. To synthesize subtle expression movements and a virtual talking head, an algorithm for generating eye-gaze movements and inner-mouth texture details is also designed. Subjective evaluation on a 5-point mean opinion score (MOS) scale shows that the expression images generated by the proposed warping method achieve a naturalness score of 3.67. Experiments on virtual talking-head synthesis show that the method runs in real time, with an average processing speed of 66.67 fps on an ordinary PC, making it suitable for real-time video processing and facial animation generation.
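
The texture-filling step the abstract describes (an affine transform per deformed triangle) can be sketched with OpenCV as below. This is a generic per-triangle warp, not the paper's code; the function name, the bounding-box cropping, and the border/interpolation settings are choices made for the example.

```python
import cv2
import numpy as np

def warp_triangle(src_img, dst_img, src_tri, dst_tri):
    """Fill one deformed triangle of dst_img with texture from src_img.

    src_tri / dst_tri: 3x2 arrays of pixel coordinates (the neutral and
    FAP-displaced positions of a mesh triangle). Works on bounding boxes
    to keep the warp cheap.
    """
    src_tri = np.float32(src_tri)
    dst_tri = np.float32(dst_tri)
    x1, y1, w1, h1 = cv2.boundingRect(src_tri)
    x2, y2, w2, h2 = cv2.boundingRect(dst_tri)
    src_crop = src_img[y1:y1 + h1, x1:x1 + w1]
    t_src = src_tri - np.float32([x1, y1])
    t_dst = dst_tri - np.float32([x2, y2])

    # Affine map taking the neutral triangle onto the deformed one.
    m = cv2.getAffineTransform(t_src, t_dst)
    warped = cv2.warpAffine(src_crop, m, (w2, h2), flags=cv2.INTER_LINEAR,
                            borderMode=cv2.BORDER_REFLECT_101)

    # Paste only the pixels inside the deformed triangle.
    mask = np.zeros((h2, w2), dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.int32(t_dst), 1)
    roi = dst_img[y2:y2 + h2, x2:x2 + w2]
    roi[mask > 0] = warped[mask > 0]
```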

5.
A real-time speech-driven synthetic talking face provides an effective multimodal communication interface in distributed collaboration environments. Nonverbal gestures such as facial expressions are important to human communication and should be considered by speech-driven face animation systems. In this paper, we present a framework that systematically addresses facial deformation modeling, automatic facial motion analysis, and real-time speech-driven face animation with expressions using neural networks. Based on this framework, we learn a quantitative visual representation of the facial deformations, called the motion units (MUs). A facial deformation can be approximated by a linear combination of the MUs weighted by MU parameters (MUPs). We develop an MU-based facial motion tracking algorithm which is used to collect an audio-visual training database. Then, we construct a real-time audio-to-MUP mapping by training a set of neural networks on the collected audio-visual training database. The quantitative evaluation of the mapping shows the effectiveness of the proposed approach. Using the proposed method, we develop the functionality of real-time speech-driven face animation with expressions for the iFACE system. Experimental results show that the synthetic expressive talking face of the iFACE system is comparable with a real face in terms of its influence on bimodal human emotion perception.
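
To make the MU/MUP relationship concrete, the following sketch approximates a facial deformation as a linear combination of motion units and recovers the MU parameters by least squares. The MU basis in the paper is learned from data; here it is a random placeholder, and the dimensions are arbitrary.

```python
import numpy as np

# Placeholder dimensions: 500 vertices (x, y, z flattened) and 7 motion units.
rng = np.random.default_rng(0)
n_coords, n_mus = 3 * 500, 7
mu_basis = rng.standard_normal((n_coords, n_mus))   # learned MUs (placeholder)
mup_true = rng.uniform(-1.0, 1.0, n_mus)            # ground-truth MUPs

# A facial deformation is approximated as a linear combination of the MUs.
deformation = mu_basis @ mup_true

# Given an observed deformation, recover the MUPs by least squares.
mup_est, *_ = np.linalg.lstsq(mu_basis, deformation, rcond=None)
print(np.allclose(mup_est, mup_true))                # True
```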

6.
With the development of CG technology, images of the human body shape and motion can be generated by computer. As society moves into the information age, adding a human figure to human-machine dialogue makes the dialogue more stable and smooth. This paper further explores, from a technical perspective, the simulation of human facial expressions, particularly the generation of natural and varied expressions.

7.
8.
Synthesizing dynamic facial expressions using active appearance models
When a facial expression changes, the facial texture changes accordingly. To simulate this dynamic expression change conveniently and effectively, a facial expression synthesis method based on the active appearance model is proposed. The relationship between facial expressions and the face shape and appearance parameters is first learned offline, and the learned result is then used to synthesize expressions for an input face image. To address blurred eyes and teeth in the synthesized images, synthesized eye images and a teeth template are used to replace the blurred texture. Experimental results show that the method can synthesize expression images of different intensities and types, and that the synthesized eye images not only enhance the realism of the expressions but also facilitate eye animation.

9.
杨璞, 易法令, 刘王飞, 杨远发. 《微机发展》, 2006, 16(11): 131-133
The human face is an important channel for human communication and the carrier of complex expressions such as joy, anger, sorrow and happiness, as well as of speech. The construction and deformation of realistic 3D face models is therefore a research focus in computer graphics, and producing realistic facial expressions and motions on a 3D face model is one of its difficult problems. This paper introduces a Dirichlet free-form deformation algorithm (DFFD), based on Delaunay triangulation and Dirichlet/Voronoi diagrams, to solve this problem. DFFD is described in detail and, following the MPEG-4 facial definition parameters, applied to deform a generic face. A hierarchical control scheme that combines facial definition parameters (FDPs) and facial animation parameters (FAPs) during face deformation is also proposed. This two-level control-point setup produces smooth deformation of the 3D face model, so that various facial expressions can be rendered smoothly and accurately.

10.
To synthesize real-time and realistic facial animation, we present an effective algorithm that combines image- and geometry-based methods for facial animation simulation. Because the expression coding system defines numerous motion units, we present a simplified motion unit based on the basic facial expressions, and construct the corresponding basic actions for a head model. As image features are difficult to obtain with the performance-driven method, we develop an automatic image feature recognition method based on statistical learning, and a semi-automatic expression-image labeling method with rotation-invariant face detection, which improve the accuracy and efficiency of expression feature identification and training. After facial animation retargeting, each basic action weight needs to be computed and mapped automatically. We apply the blendshape method to construct and train the corresponding expression database for each basic action, and adopt the least squares method to compute the corresponding control parameters for facial animation. Moreover, the diffuse and specular light distributions are pre-integrated using a physically based method to improve the plausibility and efficiency of facial rendering. Our work simplifies the facial motion units, optimizes the statistical training and recognition process for facial animation, solves for the expression parameters, and simulates the subsurface scattering effect in real time. Experimental results indicate that our method is effective and efficient, and suitable for computer animation and interactive applications.
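
A minimal sketch of the parameter-solving step: given per-basic-action blendshape offsets, the weights that best reproduce a target face are found by least squares. The abstract states only "least squares"; the non-negativity constraint used below (SciPy's nnls) is an added assumption that keeps action weights physically meaningful, and all data are random placeholders.

```python
import numpy as np
from scipy.optimize import nnls

# Placeholder data: columns of `offsets` hold each basic action's displacement
# from the neutral head model (3 * n_vertices values, flattened).
rng = np.random.default_rng(1)
n_coords, n_actions = 3 * 300, 6
neutral = rng.standard_normal(n_coords)
offsets = rng.standard_normal((n_coords, n_actions))
weights_true = np.array([0.8, 0.0, 0.3, 0.0, 0.5, 0.1])
target = neutral + offsets @ weights_true            # tracked target face

# Solve target - neutral ~= offsets @ w. The non-negativity constraint is an
# assumption added here; the abstract specifies only "least squares".
weights, residual = nnls(offsets, target - neutral)
print(np.round(weights, 2))                          # close to weights_true
```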

11.
A parameterized expression mapping method
Synthesizing facial expression details is a key step in generating highly realistic facial animation. Traditional expression mapping techniques only consider the positional changes of facial feature points and cannot handle expression details; recent work considers the synthesis of expression texture details, but can only obtain them one-to-one from existing expression samples, which greatly limits its applicability. By effectively combining wavelet pyramid decomposition and expression ratio images, a new expression synthesis algorithm is proposed. The algorithm can both map expression texture details and generate expression animation, with the degree of expression exaggeration controlled parametrically; it is simple and effective.
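
The expression-ratio-image half of the combination can be sketched as follows: the ratio of an expressive to a neutral image carries the texture detail, and a scalar controls exaggeration. This single-scale sketch omits the wavelet pyramid decomposition and the alignment/warping step, and the particular exaggeration parameterization shown is an assumption.

```python
import numpy as np

def transfer_expression(src_neutral, src_expr, dst_neutral, alpha=1.0, eps=1e-3):
    """Transfer expression texture detail with an expression ratio image.

    All images are float arrays in [0, 1] and are assumed to be warped into
    pixel correspondence already (the alignment step is omitted here).
    alpha scales the exaggeration: 0 = none, 1 = as captured, >1 = exaggerated.
    """
    ratio = (src_expr + eps) / (src_neutral + eps)   # expression ratio image
    ratio = 1.0 + alpha * (ratio - 1.0)              # parametric exaggeration
    return np.clip(dst_neutral * ratio, 0.0, 1.0)
```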

12.
In this paper, we present a system for real-time performance-driven facial animation. With the system, the user can control the facial expression of a digital character by acting out the desired facial action in front of an ordinary camera. First, we create a muscle-based 3D face model. The muscle actuation parameters are used to animate the face model. To increase the realism of the facial animation, the orbicularis oris in our face model is divided into an inner part and an outer part. We also establish the relationship between jaw rotation and facial surface deformation. Second, a real-time facial tracking method is employed to track the facial features of a performer in the video. Finally, the tracked facial feature points are used to estimate muscle actuation parameters that drive the face model. Experimental results show that our system runs in real time and outputs realistic facial animations. Compared with most existing performance-based facial animation systems, ours does not require facial markers, intrusive lighting, or special scanning equipment, thus it is inexpensive and easy to use.

13.
罗常伟, 於俊, 汪增福. 《自动化学报》, 2014, 40(10): 2245-2252
This paper describes a real-time video-driven facial animation synthesis system. With the system, a user can control the expression of a virtual face simply by performing facial actions in front of a camera. First, a muscle-based 3D face model is built, and muscle actuation parameters are used to control facial deformation. To improve the realism of the animation, the orbicularis oris is divided into an outer ring and an inner ring, and the relationship between facial deformation and jaw rotation is established. Then, a real-time feature-point tracking algorithm is used to track the facial feature points in the video. Finally, the tracking results are converted into muscle actuation parameters to drive the facial animation. Experimental results show that the system runs in real time and that the synthesized animation is highly realistic. Compared with most existing video-driven facial animation methods, the system requires neither facial markers nor 3D scanning equipment, which makes it much easier to use.

14.
The human face is a complex biomechanical system, and non-linearity is a remarkable feature of facial expressions. In blendshape animation, however, the facial expression space is linearized by assuming a linear relationship between the blending weights and the deformed face geometry, which causes a loss of realism in facial animation. To synthesize more realistic facial animation, this relationship should be allowed to be non-linear so as to attain the greatest generality and fidelity of facial expressions. Unfortunately, few existing works address how to measure this non-linear relationship. In this paper, we propose an optimization scheme that automatically learns the non-linear relationship of blendshape facial animation from captured facial expressions. Experiments show that the learned non-linear relationship is consistent with the non-linearity of facial expressions and synthesizes more realistic facial animation than the linear one.

15.
刘洁, 李毅, 朱江平. 《计算机应用》, 2021, 41(3): 839-844
To generate 3D virtual-human animation with rich expressions and fluent motion, a method is proposed that synchronously captures facial expressions and body pose with two cameras. First, temporal synchronization of the two cameras is achieved with a Transmission Control Protocol (TCP) network timestamp scheme, and spatial synchronization with Zhang's calibration method. The two cameras then capture facial expressions and body pose separately. For facial expression capture, 2D feature points are extracted from the images and regressed to facial action coding system (FACS) action units in preparation for expression animation; using standard 3D head coordinates as a reference and the camera intrinsics, head pose is estimated with the efficient perspective-n-point (EPnP) algorithm, and the facial expression information is then matched with the head pose estimate. For body pose capture, the occlusion-robust pose-map (ORPM) method is used to compute the body pose and output the position and rotation angle of each skeletal joint. Finally, the constructed 3D virtual-human model is used in Unreal Engine 4 (UE4) to demonstrate the data-driven animation. Experimental results show that the method captures facial expressions and body pose synchronously, reaches a frame rate of 20 fps in the tests, and can generate natural and realistic 3D animation in real time.
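
The head-pose step maps matched 3D reference points and 2D landmarks to a rotation and translation with EPnP. A minimal sketch using OpenCV's solvePnP is shown below; the function name, the zero distortion coefficients, and the point format are assumptions made for illustration, not the paper's code.

```python
import cv2
import numpy as np

def estimate_head_pose(model_points_3d, image_points_2d, camera_matrix):
    """Head pose from matched 3D reference points and 2D landmarks via EPnP.

    model_points_3d: Nx3 canonical head coordinates (N >= 4).
    image_points_2d: Nx2 detected landmark positions in the image.
    camera_matrix:   3x3 intrinsics (e.g. from Zhang's calibration).
    """
    dist_coeffs = np.zeros(5)        # assume an undistorted image
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(model_points_3d, dtype=np.float64),
        np.asarray(image_points_2d, dtype=np.float64),
        camera_matrix, dist_coeffs, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("EPnP pose estimation failed")
    rotation, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 matrix
    return rotation, tvec
```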

16.
We present an algorithm for generating facial expressions for a continuum of pure and mixed emotions of varying intensity. Based on the observation that in natural interaction among humans, shades of emotion are encountered far more frequently than expressions of the basic emotions, a method to generate more than Ekman's six basic emotions (joy, anger, fear, sadness, disgust and surprise) is required. To this end, we have adapted the algorithm proposed by Tsapatsoulis et al. [1] to a physics-based facial animation system and a single, integrated emotion model. The physics-based facial animation system was combined with an equally flexible and expressive text-to-speech synthesis system, based on the same emotion model, to form a talking head capable of expressing non-basic emotions of varying intensities. With a variety of life-like intermediate facial expressions captured as snapshots from the system, we demonstrate the appropriateness of our approach.

17.
This paper presents a hierarchical multi-state pose-dependent approach for facial feature detection and tracking under varying facial expression and face pose. For effective and efficient representation of feature points, a hybrid representation that integrates Gabor wavelets and gray-level profiles is proposed. To model the spatial relations among feature points, a hierarchical statistical face shape model is proposed to characterize both the global shape of human face and the local structural details of each facial component. Furthermore, multi-state local shape models are introduced to deal with shape variations of some facial components under different facial expressions. During detection and tracking, both facial component states and feature point positions, constrained by the hierarchical face shape model, are dynamically estimated using a switching hypothesized measurements (SHM) model. Experimental results demonstrate that the proposed method accurately and robustly tracks facial features in real time under different facial expressions and face poses.
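
For the Gabor half of the hybrid feature representation, a small sketch of a multi-orientation Gabor filter bank (OpenCV) is given below; a feature point can then be described by sampling the response maps around its location. The kernel parameters and function name are illustrative assumptions, and the gray-level-profile half is omitted.

```python
import cv2
import numpy as np

def gabor_responses(gray, n_orientations=8, ksize=21, sigma=4.0,
                    lambd=10.0, gamma=0.5):
    """Multi-orientation Gabor filter responses at every pixel of a gray image.

    Parameter values are illustrative only; a real feature descriptor would
    sample these maps at (and around) each candidate feature point.
    """
    responses = []
    for i in range(n_orientations):
        theta = np.pi * i / n_orientations
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                    lambd, gamma, psi=0)
        responses.append(cv2.filter2D(gray, cv2.CV_32F, kernel))
    return np.stack(responses, axis=-1)   # H x W x n_orientations
```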

18.
Graphical Models, 2014, 76(3): 172-179
We present a performance-based facial animation system capable of running on mobile devices at real-time frame rates. A key component of our system is a novel regression algorithm that accurately infers the facial motion parameters from 2D video frames of an ordinary web camera. Compared with the state-of-the-art facial shape regression algorithm [1], which takes a two-step procedure to track facial animations (i.e., first regressing the 3D positions of facial landmarks, and then computing the head poses and expression coefficients), we directly regress the head poses and expression coefficients. This one-step approach greatly reduces the dimension of the regression target and significantly improves the tracking performance while preserving the tracking accuracy. We further propose to collect the training images of the user under different lighting environments, and make use of the data to learn a user-specific regressor, which can robustly handle lighting changes that frequently occur when using mobile devices.

19.
A linear deformation-fitting method for facial expressions
A linear deformation-fitting method for facial expression synthesis is proposed. Based on the idea of approximating a face image with a linear combination of morphable-model examples, the method determines the shape and texture information of the synthesized expression image; its steps are simple and easy to implement. The method can effectively synthesize expressive images from a neutral-expression face image, and the resulting expressions are natural, realistic and convincing. More importantly, it can synthesize an expression image with an open mouth showing teeth from a neutral image with the mouth closed, overcoming a limitation of most current facial expression synthesis methods.

20.
We present techniques for improving performance driven facial animation, emotion recognition, and facial key-point or landmark prediction using learned identity invariant representations. Established approaches to these problems can work well if sufficient examples and labels for a particular identity are available and factors of variation are highly controlled. However, labeled examples of facial expressions, emotions and key-points for new individuals are difficult and costly to obtain. In this paper we improve the ability of techniques to generalize to new and unseen individuals by explicitly modeling previously seen variations related to identity and expression. We use a weakly-supervised approach in which identity labels are used to learn the different factors of variation linked to identity separately from factors related to expression. We show how probabilistic modeling of these sources of variation allows one to learn identity-invariant representations for expressions which can then be used to identity-normalize various procedures for facial expression analysis and animation control. We also show how to extend the widely used techniques of active appearance models and constrained local models through replacing the underlying point distribution models which are typically constructed using principal component analysis with identity–expression factorized representations. We present a wide variety of experiments in which we consistently improve performance on emotion recognition, markerless performance-driven facial animation and facial key-point tracking.
