Similar Documents
Found 20 similar documents (search time: 937 ms)
1.
To synthesize realistic facial animation, a real-time 3D virtual face animation method based on a single camera is proposed. First, a 3D face model of the user is reconstructed from a single frontal face image; face images with varying pose, illumination, and expression are synthesized from this model, and these images are used to train a user-specific local texture model. Then, face video is captured with the camera, and facial feature points are tracked using the user-specific local texture model. Finally, blendshape coefficients are estimated from the tracking results and the 3D key shapes, and facial animation is synthesized through the blendshape model. Experimental results show that the method synthesizes realistic 3D facial animation in real time and requires only an ordinary camera, making it well suited to ordinary users.
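The final step of this pipeline, estimating blendshape coefficients from tracked feature points and 3D key shapes, can be posed as a constrained least-squares problem. Below is a minimal sketch under that reading; all names are illustrative, and a production solver would use a properly bounded solver (e.g. `scipy.optimize.lsq_linear`) rather than clipping after the fact:

```python
import numpy as np

def estimate_blendshape_weights(landmarks, neutral, key_shapes):
    """Solve min_w ||(neutral + B w) - landmarks||^2 with w in [0, 1].

    landmarks:  (L, 3) tracked 3D feature points
    neutral:    (L, 3) corresponding points on the neutral key shape
    key_shapes: (K, L, 3) corresponding points on each blendshape target
    """
    K = key_shapes.shape[0]
    # Each column of B is one blendshape delta, flattened to a vector.
    B = (key_shapes - neutral[None]).reshape(K, -1).T   # (3L, K)
    d = (landmarks - neutral).reshape(-1)               # (3L,)
    w, *_ = np.linalg.lstsq(B, d, rcond=None)
    # Crude box constraint; a bounded QP would enforce it during the solve.
    return np.clip(w, 0.0, 1.0)

def synthesize(neutral_mesh, delta_meshes, w):
    """Blendshape synthesis: V = V0 + sum_k w_k * dV_k."""
    return neutral_mesh + np.tensordot(w, delta_meshes, axes=1)
```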

2.
Research on Kinect-Driven Facial Animation Synthesis (cited by: 1; self-citations: 0, other citations: 1)
3D facial animation synthesis can be applied in many fields, such as virtual reality and character control. A Kinect-based facial animation synthesis method is therefore proposed. A face-tracking client uses the Kinect to track and recognize the user's facial expression and obtain facial animation parameters, which are sent over a socket to a facial animation synthesis server. The server looks up an MPEG-4 face animation definition table and controls the deformation of the face model, synthesizing in real time 3D facial animation that matches the user's expression. Experimental results show that the method synthesizes high-quality 3D facial animation while meeting real-time requirements; compared with existing techniques, it is structurally simple, easy to deploy, and readily extensible.
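The client/server split described here reduces, on the wire, to streaming a small parameter vector per frame. A minimal sketch of the transport only, assuming a length-prefixed JSON framing (the abstract does not specify the actual wire format):

```python
import json
import socket
import struct

def send_fap_frame(sock, fap_values):
    """Send one frame of facial animation parameters, length-prefixed."""
    payload = json.dumps({"faps": fap_values}).encode("utf-8")
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def _recv_exact(conn, n):
    """Read exactly n bytes (recv may return short reads)."""
    data = b""
    while len(data) < n:
        chunk = conn.recv(n - len(data))
        if not chunk:
            raise ConnectionError("socket closed")
        data += chunk
    return data

def recv_fap_frame(conn):
    """Receive one length-prefixed frame on the animation server."""
    (length,) = struct.unpack("!I", _recv_exact(conn, 4))
    return json.loads(_recv_exact(conn, length))["faps"]
```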

3.
3D Facial Expression Simulation Using Bilinear Analysis (cited by: 1; self-citations: 0, other citations: 1)
Realistic 3D facial expression synthesis is a hot topic in computer applications. A bilinear-analysis-based method for generating 3D facial expressions is proposed. Based on region-by-region statistical analysis of face data, a bilinear statistical model that separates expression from identity is built. A muscle-driven scheme is designed for this model, in which muscle parameters drive the corresponding statistical expression parameters to generate rich expressions. For an input 2D or 3D face, model fitting is performed automatically using a morphable model. Experimental results show that the method can simulate a variety of highly realistic facial expressions.
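A bilinear model of this kind factors a face into independent identity and expression coordinates: a core tensor, typically obtained by a higher-order SVD of registered training scans, is contracted with the two parameter vectors to reconstruct a shape. A minimal sketch of the synthesis step (names and shapes are illustrative):

```python
import numpy as np

def bilinear_face(core, w_id, w_expr):
    """Reconstruct a face from identity and expression coordinates.

    core:   (V, I, E) core tensor, V = 3 * n_vertices (flattened xyz)
    w_id:   (I,) identity coefficients
    w_expr: (E,) expression coefficients
    """
    # f_v = sum_{i,e} core[v, i, e] * w_id[i] * w_expr[e]
    return np.einsum("vie,i,e->v", core, w_id, w_expr)
```

Because the two factors are separated, an expression coefficient vector fitted on one person can be reused to drive a different identity, which is what makes such models attractive for expression transfer.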

4.
Realistic 3D Facial Animation (cited by: 10; self-citations: 0, other citations: 10)
张青山  陈国良 《软件学报》2003,14(3):643-650
The construction and animation of realistic 3D face models is an important research topic in computer graphics. Simulating facial motion on a 3D face model in real time, producing realistic expressions and actions, is one of its difficulties. A real-time 3D facial animation method is proposed that partitions the face model into several functional regions whose motions are relatively independent, and then simulates the motion of each region using a proposed hybrid technique combining weighted Dirichlet free-form deformation (DFFD) and rigid-body motion simulation. Interactions between the motions of neighboring regions are modeled through shared control points. In this method, motion of the face model is driven by moving the control points. To simplify driving the model, two high-level driving methods are proposed: one based on MPEG-4 facial animation parameter (FAP) streams and one based on a muscle model. Both methods offer high realism as well as good computational performance, and can simulate the expressions and actions of real faces in real time.
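The control-point-driven structure of such a scheme is easy to show in isolation. True DFFD spreads control-point displacements with Sibson (natural-neighbor) coordinates; the sketch below substitutes normalized inverse-distance weights purely to illustrate the mechanism, which is an acknowledged simplification:

```python
import numpy as np

def weighted_deform(vertices, controls, control_deltas, eps=1e-8):
    """Move mesh vertices by a weighted blend of control-point displacements.

    vertices: (N, 3); controls: (C, 3); control_deltas: (C, 3)
    """
    # (N, C) inverse-distance weights, normalized per vertex.
    d = np.linalg.norm(vertices[:, None, :] - controls[None, :, :], axis=2)
    w = 1.0 / (d + eps)
    w /= w.sum(axis=1, keepdims=True)
    return vertices + w @ control_deltas
```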

5.
3D face recovery is a difficult problem in visual interaction, and a new method for recovering 3D faces from video in real time is proposed. The method uses an active shape model (ASM) to extract and track facial feature points, ensuring the validity and consistency of 3D shape recovery and feature tracking; it constructs 3D deformation bases using a non-rigid shape and motion estimation method, effectively accommodating the diversity of face shape variation; and it estimates face pose and 3D deformation-basis parameters with a nonlinear optimization algorithm, achieving real-time recovery of 3D face shape and pose. Experimental results show that the method not only recovers 3D face models from video in real time, but also effectively tracks changes in face pose.
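The nonlinear-optimization step, jointly estimating pose and deformation-basis coefficients from 2D feature tracks, can be sketched under a weak-perspective camera. This is one plausible formulation, not necessarily the paper's; parameterization and initialization choices below are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def fit_pose_and_shape(pts2d, mean_shape, basis, n_basis):
    """pts2d: (L, 2) tracked points; mean_shape: (L, 3); basis: (K, L, 3)."""
    def residuals(p):
        rotvec, scale, t = p[:3], p[3], p[4:6]
        alpha = p[6:6 + n_basis]
        # Deformable shape: mean plus weighted deformation bases.
        shape = mean_shape + np.tensordot(alpha, basis[:n_basis], axes=1)
        R = Rotation.from_rotvec(rotvec).as_matrix()
        proj = scale * (shape @ R.T)[:, :2] + t   # weak-perspective projection
        return (proj - pts2d).ravel()

    p0 = np.zeros(6 + n_basis)
    p0[3] = 1.0                                   # initial scale
    return least_squares(residuals, p0).x
```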

6.
A single-video-driven method for 3D face models is described. Building on the traditional muscle model, the method establishes a mouth motion control model based on mechanism theory, according to the movement characteristics of the mouth. Feature-point motion curves obtained by tracking a video image sequence then drive the mesh vertices of the eyes, mouth, and the rest of the face, producing realistic facial expressions. Simulation results show that the method yields lifelike facial expression animation.
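Propagating tracked feature-point motion to the surrounding mesh vertices is, in general, a scattered-data interpolation problem. One conventional choice (not the paper's mechanism-based model, which is specific to the mouth) is Gaussian radial basis functions; a minimal sketch:

```python
import numpy as np

def rbf_propagate(feat_pts, feat_deltas, vertices, sigma=0.05):
    """Interpolate feature-point displacements to all mesh vertices.

    feat_pts: (F, 3); feat_deltas: (F, 3); vertices: (N, 3)
    """
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    # Solve Phi w = deltas so the interpolant reproduces the features exactly.
    w = np.linalg.solve(kernel(feat_pts, feat_pts), feat_deltas)  # (F, 3)
    return vertices + kernel(vertices, feat_pts) @ w
```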

7.
A new 3D facial animation model is proposed that is based on a 3D morphable face model and compatible with MPEG-4. Uniform mesh resampling is used to establish correspondence between prototype 3D faces, and the facial animation rules defined in MPEG-4 are applied to drive the 3D model and automatically generate realistic facial animation. Given a single face image, the model automatically reconstructs a realistic 3D face and generates facial animation driven by FAP parameters.
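FAP-driven deformation conventionally goes through a face animation table (FAT) that maps each FAP to its affected vertices and displacement directions, scaled by the face's own FAP units. A minimal dictionary-based sketch of that lookup; the table layout here is an assumption, and real MPEG-4 FATs also support rotations and piecewise-linear responses:

```python
import numpy as np

# Hypothetical FAT entry: FAP id -> list of (vertex index, direction) pairs.
# FAP 3 (open_jaw) pulls chin-area vertices downward.
fat = {
    3: [(101, np.array([0.0, -1.0, 0.0])),
        (102, np.array([0.0, -0.8, 0.2]))],
}

def apply_faps(vertices, fap_values, fapu):
    """vertices: (N, 3); fap_values: {fap_id: value}; fapu: FAP unit scale."""
    out = vertices.copy()
    for fap_id, value in fap_values.items():
        for v_idx, direction in fat.get(fap_id, []):
            out[v_idx] += value * fapu * direction
    return out
```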

8.
A 3D Face Modeling Method Based on Plane-to-Surface Deformation (cited by: 1; self-citations: 1, other citations: 0)
Face parameters are obtained from a single frontal photograph or two orthogonal photographs, and photo-based extraction of feature points and contour parameters is used to establish alignment with a prototype 3D face model. The 3D face parameters defined in MPEG-4 then drive the 3D model to generate a realistic face, and a plane-to-surface face deformation model is proposed. Results confirm that the method greatly reduces computational cost; it not only enables effective, realistic 3D face modeling, but also deforms simply and smoothly, giving it broad application prospects.
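The alignment between extracted photo landmarks and the prototype model can be bootstrapped with a least-squares similarity (Procrustes/Umeyama) fit between the 2D feature points and the prototype's projected landmarks. A minimal sketch, assuming correspondences are already known:

```python
import numpy as np

def similarity_align(src, dst):
    """Least-squares similarity transform mapping src -> dst, both (L, 2).

    Returns scale s, rotation R (2x2), translation t with dst ~ s*R@src + t.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])  # guard reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```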

9.
To improve the post-production efficiency of computer-synthesized facial expression animation, a spatio-temporal editing method for facial expression animation is proposed. First, Laplacian-based mesh deformation propagates the user's edits across the whole face model in the spatial domain, preserving the geometric details of the neutral face model and thereby improving the realism of the synthesized expression. Then, a Gaussian function propagates the edits to neighboring expression frames in the temporal domain, maintaining smooth transitions so that the synthesized animation remains consistent with the originally given data. The method gives the user local control over editing: the user can specify the range of influence, within which the edits propagate naturally. Experimental results show that the synthesized facial expression animation is natural and realistic, and that the method effectively improves the editing efficiency of data-driven facial expression animation.
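The temporal half of this scheme is simple to illustrate: an edit delta applied at frame t0 falls off over neighboring frames with a Gaussian weight, which keeps transitions smooth and leaves distant frames untouched. A minimal sketch (the spatial Laplacian propagation would supply `delta` itself; the falloff width is an assumption):

```python
import numpy as np

def propagate_edit(frames, t0, delta, sigma=4.0):
    """frames: (T, N, 3) vertex animation; delta: (N, 3) edit made at frame t0."""
    t = np.arange(frames.shape[0])
    w = np.exp(-((t - t0) ** 2) / (2.0 * sigma ** 2))  # Gaussian falloff
    return frames + w[:, None, None] * delta[None]
```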

10.
A Texture-Feature-Oriented Method for Realistic 3D Facial Animation (cited by: 2; self-citations: 0, other citations: 2)
Texture change is an important component of facial expression, yet traditional facial animation methods usually apply only simple stretching transformations to the texture image, ignoring changes in fine facial texture features such as wrinkles and dimples. This paper proposes a realistic 3D facial animation method oriented to texture-feature changes. It introduces the concept of the Partial Expression Ratio Image (PERI) and a method for acquiring it, and on this basis further presents an MPEG-4-oriented PERI parameterization and a multi-directional PERI method for 3D facial animation. The former, by integrating MPEG-4 facial animation parameters (FAPs), achieves a parameterized representation of fine expression features in facial animation; the latter, through multi-directional PERI texture-feature adjustment, gives the 3D face model good fine expression features from different viewing angles. The proposed method overcomes the shortcoming of traditional facial animation, which considers only surface deformation control and ignores texture change, achieving texture-oriented, realistic 3D facial animation with fine expression details. Experiments show that the method effectively captures texture-change details and improves the realism of facial animation.
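An expression ratio image captures illumination change due to wrinkling as a per-pixel ratio between expression and neutral images of the same person, which can then be multiplied onto another texture. A minimal sketch of the ratio computation and transfer; the region masking, alignment, and multi-directional handling described in the abstract are omitted:

```python
import numpy as np

def expression_ratio_image(expr_img, neutral_img, eps=1e-3):
    """Per-pixel ratio R = I_expr / I_neutral; images as float arrays in [0, 1]."""
    return expr_img / np.maximum(neutral_img, eps)

def transfer_expression(target_neutral, ratio):
    """Apply the captured ratio to a new neutral texture."""
    return np.clip(target_neutral * ratio, 0.0, 1.0)
```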

11.
In this paper, we present a system for real-time performance-driven facial animation. With the system, the user can control the facial expression of a digital character by acting out the desired facial action in front of an ordinary camera. First, we create a muscle-based 3D face model. The muscle actuation parameters are used to animate the face model. To increase the realism of the facial animation, the orbicularis oris in our face model is divided into an inner part and an outer part. We also establish the relationship between jaw rotation and facial surface deformation. Second, a real-time facial tracking method is employed to track the facial features of a performer in the video. Finally, the tracked facial feature points are used to estimate muscle actuation parameters to drive the face model. Experimental results show that our system runs in real time and outputs realistic facial animations. Compared with most existing performance-based facial animation systems, ours does not require facial markers, intrusive lighting, or special scanning equipment, thus it is inexpensive and easy to use.

12.
3D facial expression animation is a research direction with great application prospects. Building on existing expression capture and animation synthesis techniques, a facial expression animation system based on the Microsoft Kinect device is proposed. The system first uses the Kinect to capture the face and extract the relevant expression parameters, while a corresponding 3D face model is built with the Autodesk Maya animation software; the model is then imported into the OGRE animation engine, the extracted expression parameters are passed to OGRE, and real-time facial expression animation is rendered. Experimental results show that the approach is feasible and that the real-time expression animation reaches a practical level; compared with other existing expression animation techniques, the system uses the widely available Kinect device, making it more affordable and easier to operate.

13.
A real-time speech-driven synthetic talking face provides an effective multimodal communication interface in distributed collaboration environments. Nonverbal gestures such as facial expressions are important to human communication and should be considered by speech-driven face animation systems. In this paper, we present a framework that systematically addresses facial deformation modeling, automatic facial motion analysis, and real-time speech-driven face animation with expression using neural networks. Based on this framework, we learn a quantitative visual representation of the facial deformations, called the motion units (MUs). A facial deformation can be approximated by a linear combination of the MUs weighted by MU parameters (MUPs). We develop an MU-based facial motion tracking algorithm which is used to collect an audio-visual training database. Then, we construct a real-time audio-to-MUP mapping by training a set of neural networks using the collected audio-visual training database. The quantitative evaluation of the mapping shows the effectiveness of the proposed approach. Using the proposed method, we develop the functionality of real-time speech-driven face animation with expressions for the iFACE system. Experimental results show that the synthetic expressive talking face of the iFACE system is comparable with a real face in terms of the effectiveness of their influences on bimodal human emotion perception.
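The core of such a system is a learned regression from audio features to motion-unit parameters. The paper trains sets of neural networks on an audio-visual database; purely as an illustration of that kind of mapping, here is a deliberately tiny one-hidden-layer regressor trained with plain gradient descent (numpy only; a real system would add context windows, normalization, and a proper training framework):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_audio_to_mup(X, Y, hidden=64, lr=1e-2, epochs=500):
    """X: (n, d_audio) audio features; Y: (n, d_mup) MU parameters."""
    W1 = rng.normal(0, 0.1, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, Y.shape[1])); b2 = np.zeros(Y.shape[1])
    n = len(X)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)           # forward pass
        P = H @ W2 + b2
        G = 2.0 * (P - Y) / n              # d(MSE)/dP
        gW2, gb2 = H.T @ G, G.sum(axis=0)
        GH = (G @ W2.T) * (1.0 - H ** 2)   # backprop through tanh
        gW1, gb1 = X.T @ GH, GH.sum(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda x: np.tanh(x @ W1 + b1) @ W2 + b2
```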

14.
Animating a complex human face model in real-time is not a trivial task in intelligent multimedia systems for next generation environments. This paper proposes a generation scheme of a simplified model for real-time human face animation in intelligent multimedia systems. Previous work mainly focused on the geometric features when generating a simplified human face model. Such methods may lose the critical feature points for animating human faces. The proposed method can find those important feature points and can generate feature-preserving low-level models by using our new quadrics. The new quadrics consist of basic error metrics and feature edge quadrics. The quality of facial animation with a lower-level model is as good as that of a computationally expensive original model. In this paper, we prove that our decimated facial model is effective in facial animation using a well-known expression-retargeting technique.
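The "basic error metrics plus feature edge quadrics" combination builds on standard quadric error metrics: each face contributes a plane quadric K = p pᵀ to its vertices, feature edges contribute an extra penalty plane, and the cost of moving a vertex to v is vᵀQv. A minimal sketch of assembling and evaluating these quadrics; the feature-weighting scheme is an assumption:

```python
import numpy as np

def face_quadric(v0, v1, v2):
    """Fundamental quadric K = p p^T for the plane through a triangle."""
    n = np.cross(v1 - v0, v2 - v0)
    n /= np.linalg.norm(n)
    p = np.append(n, -n @ v0)     # plane [a, b, c, d] with ax + by + cz + d = 0
    return np.outer(p, p)

def vertex_error(Q, v):
    """Quadric error of placing a vertex at position v."""
    vh = np.append(v, 1.0)
    return vh @ Q @ vh

# Accumulation: every vertex sums the quadrics of its incident faces;
# vertices on detected feature edges additionally sum a quadric for a
# plane through the edge, scaled by a large weight to resist collapse.
```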

15.
We describe a system to synthesize facial expressions by editing captured performances. For this purpose, we use the actuation of expression muscles to control facial expressions. We note that there have been numerous algorithms already developed for editing gross body motion. While the joint angle has direct effect on the configuration of the gross body, the muscle actuation has to go through a complicated mechanism to produce facial expressions. Therefore, we devote a significant part of this paper to establishing the relationship between muscle actuation and facial surface deformation. We model the skin surface using the finite element method to simulate the deformation caused by expression muscles. Then, we implement the inverse relationship, muscle actuation parameter estimation, to find the muscle actuation values from the trajectories of the markers on the performer's face. Once the forward and inverse relationships are established, retargeting or editing a performance becomes an easy job. We apply the original performance data to different facial models with equivalent muscle structures, to produce similar expressions. We also produce novel expressions by deforming the original data curves of muscle actuation to satisfy the key-frame constraints imposed by animators.

16.
To synthesize real-time and realistic facial animation, we present an effective algorithm which combines image- and geometry-based methods for facial animation simulation. Considering the numerous motion units in the expression coding system, we present a novel simplified motion unit based on the basic facial expressions, and construct the corresponding basic action for a head model. As image features are difficult to obtain using the performance-driven method, we develop an automatic image feature recognition method based on statistical learning, and a semi-automatic expression image labeling method with rotation-invariant face detection, which improve the accuracy and efficiency of expression feature identification and training. After facial animation retargeting, each basic action weight is computed and mapped automatically. We apply the blend shape method to construct and train the corresponding expression database according to each basic action, and adopt the least squares method to compute the corresponding control parameters for facial animation. Moreover, the diffuse and specular light distributions are pre-integrated using a physically based method, to improve the plausibility and efficiency of facial rendering. Our work provides a simplification of the facial motion unit, an optimization of the statistical training and recognition processes for facial animation, solves for the expression parameters, and simulates the subsurface scattering effect in real time. Experimental results indicate that our method is effective and efficient, and suitable for computer animation and interactive applications.
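Pre-integration here amounts to baking a shading response into a lookup table offline so the runtime shader does a single fetch. A minimal 1D sketch for the diffuse term only, pre-convolving clamped Lambert with an angular Gaussian whose width stands in for subsurface softening (the paper's physically based formulation is more involved than this):

```python
import numpy as np

def preintegrate_diffuse(n_bins=256, blur_deg=15.0):
    """LUT over cos(theta) of Lambert diffuse blurred over nearby angles."""
    sigma = np.radians(blur_deg)
    phis = np.linspace(-3 * sigma, 3 * sigma, 257)
    g = np.exp(-phis ** 2 / (2 * sigma ** 2))
    g /= g.sum()                                  # discrete Gaussian weights
    thetas = np.arccos(np.linspace(-1.0, 1.0, n_bins))
    # LUT[i] = sum_j g_j * max(0, cos(theta_i + phi_j))
    lut = (g[None, :] * np.clip(np.cos(thetas[:, None] + phis[None, :]),
                                0.0, None)).sum(axis=1)
    # At runtime, index by (dot(N, L) * 0.5 + 0.5) * (n_bins - 1).
    return lut
```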

17.
Oftentimes facial animation is created separately from overall body motion. Since convincing facial animation is challenging enough in itself, artists tend to create and edit the face motion in isolation. Or if the face animation is derived from motion capture, this is typically performed in a mo-cap booth while sitting relatively still. In either case, recombining the isolated face animation with body and head motion is non-trivial and often results in an uncanny result if the body dynamics are not properly reflected on the face (e.g. the bouncing of facial tissue when running). We tackle this problem by introducing a simple and intuitive system that allows artists to add physics to facial blendshape animation. Unlike previous methods that try to add physics to face rigs, our method preserves the original facial animation as closely as possible. To this end, we present a novel simulation framework that uses the original animation as per-frame rest-poses without adding spurious forces. As a result, in the absence of any external forces or rigid head motion, the facial performance will exactly match the artist-created blendshape animation. In addition we propose the concept of blendmaterials to give artists an intuitive means to account for changing material properties due to muscle activation. This system automatically combines facial animation and head motion so that they are consistent, while preserving the original animation as closely as possible. The system is easy to use and readily integrates with existing animation pipelines.
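The key idea, using each animation frame as that frame's rest pose so the simulation contributes only secondary motion, can be illustrated with a toy per-vertex spring: without external forces the vertices closely follow the artist's animation (the paper's formulation matches it exactly), and under head acceleration they lag and oscillate. A minimal explicit-integration sketch; the paper's solver and blendmaterials are beyond this:

```python
import numpy as np

def simulate_secondary(anim, k=300.0, c=8.0, dt=1.0 / 60.0, ext=None):
    """anim: (T, N, 3) blendshape animation used as per-frame rest poses.

    ext: optional (T, N, 3) external accelerations (e.g. from head motion).
    """
    x = anim[0].copy()
    v = np.zeros_like(x)
    out = [x.copy()]
    for t in range(1, len(anim)):
        rest = anim[t]                      # this frame's rest pose
        a = k * (rest - x) - c * v          # damped spring toward the animation
        if ext is not None:
            a += ext[t]
        v += a * dt
        x = x + v * dt
        out.append(x.copy())
    return np.stack(out)
```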

18.
1. Introduction. Face modeling and animation is one of the most challenging topics in computer graphics. This is because, first, the geometry of the human face is extremely complex: its surface not only carries countless fine wrinkles but also exhibits subtle variations in color and texture, so building an accurate face model and generating realistic faces is very difficult. Second, facial motion is the combined result of bone, muscle, subcutaneous tissue, and skin, and its mechanism is very complex, so generating realistic facial animation is very difficult. In addition, we humans are born with an ability to recognize and

19.
Three-dimensional (3D) cartoon facial animation goes one step beyond the already challenging 3D caricaturing, which generates only still 3D caricatures. In this paper, a 3D cartoon facial animation system is developed for a subject given only a single frontal face image of a neutral expression. The system is composed of three steps: 3D cartoon face exaggeration, texture processing, and 3D cartoon facial animation. By following caricaturing rules of artists, instead of mathematical formulations, 3D cartoon face exaggeration is accomplished at both global and local levels. As a result, the final exaggeration is capable of depicting the characteristics of an input face while achieving artistic deformations. In the texture processing step, texture coordinates of the vertices of the cartoon face model are obtained by mapping the parameterized grid of the standard face model to a cartoon face template and aligning the input face to the face template. Finally, 3D cartoon facial animation is implemented in the MPEG-4 animation framework. In order to avoid time-consuming construction of a face animation table, we propose to utilize the tables of existing models through model mapping. Experimental results demonstrate the effectiveness and efficiency of our proposed system.
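The usual computational reading of caricature, which the global level of this exaggeration step resembles, is to amplify a face's deviation from a mean face, with larger gains in selected local regions. A minimal sketch under that reading; the paper itself follows artists' caricaturing rules rather than this formula, and the region gains below are illustrative:

```python
import numpy as np

def exaggerate(face, mean_face, global_gain=1.5, region_gain=None):
    """face, mean_face: (N, 3); region_gain: optional (N,) per-vertex gains."""
    delta = face - mean_face                  # deviation from the mean face
    gain = np.full(len(face), global_gain)
    if region_gain is not None:
        gain *= region_gain                   # e.g. boost nose/chin regions
    return mean_face + gain[:, None] * delta
```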

20.
Based on the physiological structure of the human face, a partitioned modeling technique is used to build the face, and a parameterized muscle model based on "face shape features" and "facial organ features" is proposed. Different facial expressions are formed by controlling the attribute values of the model's control vertices, and a real-time interactive interface is implemented in the MAYA MEL language. This makes the study of facial expressions more convenient and flexible, and makes the modeling and its animation more lifelike and practical.
