Similar Documents
20 similar documents found (search time: 312 ms)
1.
1. Introduction. Face modeling and animation is one of the most challenging topics in computer graphics. There are several reasons for this. First, the geometry of the human face is highly complex: its surface carries countless fine wrinkles as well as subtle variations in color and texture, so building an accurate face model and rendering a realistic face is very difficult. Second, facial motion results from the combined action of bone, muscle, subcutaneous tissue, and skin, and its mechanics are highly complex, so generating realistic facial animation is also very difficult. In addition, we humans are born with an ability to recognize and …

2.
The object-based coding format introduced by MPEG-4 treats the human face as a special object and lays a foundation for research on face modeling and animation. Through an analysis of the MPEG-4 facial animation standard, this paper presents design ideas for an MPEG-4-based facial animation system and the key problems that need to be solved.
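As a rough illustration of the mechanism such a system builds on (not the paper's own design): each MPEG-4 FAP displaces one facial feature point along one axis, scaled by a face-specific FAPU so that the same parameter stream animates differently proportioned faces. The FAP table entries and FAPU values below are an illustrative subset, not the full standard.

```python
import numpy as np

# Hypothetical FAPU values measured on one face model (in model units).
FAPU = {"MNS": 0.018, "MW": 0.052}

# Hypothetical table: FAP id -> (feature-point index, axis, FAPU name, sign).
FAP_TABLE = {
    3: (14, 1, "MNS", -1.0),  # open_jaw: move the chin point downward
    6: (22, 0, "MW", -1.0),   # stretch_l_cornerlip: move the lip corner outward
}

def apply_faps(neutral_points: np.ndarray, faps: dict) -> np.ndarray:
    """Displace neutral feature points by FAP amplitudes given in FAPU units."""
    pts = neutral_points.copy()
    for fap_id, amplitude in faps.items():
        idx, axis, unit, sign = FAP_TABLE[fap_id]
        pts[idx, axis] += sign * amplitude * FAPU[unit]
    return pts
```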

3.
In this paper, we present a system for real-time performance-driven facial animation. With the system, the user can control the facial expression of a digital character by acting out the desired facial action in front of an ordinary camera. First, we create a muscle-based 3D face model. The muscle actuation parameters are used to animate the face model. To increase the realism of the facial animation, the orbicularis oris in our face model is divided into an inner part and an outer part. We also establish the relationship between jaw rotation and facial surface deformation. Second, a real-time facial tracking method is employed to track the facial features of a performer in the video. Finally, the tracked facial feature points are used to estimate muscle actuation parameters that drive the face model. Experimental results show that our system runs in real time and outputs realistic facial animations. Compared with most existing performance-based facial animation systems, ours does not require facial markers, intrusive lighting, or special scanning equipment, so it is inexpensive and easy to use.
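A minimal sketch of the final step, under the simplifying assumption that feature-point displacements are approximately linear in the muscle activations near the rest pose (the abstract does not specify the actual estimation procedure). The Jacobian J, describing how each muscle moves each tracked point, is assumed precomputed from the muscle model; all names are illustrative.

```python
import numpy as np
from scipy.optimize import lsq_linear

def estimate_activations(J: np.ndarray, displacements: np.ndarray) -> np.ndarray:
    """J: (3*n_points, n_muscles); displacements: (3*n_points,) tracked offsets."""
    # Muscle activations are constrained to [0, 1] (contraction only).
    result = lsq_linear(J, displacements, bounds=(0.0, 1.0))
    return result.x
```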

4.
In human-computer interaction, digital entertainment, and similar fields, traditional expression synthesis techniques struggle to reliably generate realistic, personalized facial expression animation. This paper therefore presents a system for 3D face modeling and expression animation from a single image. The system automatically detects the face in the image and locates facial landmarks; a personalized 3D face model is reconstructed from these landmarks and a morphable model; the reconstructed model is extended into a complete head mesh; and an animation-data mapping method controlled by sparse landmarks drives the reconstructed model to produce dynamic expression animation. Experimental results show that the method is robust and highly automated, and that the reconstructed face models and expression animations are quite lifelike.
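A minimal sketch of the landmark-driven reconstruction step, assuming a 3DMM-style linear shape basis and a scaled-orthographic camera; the regularized least-squares fit below is a common formulation, not necessarily the paper's.

```python
import numpy as np

def fit_morphable_model(mean, basis, lm_idx, lm2d, P, reg=1e-3):
    """mean: (3n,) mean shape; basis: (3n,k) shape basis; lm_idx: landmark
    vertex ids; lm2d: (m,2) detected landmarks; P: (2,3) projection."""
    n = mean.size // 3
    M = mean.reshape(n, 3)[lm_idx]                    # (m,3) landmark vertices
    B = basis.reshape(n, 3, -1)[lm_idx]               # (m,3,k)
    A = np.einsum('ij,mjk->mik', P, B).reshape(-1, B.shape[-1])  # (2m,k)
    b = (lm2d - M @ P.T).ravel()                      # 2D residual
    # Ridge-regularized normal equations keep the shape plausible.
    coeffs = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ b)
    return mean + basis @ coeffs                      # personalized shape
```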

5.
罗常伟, 於俊, 汪增福. 《自动化学报》 (Acta Automatica Sinica), 2014, 40(10): 2245-2252
This paper describes a real-time video-driven facial animation synthesis system. With the system, the user can control the expressions of a virtual face simply by performing facial actions in front of a camera. First, a muscle-based 3D face model is built, and muscle actuation parameters are used to control facial deformation. To improve the realism of the animation, the orbicularis oris is divided into an outer ring and an inner ring, and a relationship between facial deformation and jaw rotation is established. Next, a real-time feature-tracking algorithm tracks the facial feature points in the video. Finally, the tracking results are converted into muscle actuation parameters that drive the facial animation. Experimental results show that the system runs in real time and that the synthesized animation is convincingly realistic. Compared with most existing video-driven facial animation methods, the system requires neither facial markers nor 3D scanning equipment, which makes it much easier to use.

6.
Research on Physically Based Facial Expression Animation Techniques (cited 4 times: 0 self-citations, 4 by others)
Creating facial expression animation with a computer is a challenging topic in computer graphics research. Building on a survey of existing approaches at home and abroad, this paper proposes a physically based algorithm for generating facial expression animation, and an actual facial expression animation system, HUFACE, was designed and implemented on top of it. The algorithm models the face as an elastic body; to simplify computation, the facial surface is further divided into eight sub-regions according to its physiological characteristics. The movements of the facial features produced by an expression are modeled as deformations of the elastic body, and a corresponding elastic deformation model is built. When an expression deforms the facial sub-regions, every point in a sub-region is displaced; the model computes these displacements, from which each frame of the expression animation is obtained. Because facial motion is controlled by this deformation model and the computation is simple and fast, the individual frames of the animation need not be stored, which improves the system's efficiency. Experimental results show that the facial expressions generated by the HUFACE system are realistic and natural.
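A minimal sketch of the elastic-body idea (not the HUFACE model itself): points in one facial sub-region are tied to their mesh neighbors and to their rest positions by linear springs, pinned expression points are displaced, and the remaining displacements are found by simple relaxation.

```python
import numpy as np

def relax_region(rest, pinned, pinned_disp, neighbors,
                 k_edge=1.0, k_rest=0.2, iters=200):
    """rest: (n,3) rest positions; pinned: (n,) bool mask; pinned_disp: (n,3)
    prescribed displacements; neighbors: list of neighbor-index lists."""
    u = np.where(pinned[:, None], pinned_disp, 0.0)
    for _ in range(iters):
        for i in range(len(rest)):
            if pinned[i]:
                continue
            nbrs = neighbors[i]
            # Spring equilibrium: neighbors pull toward their displacement
            # average, the rest-position spring pulls toward zero.
            u[i] = k_edge * u[nbrs].sum(axis=0) / (k_edge * len(nbrs) + k_rest)
    return rest + u
```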

7.
Human body animation is a hot and difficult research topic in computer animation. Producing realistic body animation requires not only realistic body motion and flexible motion control, but also convincing body modeling and skin deformation. To give researchers in computer animation a comprehensive picture of current body modeling and skin deformation techniques, this paper surveys realistic human body modeling and skin deformation in computer animation, grouping existing methods into three categories: surface-model-based, volume-model-based, and layered-model-based methods, and analyzing and comparing their strengths and weaknesses. Building on this review, the paper argues that advances in 3D scanning present new opportunities for body modeling and skin deformation research: an important direction for future work is how to exploit the advantages of scan-based modeling while combining them with the flexibility of layered modeling and deformation, so as to create highly realistic human skin models and deformation effects.

8.
Modeling and Animating Realistic Faces from Images (cited 4 times: 0 self-citations, 4 by others)
We present a new set of techniques for modeling and animating realistic faces from photographs and videos. Given a set of face photographs taken simultaneously, our modeling technique allows the interactive recovery of a textured 3D face model. By repeating this process for several facial expressions, we acquire a set of face models that can be linearly combined to express a wide range of expressions. Given a video sequence, this linear face model can be used to estimate the face position, orientation, and facial expression at each frame. We illustrate these techniques on several datasets and demonstrate robust estimations of detailed face geometry and motion.
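A minimal sketch of the linear face model described here: captured expression meshes are combined linearly, and per-frame expression weights can be recovered from observed geometry by least squares (pose estimation is omitted). Variable names are illustrative.

```python
import numpy as np

def blend(neutral, deltas, weights):
    """neutral: (n,3); deltas: (k,n,3) expression offsets; weights: (k,)."""
    return neutral + np.tensordot(weights, deltas, axes=1)

def fit_weights(neutral, deltas, observed):
    """Least-squares expression weights for one observed frame."""
    A = deltas.reshape(len(deltas), -1).T          # (3n, k)
    b = (observed - neutral).ravel()               # (3n,)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w
```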

9.
This paper surveys 3D face modeling and animation algorithms from the perspective of their historical development and the current state of research. By analyzing the basic principles, strengths, and weaknesses of typical algorithms for problems such as 3D capture, parametric face modeling and animation, and expression cloning, it characterizes and compares these methods, and it discusses open problems and directions worth further study. The aim is to give readers a fairly complete picture of the mainstream techniques in 3D face modeling and animation and to inform future research.

10.
This paper presents a novel approach for the generation of realistic speech-synchronized 3D facial animation that copes with anticipatory and perseveratory coarticulation. The methodology is based on the measurement of 3D trajectories of fiduciary points marked on the face of a real speaker during the speech production of CVCV nonsense words. The trajectories are measured from standard video sequences using stereo vision photogrammetric techniques. The first stationary point of each trajectory associated with a phonetic segment is selected as its articulatory target. By clustering according to geometric similarity all articulatory targets of a same segment in different phonetic contexts, a set of phonetic context-dependent visemes accounting for coarticulation is identified. These visemes are then used to drive a set of geometric transformation/deformation models that reproduce the rotation and translation of the temporomandibular joint on the 3D virtual face, as well as the behavior of the lips, such as protrusion, and opening width and height of the natural articulation. This approach is being used to generate 3D speech-synchronized animation from both natural and synthetic speech generated by a text-to-speech synthesizer.
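A minimal sketch of the viseme-identification step, with k-means standing in for whatever clustering-by-geometric-similarity the authors actually used.

```python
import numpy as np
from sklearn.cluster import KMeans

def visemes_for_segment(targets: np.ndarray, n_visemes: int):
    """targets: (n_contexts, n_markers*3) articulatory targets of one phonetic
    segment gathered across contexts. Each cluster centroid is one
    context-dependent viseme; labels map contexts to visemes."""
    km = KMeans(n_clusters=n_visemes, n_init=10).fit(targets)
    return km.cluster_centers_, km.labels_
```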

11.
We present techniques for improving performance driven facial animation, emotion recognition, and facial key-point or landmark prediction using learned identity invariant representations. Established approaches to these problems can work well if sufficient examples and labels for a particular identity are available and factors of variation are highly controlled. However, labeled examples of facial expressions, emotions and key-points for new individuals are difficult and costly to obtain. In this paper we improve the ability of techniques to generalize to new and unseen individuals by explicitly modeling previously seen variations related to identity and expression. We use a weakly-supervised approach in which identity labels are used to learn the different factors of variation linked to identity separately from factors related to expression. We show how probabilistic modeling of these sources of variation allows one to learn identity-invariant representations for expressions which can then be used to identity-normalize various procedures for facial expression analysis and animation control. We also show how to extend the widely used techniques of active appearance models and constrained local models through replacing the underlying point distribution models which are typically constructed using principal component analysis with identity–expression factorized representations. We present a wide variety of experiments in which we consistently improve performance on emotion recognition, markerless performance-driven facial animation and facial key-point tracking.
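A crude sketch of the weakly supervised idea, using per-identity means as the identity factor and residuals as the expression factor; the paper's probabilistic factorization is considerably richer.

```python
import numpy as np

def identity_normalize(features: np.ndarray, identities: np.ndarray) -> np.ndarray:
    """features: (n,d) expression features; identities: (n,) identity labels.
    Subtracting each person's mean removes identity variation, leaving an
    identity-normalized representation of expression."""
    out = np.empty_like(features)
    for ident in np.unique(identities):
        mask = identities == ident
        out[mask] = features[mask] - features[mask].mean(axis=0)
    return out
```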

12.
王振. 《电脑与信息技术》 (Computer and Information Technology), 2010, 18(5): 11-12, 37
Rendering wrinkles is one of the key factors in making facial animation look realistic. This paper presents a keyframe-based wrinkle animation method in which each keyframe is described by a height map, a normal map, and MPEG-4 facial animation parameters; in-between frames of the wrinkle animation are generated by interpolating the height and normal maps. The method places only modest demands on the complexity of the face mesh, and the synthesized wrinkle animation is both highly realistic and fast enough for real-time use.
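A minimal sketch of the in-betweening step: the height and normal maps of two wrinkle keyframes are blended linearly, and the blended normals are renormalized so they stay unit length for shading.

```python
import numpy as np

def blend_wrinkle_maps(h0, h1, n0, n1, t):
    """h0, h1: (H,W) height maps; n0, n1: (H,W,3) normal maps; t in [0,1]."""
    height = (1.0 - t) * h0 + t * h1
    normal = (1.0 - t) * n0 + t * n1
    normal /= np.linalg.norm(normal, axis=-1, keepdims=True)  # renormalize
    return height, normal
```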

13.
Expressive facial animations are essential to enhance the realism and the credibility of virtual characters. Parameter-based animation methods offer precise control over facial configurations, while performance-based animation benefits from the naturalness of captured human motion. In this paper, we propose an animation system that gathers the advantages of both approaches. By analyzing a database of facial motion, we create the human appearance space. The appearance space provides a coherent and continuous parameterization of human facial movements, while encapsulating the coherence of real facial deformations. We present a method to optimally construct an analogous appearance face for a synthetic character. The link between both appearance spaces makes it possible to retarget facial animation on a synthetic face from a video source. Moreover, the topological characteristics of the appearance space allow us to detect the principal variation patterns of a face and automatically reorganize them on a low-dimensional control space. The control space acts as an interactive user interface to manipulate the facial expressions of any synthetic face. This interface makes it simple and intuitive to generate still facial configurations for keyframe animation, as well as complete temporal sequences of facial movements. The resulting animations combine the flexibility of a parameter-based system and the realism of real human motion. Copyright © 2010 John Wiley & Sons, Ltd.
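A minimal sketch of one way to obtain a low-dimensional control space from a facial-motion database; plain PCA stands in for the appearance-space construction, which the paper builds more carefully. Each retained component then behaves as a continuous slider over plausible facial configurations.

```python
import numpy as np
from sklearn.decomposition import PCA

def build_control_space(frames: np.ndarray, n_controls: int = 5):
    """frames: (n_frames, n_vertices*3) flattened face meshes from capture."""
    pca = PCA(n_components=n_controls).fit(frames)

    def decode(controls):
        # Map a slider setting back to a full facial configuration.
        return pca.inverse_transform(np.asarray(controls)[None, :])[0]

    return pca, decode
```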

14.
3D facial expression animation is a research direction of great application potential and research interest. Building on a study of existing expression-capture and animation-synthesis techniques, this paper presents a facial expression animation system based on the Microsoft Kinect device. The system uses the Kinect to capture the face and extract expression parameters, builds a corresponding 3D face model in the Autodesk Maya animation software, imports the model into the OGRE animation engine, passes the extracted expression parameters to OGRE, and renders real-time facial expression animation. Experimental results show that the approach is feasible and that the real-time expression animation reaches a practically usable level; compared with other existing expression animation techniques, the system relies on the commodity Kinect device, making it more affordable and easier to operate.
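A minimal sketch of the capture-to-renderer hand-off such a pipeline needs, assuming the capture side exposes per-frame expression parameters and the renderer listens on a local socket; the Kinect and OGRE APIs are not shown, and every name here is hypothetical.

```python
import json
import socket

RENDERER_ADDR = ("127.0.0.1", 9300)  # hypothetical local rendering endpoint

def send_expression(sock: socket.socket, frame_id: int, params: dict) -> None:
    """Stream one frame of expression parameters to the rendering process."""
    msg = json.dumps({"frame": frame_id, "params": params}).encode("utf-8")
    sock.sendto(msg, RENDERER_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_expression(sock, 0, {"jaw_open": 0.4, "smile_l": 0.7, "smile_r": 0.6})
```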

15.
Realistic 3D Facial Animation (cited 10 times: 0 self-citations, 10 by others)
张青山, 陈国良. 《软件学报》 (Journal of Software), 2003, 14(3): 643-650
Constructing and animating realistic 3D face models is an important research topic in computer graphics, and simulating facial motion on a 3D face model in real time to produce realistic expressions and actions is one of its difficult problems. This paper presents a real-time 3D facial animation method. The method divides the face model into several functional regions whose motions are relatively independent, and then simulates the motion of each region with a proposed hybrid technique combining weighted Dirichlet free-form deformation (DFFD) and rigid-body motion simulation, while control points shared across regions model the mutual influence between neighboring regions. In this method, the motion of the face model is driven by moving control points. To simplify driving the model, two high-level driving methods are proposed, one based on MPEG-4 facial animation parameter (FAP) streams and one based on a muscle model. Both methods achieve strong realism with good computational performance and can simulate the expressions and actions of a real face in real time.
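A minimal sketch of control-point-driven deformation in the spirit of weighted DFFD, assuming each vertex's weights over its region's control points (natural-neighbor/Sibson coordinates in DFFD) are precomputed. Control points shared by two regions simply appear in both regions' weight matrices, which is how motion propagates between regions.

```python
import numpy as np

def deform(vertices: np.ndarray, weights: np.ndarray,
           control_disp: np.ndarray) -> np.ndarray:
    """vertices: (n,3) rest positions; weights: (n,m) precomputed per-vertex
    weights over m control points; control_disp: (m,3) displacements applied
    to the control points by the FAP stream or muscle model."""
    return vertices + weights @ control_disp
```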

16.
This paper describes a technique for the automatic adaptation of a canonical facial model to data obtained by a 3D laser scanner. The facial model is a B-spline surface with 13×16 control points. We introduce a technique by which this canonical model is fit to the scanned data and that takes into consideration the requirements for the animation of facial expressions. The animation of facial expressions is based on the facial action coding system (FACS). Using B-splines in combination with FACS, we automatically create the impression of a moving skin. To increase the realism of the animation we map textural information onto the B-spline surface.
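A minimal sketch of fitting a smooth spline surface to range data, assuming the scan has been resampled onto a regular (u, v) grid such as a cylindrical head parameterization; SciPy's tensor-product spline stands in for the paper's 13×16-control-point canonical model and its animation-aware fitting.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def fit_face_surface(u: np.ndarray, v: np.ndarray,
                     depth: np.ndarray, smoothing: float = 1.0):
    """u: (nu,), v: (nv,) strictly increasing grid parameters;
    depth: (nu,nv) scanned radii/depths. Returns a bicubic spline surface."""
    return RectBivariateSpline(u, v, depth, kx=3, ky=3, s=smoothing)

# Usage: surf = fit_face_surface(u, v, depth); z = surf(u_fine, v_fine)
```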

17.
18.
We propose a coupled hidden Markov model (CHMM) approach to video-realistic speech animation, which realizes realistic facial animations driven by speaker independent continuous speech. Different from hidden Markov model (HMM)-based animation approaches that use a single-state chain, we use CHMMs to explicitly model the subtle characteristics of audio-visual speech, e.g., the asynchrony, temporal dependency (synchrony), and different speech classes between the two modalities. We derive an expectation maximization (EM)-based A/V conversion algorithm for the CHMMs, which converts acoustic speech into decent facial animation parameters. We also present a video-realistic speech animation system. The system transforms the facial animation parameters to a mouth animation sequence, refines the animation with a performance refinement process, and finally stitches the animated mouth with a background facial sequence seamlessly. We have compared the animation performance of the CHMM with the HMMs, the multi-stream HMMs and the factorial HMMs both objectively and subjectively. Results show that the CHMMs achieve superior animation performance. The ph-vi-CHMM system, which adopts different state variables (phoneme states and viseme states) in the audio and visual modalities, performs the best. The proposed approach indicates that explicitly modelling audio-visual speech is promising for speech animation.
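A minimal sketch of one standard way to decode a two-chain coupled HMM: fold the audio and visual chains into a single HMM over the product state space, whose transition factorizes as P(a',v'|a,v) = P_a(a'|a,v) * P_v(v'|a,v), then run ordinary Viterbi. Parameters are assumed already trained; this is not the paper's EM-based A/V conversion itself.

```python
import numpy as np

def chmm_viterbi(log_emit, log_Ta, log_Tv, log_prior):
    """log_emit: (T,A,V) joint emission log-scores; log_Ta: (A,V,A) audio-chain
    transitions P_a(a'|a,v); log_Tv: (A,V,V) visual-chain transitions
    P_v(v'|a,v); log_prior: (A,V). Returns (T,2) best (audio, visual) states."""
    T, A, V = log_emit.shape
    # Product-state transition matrix: (a,v) -> (a',v').
    trans = (log_Ta[:, :, :, None] + log_Tv[:, :, None, :]).reshape(A * V, A * V)
    delta = (log_prior + log_emit[0]).ravel()
    back = np.zeros((T, A * V), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + trans
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t].ravel()
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    states = np.array(path[::-1])
    return np.stack(np.unravel_index(states, (A, V)), axis=1)
```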

19.
Based on the physiological structure of the human face, the face is built with a block-wise modeling technique, and a parameterized muscle model driven by "face-shape features" and "facial-feature (eyes, brows, nose, mouth) features" is proposed. Different facial expressions are produced by adjusting the attribute values of the model's control vertices, and a real-time interactive interface is implemented in the MAYA MEL language. This makes the study of facial expressions more convenient and flexible, and makes the modeling and its animation more realistic and practical.

20.
This paper describes a new method for generating facial animation in which facial expression and shape can be changed simultaneously in real time. A 2D parameter space independent of facial shape is defined, on which facial expressions are superimposed so that the expressions can be applied to various facial shapes. A facial model is transformed by a bilinear interpolation, which enables a rapid change in facial expression with metamorphosis. The practical efficiency of this method has been demonstrated by a real-time animation system based on this method in live theater.
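A minimal sketch of the bilinear step: four face meshes sit at the corners of a cell in the 2D parameter space, and the face for any interior (s, t) is their bilinear blend, so expression and shape can change simultaneously by moving (s, t) and swapping corner meshes.

```python
import numpy as np

def bilinear_face(c00, c10, c01, c11, s, t):
    """c00..c11: (n,3) corner meshes of one parameter-space cell;
    s, t in [0,1]: position within the cell."""
    bottom = (1 - s) * c00 + s * c10
    top = (1 - s) * c01 + s * c11
    return (1 - t) * bottom + t * top
```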
