Similar Literature
20 similar documents retrieved (search time: 15 ms)
1.
Face modeling and animation is a very active research topic in computer graphics and computer vision, with applications in computer games, virtual reality, video communication, film, coding, face recognition, and human-computer interaction. This paper reviews the rich body of international research results in this area in recent years.

2.
1. Introduction. Face modeling and animation is one of the most challenging topics in computer graphics. There are several reasons. First, the geometry of the human face is extremely complex: its surface carries countless fine wrinkles and exhibits subtle variations in color and texture, so building an accurate face model and generating a photorealistic face is very difficult. Second, facial motion is the combined result of bone, muscle, subcutaneous tissue, and skin, and its mechanics are highly complex, so generating realistic facial animation is very difficult. In addition, we humans are innately able to recognize and

3.
This paper discusses the development of a subset of an animation system. The intermediate goal is a system that, given video input of a human face, can produce an animated 3D face model in real time. The main topics are model manipulation, design and fitting, audio/video handling and synchronization, 3D model display and performance, and image processing and feature extraction. Image processing was timed on continuous RGB images, grabbing one frame at a time. Experiments show that locating facial features takes about 70 ms with a full search and about 5 ms with a local search. As for model display, rendering was successfully implemented with OpenInventor.

4.
In this paper we present a new paradigm for the generation and retargeting of facial animation. Like a vast majority of the approaches that have addressed these topics, our formalism is built on blendshapes. However, where prior works have generally encoded facial geometry using a low dimensional basis of these blendshapes, we propose to encode facial dynamics by looking at blendshapes as a basis of forces rather than a basis of shapes. We develop this idea into a dynamic model that naturally combines the blendshapes paradigm with physics‐based techniques for the simulation of deforming meshes. Because it escapes the linear span of the shape basis through time‐integration and physics‐inspired simulation, this approach has a wider expressive range than previous blendshape‐based methods. Its inherent physically‐based formulation also enables the simulation of more advanced physical interactions, such as collision responses on lip contacts.
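The force-basis idea above can be illustrated with a toy dynamic model. This is a sketch of the general technique, not the paper's implementation: blendshape displacement vectors act as forces, integrated with semi-implicit Euler against a restoring spring. All shapes, weights, and constants are made-up values.

```python
import numpy as np

n_verts = 4
neutral = np.zeros((n_verts, 3))                               # rest positions
# two toy "blendshape" displacement directions standing in for a real rig
basis = np.random.default_rng(0).normal(size=(2, n_verts, 3))

def step(x, v, weights, dt=0.01, stiffness=50.0, damping=2.0):
    """One semi-implicit Euler step: blendshapes act as forces,
    plus a spring pulling vertices back to the neutral pose."""
    f = np.tensordot(weights, basis, axes=1)   # force = sum_i w_i * B_i
    f += stiffness * (neutral - x)             # restoring spring
    f -= damping * v                           # velocity damping
    v = v + dt * f                             # unit mass per vertex
    x = x + dt * v
    return x, v

x, v = neutral.copy(), np.zeros_like(neutral)
for _ in range(200):
    x, v = step(x, v, weights=np.array([1.0, 0.5]))
# after time-integration, x need not lie on a fixed linear span of target
# shapes, which is the extra expressive range the force view buys
```

Because the deformation emerges from integration rather than interpolation, transient effects (overshoot, damping) come for free.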

5.
We propose a design framework to assist with user‐generated content in facial animation — without requiring any animation experience or ground truth reference. Where conventional prototyping methods rely on handcrafting by experienced animators, our approach looks to encode the role of the animator as an Evolutionary Algorithm acting on animation controls, driven by visual feedback from a user. Presented as a simple interface, users sample control combinations and select favourable results to influence later sampling. Over multiple iterations of disregarding unfavourable control values, parameters converge towards the user's ideal. We demonstrate our framework through two non‐trivial applications: creating highly nuanced expressions by evolving control values of a face rig and non‐linear motion through evolving control point positions of animation curves.
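The selection loop described above can be sketched as a plain (μ+λ) evolutionary algorithm over control values. In the paper the fitness signal is the user's visual selection; here a hypothetical target vector stands in for the user so the loop is runnable.

```python
import random

random.seed(42)
TARGET = [0.7, 0.2, 0.9]   # hypothetical "ideal" control values (toy data)

def fitness(controls):
    # stand-in for the user's preference: closeness to TARGET
    return -sum((c - t) ** 2 for c, t in zip(controls, TARGET))

def evolve(generations=60, pop_size=8, sigma=0.15):
    # random initial control combinations in [0, 1]
    pop = [[random.random() for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]        # "favourable" samples survive
        # unfavourable values are disregarded; parents spawn mutated copies
        pop = parents + [
            [min(1.0, max(0.0, c + random.gauss(0, sigma))) for c in p]
            for p in parents
        ]
    return max(pop, key=fitness)

best = evolve()
```

Replacing `fitness` with an interactive "pick your favourites" step recovers the paper's user-in-the-loop setting.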

6.
罗常伟  於俊  汪增福 《自动化学报》2014,40(10):2245-2252
This paper describes a real-time video-driven facial animation system. With this system, a user simply performs facial actions in front of a camera to control the expressions of a virtual face. First, a muscle-based 3D face model is built, and muscle activation parameters control facial deformation. To improve realism, the orbicularis oris muscle is split into an outer and an inner ring, and the relationship between facial deformation and jaw rotation is modeled. Then, a real-time feature-point tracking algorithm tracks the facial feature points in the video. Finally, the tracking results are converted into muscle activation parameters that drive the facial animation. Experimental results show that the system runs in real time and that the synthesized animation is fairly realistic. Compared with most existing video-driven facial animation methods, the system requires neither facial markers nor 3D scanning equipment, which makes it far more convenient for users.

7.
Facial animation is a time‐consuming and cumbersome task that requires years of experience and/or a complex and expensive set‐up. This becomes an issue, especially when animating the multitude of secondary characters required, e.g. in films or video‐games. We address this problem with a novel technique that relies on motion graphs to represent a landmarked database. Separate graphs are created for different facial regions, allowing a reduced memory footprint compared to the original data. The common poses are identified using a Euclidean‐based similarity metric and merged into the same node. This process traditionally requires a manually chosen threshold, however, we simplify it by optimizing for the desired graph compression. Motion synthesis occurs by traversing the graph using Dijkstra's algorithm, and coherent noise is introduced by swapping some path nodes with their neighbours. Expression labels, extracted from the database, provide the control mechanism for animation. We present a way of creating facial animation with reduced input that automatically controls timing and pose detail. Our technique easily fits within video‐game and crowd animation contexts, allowing the characters to be more expressive with less effort. Furthermore, it provides a starting point for content creators aiming to bring more life into their characters.
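A minimal sketch of the motion-graph pipeline described above, with made-up 2D "poses" standing in for facial landmark frames: near-duplicate poses are merged into one node by a Euclidean threshold (hand-picked here, whereas the paper optimizes it for a target graph compression), and synthesis traverses the graph with Dijkstra's algorithm.

```python
import heapq
import math

# toy landmark frames in playback order
poses = [(0.0, 0.0), (1.0, 0.0), (1.02, 0.01), (2.0, 1.0), (0.1, 0.05)]

THRESH = 0.2                       # hand-picked similarity threshold
nodes, node_of = [], {}
for i, p in enumerate(poses):      # merge similar poses into one node
    for j, q in enumerate(nodes):
        if math.dist(p, q) < THRESH:
            node_of[i] = j
            break
    else:
        node_of[i] = len(nodes)
        nodes.append(p)

edges = {}                         # edges follow original frame order
for i in range(len(poses) - 1):
    u, v = node_of[i], node_of[i + 1]
    if u != v:
        edges.setdefault(u, {})[v] = math.dist(nodes[u], nodes[v])

def dijkstra(src, dst):
    """Shortest pose-to-pose path through the graph (motion synthesis)."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, u, path = heapq.heappop(pq)
        if u == dst:
            return path
        if u in seen:
            continue
        seen.add(u)
        for v, w in edges.get(u, {}).items():
            heapq.heappush(pq, (cost + w, v, path + [v]))
    return None

path = dijkstra(node_of[0], node_of[3])  # synthesize motion between two poses
```

On this toy data the five frames collapse to three nodes, which is exactly the memory saving the merging step is after.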

8.
Adaptive speech-driven facial animation    Cited by: 3 (self-citations: 0, other citations: 3)
This paper presents a speech-driven facial animation system based on a feature polygon mesh model and a muscle model. First, a polygonal face mesh is built from an ASE file exported from 3D MAX, and feature points for the eyes, nose, eyebrows, forehead, cheeks, and upper and lower jaw are defined on the quadrilateral mesh. Based on these feature points, the face mesh is organized as a linear elastic model and texture-mapped; finally, speech-driven facial animation is realized on top of the constructed muscle model.

9.
Face detection and facial feature extraction based on the generalized symmetry transform    Cited by: 4 (self-citations: 0, other citations: 4)
杜平  张燕昆  刘重庆 《计算机仿真》2003,20(2):117-119,64
This paper proposes a method for detecting faces in color images with complex backgrounds, based on the generalized symmetry transform. It first exploits the stability of human skin color in chromaticity space to detect skin regions in the image and screens them with prior knowledge to select candidate face regions. Then, relying on the strong symmetry of the two eyes in face images, it applies the generalized symmetry transform to find the highly symmetric points, which correspond to the eye positions. Finally, the candidates are verified by template matching against a generic eye image. Experiments show the method performs well.
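The first stage above, classifying pixels as skin by their position in a chromaticity space, can be sketched as follows. This is an illustrative sketch only: the YCbCr bounds below are commonly quoted rough ranges, not the paper's values.

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 RGB-to-YCbCr conversion (full-range, 8-bit)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b):
    # luminance is ignored: skin chromaticity stays stable across brightness,
    # which is exactly the stability the detection step relies on
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173

# toy pixels: a skin tone, a blue, and a dark grey-green
mask = [is_skin(*px) for px in [(220, 170, 140), (30, 90, 200), (45, 50, 40)]]
```

Connected regions of the resulting mask become the candidate face regions passed on to the symmetry stage.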

10.
This paper describes the design goals, architecture, functionality, and implementation techniques of a computer face synthesis system, and gives examples produced by the system.

11.
A survey and outlook of face recognition methods    Cited by: 7 (self-citations: 0, other citations: 7)
This paper surveys the concepts and current state of face recognition research, discusses its key techniques, difficulties, applications, and development prospects, and finally offers our views on several open problems in face recognition research.

12.
This paper describes interactive facilities for simulating abstract muscle actions using Rational Free Form Deformations (RFFD). The particular muscle action is simulated as the displacement of the control points of the control-unit for an RFFD defined on a region of interest. One or several simulated muscle actions constitute a Minimum Perceptible Action (MPA), which is defined as the atomic action unit, similar to Action Unit (AU) of the Facial Action Coding System (FACS), to build an expression.

13.
Considering that different facial parts (eyes, mouth, etc.) contribute differently to face analysis, this paper proposes a face image analysis method based on multi-part sparse coding. First, several facial parts with a large influence on face (expression) analysis are selected. Then, a multi-view sparse coding method learns a dictionary for each part and computes the corresponding sparse codes. Finally, the sparse codes are fed into classifiers (support vector machine and minimum mean square error) for decision. Face (expression) recognition experiments, both with and without occlusion, were carried out on the JAFFE and Yale databases. The results show that multi-part sparse coding adjusts the weights of the parts effectively and outperforms both single-part methods and simple multi-part fusion.

14.
Animation of a B-Spline figure    Cited by: 5 (self-citations: 1, other citations: 4)
In this paper we describe how the use of B-Spline surfaces allows lissom movements of the body and face. Our method is empirical, based on parametric animation. It can be combined with a muscle model for facial animation, as we illustrate for speech.

15.
We present a novel data-driven skinning model—rigidity-aware skinning (RAS) model, for simulating both active and passive 3D facial animation of different identities in real time. Our model builds upon a linear blend skinning (LBS) scheme, where the bone set and skinning weights are shared for diverse identities and learned from the data via a sparse and localized skinning decomposition algorithm. Our model characterizes the animated face into the active expression and the passive deformation: The former is represented by an LBS-based multi-linear model learned from the FaceWareHouse data set, and the latter is represented by a spatially varying as-rigid-as-possible deformation applied to the LBS-based multi-linear model, whose rigidity parameters are learned from the data by a novel rigidity estimation algorithm. Our RAS model is not only generic and expressive for faithfully modelling medium-scale facial deformation, but also compact and lightweight for generating vivid facial animation in real time. We validate the efficiency and effectiveness of our RAS model for real-time 3D facial animation and expression editing.
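The LBS scheme the model above builds on is standard and compact enough to sketch directly: each vertex is transformed by a weighted blend of bone transforms. Bones, weights, and vertices here are toy values, not learned from data as in the paper.

```python
import numpy as np

def lbs(rest_verts, bone_transforms, weights):
    """Linear blend skinning.
    rest_verts: (V, 3); bone_transforms: (B, 4, 4) homogeneous matrices;
    weights: (V, B), each row summing to 1."""
    V = rest_verts.shape[0]
    homo = np.hstack([rest_verts, np.ones((V, 1))])             # (V, 4)
    # apply every bone transform to every vertex
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, homo)  # (B, V, 4)
    # blend the per-bone results with the skinning weights
    blended = np.einsum('vb,bvi->vi', weights, per_bone)        # (V, 4)
    return blended[:, :3]

rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
identity = np.eye(4)
translate_x = np.eye(4)
translate_x[0, 3] = 2.0                      # bone 1 translates by +2 in x
bones = np.stack([identity, translate_x])
w = np.array([[1.0, 0.0],    # vertex 0 follows the static bone
              [0.5, 0.5]])   # vertex 1 blends both bones equally

skinned = lbs(rest, bones, w)
```

In the paper both the bone set and the weight matrix `w` are learned from scans via sparse, localized decomposition; the blend itself is exactly this operation.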

16.
A statistical analysis of shapes of facial surfaces can play an important role in biometric authentication and other face-related applications. The main difficulty in developing such an analysis comes from the lack of a canonical system to represent and compare all facial surfaces. This paper suggests a specific, yet natural, coordinate system on facial surfaces that enables comparisons of their shapes. Here a facial surface is represented as an indexed collection of closed curves, called facial curves, that are level curves of a surface distance function from the tip of the nose. Defining the space of all such representations of faces, this paper studies its differential geometry and endows it with a Riemannian metric. It presents numerical techniques for computing geodesic paths between facial surfaces in that space. This Riemannian framework is then used to: (i) compute distances between faces to quantify differences in their shapes, (ii) find optimal deformations between faces, and (iii) define and compute the average of a given set of faces. Experimental results generated using laser-scanned faces are presented to demonstrate these ideas.
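The indexing of a surface into facial curves can be sketched as binning points by their distance from the nose tip. This is a toy illustration only: a small point list and plain Euclidean distance stand in for a laser-scanned surface and its surface (geodesic) distance function.

```python
import math

nose_tip = (0.0, 0.0, 0.0)
points = [(0.1, 0.0, 0.0), (0.0, 0.35, 0.1), (0.3, 0.2, 0.0),
          (0.0, 0.0, 0.72), (0.5, 0.5, 0.1)]   # toy surface samples

def facial_curves(points, tip, band_width=0.25):
    """Index each point by which distance band (level set) it falls in;
    each band approximates one closed facial curve."""
    curves = {}
    for p in points:
        d = math.dist(p, tip)          # stand-in for surface distance
        level = int(d // band_width)   # curve index
        curves.setdefault(level, []).append(p)
    return curves

curves = facial_curves(points, nose_tip)
```

Comparing two faces then reduces to comparing corresponding curves index by index, which is what the Riemannian framework formalizes.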

17.
We present the first realtime method for generating facial animations enhanced by physical simulation from realtime performance capture data. Unlike purely data‐based techniques, our method is able to produce physical effects on the fly through the simulation of volumetric skin behaviour, lip contacts and sticky lips. It remains however practical as it does not require any physical/medical data which are complex to acquire and process, and instead relies only on the input of a blendshapes model. We achieve realtime performance on the CPU by introducing an efficient progressive Projective Dynamics solver to efficiently solve the physical integration steps even when confronted to constantly changing constraints. Also key to our realtime performance is a new Taylor approximation and memoization scheme for the computation of the Singular Value Decompositions required for the simulation of volumetric skin. We demonstrate the applicability of our method by animating blendshape characters from a simple webcam feed.

18.
A muscle model for MPEG-4 facial animation parameters and sequence rendering    Cited by: 2 (self-citations: 0, other citations: 2)
MPEG-4 defines a special kind of video object, the "face object", which is encoded through Facial Animation Parameters (FAP) and Facial Definition Parameters (FDP) to achieve very-low-bit-rate video coding. Based on a detailed analysis of the syntax and parameter-coding methods of the "face object" bitstream in MPEG-4, and a study of the rendering process in an MPEG-4 decoder, this paper proposes a displacement-controlling muscle model, built on Waters' muscle model (which is parameterized by muscle contraction strength) but better suited to MPEG-4 parameters. With it, natural-expression face video sequences are reconstructed from the FAP and FDP parameters in the MPEG-4 bitstream.

19.
To address the fact that existing methods for generating talking-face video from speech ignore the speaker's head motion, this paper proposes a keypoint-based method for speech-driven talking-face video generation. Facial contour keypoints and lip keypoints represent the speaker's head motion and lip motion, respectively. A parallel multi-branch network maps the input speech to facial keypoints, and the face video is finally generated from the continuous lip-keypoint and head-keypoint sequences together with a template image. Quantitative and qualitative experiments show that the method synthesizes clear, natural talking-face videos with head motion and achieves strong performance metrics.

20.
This paper reports a prototype system for multi-pose face image recognition. Unlike existing systems and methods, it recognizes face images of cooperative subjects under varying pose (in-plane rotation and in-depth rotation, limited to both eyes remaining visible). Because the imaging conditions are relaxed, the system holds promise for identity verification, security, and video conferencing. The paper studies in depth facial feature detection under variable pose, pose estimation, recognition modeling, and template-correlation-based matching, and analyses how changes in illumination, pose, and resolution affect recognition. Experimental results show that on a test set of 30 subjects with 18 images each, a 100% recognition rate was achieved.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)
