Similar Documents
18 similar documents found (search time: 203 ms)
1.
A Parameterized Expression Mapping Method*   Cited by: 2 (self-citations: 0, others: 2)
Synthesizing facial expression details is essential for generating highly realistic facial animation. Traditional expression mapping considers only the positional changes of facial feature points and cannot handle expression details; more recent work synthesizes expression texture details, but only one-to-one from existing expression samples, which sharply limits its applicability. By effectively combining wavelet pyramid decomposition with expression ratio images, this paper proposes a new expression synthesis algorithm. The algorithm can both map expression texture details and generate expression animation, with the degree of exaggeration under parametric control; it is simple and effective.
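The ratio-image transfer described in this abstract can be sketched as follows. This is a minimal single-scale version (the paper additionally uses a wavelet pyramid); the function name and the per-pixel formulation are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def map_expression(src_neutral, src_expr, dst_neutral, alpha=1.0, eps=1e-6):
    """Transfer expression detail via a per-pixel ratio image.

    alpha parameterizes exaggeration: 0 leaves the target unchanged,
    1 applies the full mapped expression, and values > 1 exaggerate it.
    eps guards against division by zero in dark pixels.
    """
    ratio = (src_expr + eps) / (src_neutral + eps)
    return dst_neutral * np.power(ratio, alpha)
```

With `alpha` exposed as a single scalar, the exaggeration level of the mapped expression is directly controllable, matching the parametric control claimed in the abstract.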

2.
Constructing the facial animation definition table for face meshes of arbitrary topology is the key to MPEG-4-based facial animation systems for such models. By searching image features in the 2D texture of a 3D model, this paper proposes a method that automatically locates feature points on a 3D face model of arbitrary topology. Using these 3D feature points to deform a standard face model, and relying on the standard model's animation definition table, the method automatically and accurately constructs the animation definition table for the arbitrary-topology model. Given any 3D face model of arbitrary topology, the method can drive it to animate fully automatically.
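Deforming a standard model so that its feature points land on the located feature points, as this abstract describes, is commonly done with a landmark-driven radial-basis interpolant. The sketch below uses the kernel phi(r) = r; the kernel choice, regularization, and function names are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def rbf_deform(src_pts, dst_pts, vertices, reg=1e-9):
    """Deform mesh vertices so feature points src_pts move to dst_pts.

    Solves for RBF weights at the landmarks, then applies the resulting
    displacement field to every vertex. reg stabilizes the solve.
    """
    # Pairwise landmark distances form the interpolation matrix.
    K = np.linalg.norm(src_pts[:, None] - src_pts[None], axis=-1)
    W = np.linalg.solve(K + reg * np.eye(len(src_pts)), dst_pts - src_pts)
    # Evaluate the displacement field at every mesh vertex.
    Kv = np.linalg.norm(vertices[:, None] - src_pts[None], axis=-1)
    return vertices + Kv @ W
```

By construction the landmarks interpolate their targets (up to the tiny regularization), and all other vertices follow smoothly.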

3.
This paper proposes a new 3D facial animation model that is based on a 3D morphable face model and compatible with MPEG-4. Correspondence between prototype 3D faces is established by uniform mesh resampling, and the MPEG-4 facial animation rules drive the 3D model to generate realistic facial animation automatically. Given a single face image, the model can automatically reconstruct a realistic 3D face and, driven by FAP parameters, automatically generate facial animation.

4.
Wang Zhen. 《电脑与信息技术》 (Computer and Information Technology), 2010, 18(5): 11-12, 37
Rendering wrinkle features is one of the important factors in improving the realism of facial animation. This paper proposes a keyframe-based wrinkle animation method that describes wrinkle keyframes with height maps, normal maps, and MPEG-4 facial animation parameters, and generates in-between frames by interpolating the height and normal maps. The method places low demands on the complexity of the face mesh, and the synthesized wrinkle animation is both highly realistic and real-time.
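The in-betweening step this abstract describes can be sketched as plain linear interpolation of the keyframe maps, with one subtlety: interpolated normals must be renormalized to stay unit-length for shading. Function names are illustrative assumptions.

```python
import numpy as np

def lerp_height(h0, h1, t):
    """Linear in-between of two wrinkle height-map keyframes, t in [0, 1]."""
    return (1.0 - t) * h0 + t * h1

def lerp_normals(n0, n1, t):
    """Interpolate normal maps, then renormalize so shading stays valid."""
    n = (1.0 - t) * n0 + t * n1
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```

Because only two small textures are blended per frame, this style of interpolation is cheap enough for the real-time claim in the abstract.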

5.
A Practical System for Specific Face Customization and Expression Animation Editing   Cited by: 1 (self-citations: 1, others: 1)
This paper presents a simple and effective system for customizing specific faces and editing expression animation. Starting from a generic 3D polygonal face mesh with embedded muscle vectors and orthogonal frontal and side photographs of a specific person, the system uses the snake technique to automatically fit facial feature lines and deforms the generic model into a customized 3D virtual face. A multiresolution spline technique then produces a seamless facial texture mosaic, yielding a highly realistic virtual face of the specific person. Because the muscle vectors are embedded in a parameterized representation, editing the parameters endows the 3D virtual face with a rich variety of expressions. Since the deformed face model's topology is unchanged, triangulation-based image metamorphosis can realize 3D morphs between different specific faces. The system runs on inexpensive PC platforms; it is fast, simple, and realistic, and of considerable practical value.

6.
This paper presents a system for synchronizing facial animation with speech, focusing on coarticulation and the rendering of subtle facial expression details. Given text annotated with emotion tags, the system produces facial animation with the corresponding expressions, synchronized with speech. It can generate highly realistic 3D face models of varying gender, age, and expression characteristics, and subtle expression details (such as forehead wrinkles) change dynamically as the facial expression changes. Based on linguistic theory, the system proposes a set of rules for resolving the coarticulation problem.

7.
This paper proposes a flexible and practical texture mapping method for future handheld mobile devices (such as high-end phones and PDAs). The method requires only a single frontal face photograph as input and does not demand precise alignment between model and texture; simple interaction suffices to extract facial texture under tight resource budgets. An interactive mapping-adjustment scheme lets the user edit feature points in the model and their regions of influence, defining local texture coordinates to obtain a satisfactory mapping. Experimental results show the method is efficient and realistic, and can be used to produce realistic 3D facial expression animation.

8.
Realistic 3D Facial Animation   Cited by: 10 (self-citations: 0, others: 10)
Zhang Qingshan, Chen Guoliang. 《软件学报》 (Journal of Software), 2003, 14(3): 643-650
Constructing and animating realistic 3D face models is an important research topic in computer graphics; simulating facial motion on a 3D model in real time to produce realistic expressions and movements is one of its difficulties. This paper proposes a real-time 3D facial animation method that partitions the face model into several functional regions whose motions are relatively independent, and simulates each region's motion with a hybrid technique combining weighted Dirichlet free-form deformation (DFFD) and rigid-body motion simulation. Control points shared across regions model the mutual influence between neighboring regions' motions. In this method, the face model is driven by moving control points. To simplify driving the model, two high-level driving methods are proposed: one based on MPEG-4 facial animation parameter (FAP) streams and one based on a muscle model. Both achieve high realism with good computational performance and can simulate real facial expressions and movements in real time.

9.
Research on Kinect-Driven Facial Animation Synthesis   Cited by: 1 (self-citations: 0, others: 1)
3D facial animation synthesis applies to virtual reality, character control, and many other areas. This paper therefore proposes a Kinect-based facial animation synthesis method. A face-tracking client uses Kinect to track and recognize the user's facial expressions, obtaining facial animation parameters that it sends over a socket to an animation synthesis server. The server looks up an MPEG-4 facial animation definition table and controls the deformation of the face model, synthesizing in real time a 3D facial animation that matches the user's expression. Experimental results show the method synthesizes high-quality 3D facial animation while meeting real-time requirements; compared with existing techniques it is structurally simple, easy to deploy, and readily extensible.
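The client-server split in this abstract implies serializing per-frame animation parameters onto a socket. A minimal sketch of such a message codec is shown below; the wire format (uint32 frame id, uint16 count, float32 values) is a hypothetical layout for illustration, not the paper's actual protocol.

```python
import struct

def pack_fap_frame(frame_id, faps):
    """Serialize one animation frame for transmission over a socket.

    Hypothetical wire format: little-endian uint32 frame id,
    uint16 parameter count, then count float32 FAP values.
    """
    head = struct.pack("<IH", frame_id, len(faps))
    return head + struct.pack(f"<{len(faps)}f", *faps)

def unpack_fap_frame(msg):
    """Inverse of pack_fap_frame: recover (frame_id, fap_values)."""
    frame_id, n = struct.unpack_from("<IH", msg)
    return frame_id, list(struct.unpack_from(f"<{n}f", msg, 6))
```

A compact binary frame like this keeps per-frame latency low, which matters for the real-time claim; the server side would decode each frame and apply the parameters through its animation definition table.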

10.
To synthesize realistic facial animation, this paper proposes a real-time, single-camera 3D virtual face animation method. First, the user's 3D face model is reconstructed from a single frontal photograph, and face images with varying pose, illumination, and expression are synthesized from this model to train a user-specific local texture model. A camera then captures face video, and the user-specific local texture model tracks the facial feature points. Finally, blendshape coefficients are estimated from the tracking results and the 3D key shapes, and facial animation is synthesized with the blendshape model. Experimental results show the method synthesizes realistic 3D facial animation in real time and requires only an ordinary camera, making it well suited to ordinary users.
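The final step above, estimating blendshape coefficients from tracked points and key shapes, is often posed as a linear least-squares problem. The sketch below shows that formulation; it omits the bounds and temporal smoothing a production tracker would add, and the function names are assumptions.

```python
import numpy as np

def estimate_weights(neutral, key_shapes, tracked):
    """Least-squares blendshape coefficients.

    Minimizes ||neutral + B w - tracked|| where B's columns are the
    per-key-shape offsets from the neutral face. Unconstrained sketch:
    no 0..1 clamping or frame-to-frame smoothing.
    """
    B = np.stack([k - neutral for k in key_shapes], axis=1)
    w, *_ = np.linalg.lstsq(B, tracked - neutral, rcond=None)
    return w
```

With the weights in hand, the animated face is simply `neutral + B @ w`, evaluated per frame from the tracked feature points.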

11.
Synthesizing expressive facial animation is a very challenging topic within the graphics community. In this paper, we present an expressive facial animation synthesis system enabled by automated learning from facial motion capture data. Accurate 3D motions of the markers on the face of a human subject are captured while he/she recites a predesigned corpus, with specific spoken and visual expressions. We present a novel motion capture mining technique that "learns" speech coarticulation models for diphones and triphones from the recorded data. A phoneme-independent expression eigenspace (PIEES) that encloses the dynamic expression signals is constructed by motion signal processing (phoneme-based time-warping and subtraction) and principal component analysis (PCA) reduction. New expressive facial animations are synthesized as follows: First, the learned coarticulation models are concatenated to synthesize neutral visual speech according to novel speech input, then a texture-synthesis-based approach is used to generate a novel dynamic expression signal from the PIEES model, and finally the synthesized expression signal is blended with the synthesized neutral visual speech to create the final expressive facial animation. Our experiments demonstrate that the system can effectively synthesize realistic expressive facial animation.
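The PCA-reduction step that builds the PIEES can be sketched as an SVD of the mean-centered, time-aligned expression signals. This is a generic PCA eigenspace, under the assumption that rows are aligned frames and columns are marker coordinates; function names are illustrative.

```python
import numpy as np

def build_eigenspace(signals, k):
    """PCA eigenspace of expression signals (rows = aligned frames).

    Returns the mean signal and the top-k principal directions.
    """
    mean = signals.mean(axis=0)
    _, _, Vt = np.linalg.svd(signals - mean, full_matrices=False)
    return mean, Vt[:k]

def encode(x, mean, basis):
    """Project a signal into the k-dimensional eigenspace."""
    return (x - mean) @ basis.T

def decode(c, mean, basis):
    """Reconstruct a signal from its eigenspace coefficients."""
    return mean + c @ basis
```

New expression signals live in this low-dimensional space, which is what makes the texture-synthesis stage over expression signals tractable.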

12.
The automatic mobile 3D animation system aims to take a single text message as input, generate an animation matching its content, and send it to the recipient. In such animations, facial expressions are important for conveying the emotional theme and strengthening the animation's effect. This paper focuses on automatically generating character expressions in mobile 3D animation, in two stages: qualitative planning and quantitative computation. The qualitative planning stage uses Semantic Web technology to build an expression ontology, with accompanying axioms, based on the Facial Action Coding System and the emotion wheel model; knowledge reasoning over the key information in the text message then yields a qualitative description of the expression. The quantitative stage converts this qualitative description into concrete animation data and handles issues such as smooth expression transitions. On 270 experimental messages, excluding anomalies caused by human error, the expression planning success rate was 71.2%, and the generation rate for diverse expressions was 86.89%. The experiments show the method generates expression animation well.

13.
Three-dimensional (3D) cartoon facial animation is one step further than the challenging 3D caricaturing which generates 3D still caricatures only. In this paper, a 3D cartoon facial animation system is developed for a subject given only a single frontal face image of a neutral expression. The system is composed of three steps consisting of 3D cartoon face exaggeration, texture processing, and 3D cartoon facial animation. By following caricaturing rules of artists, instead of mathematical formulations, 3D cartoon face exaggeration is accomplished at both global and local levels. As a result, the final exaggeration is capable of depicting the characteristics of an input face while achieving artistic deformations. In the texture processing step, texture coordinates of the vertices of the cartoon face model are obtained by mapping the parameterized grid of the standard face model to a cartoon face template and aligning the input face to the face template. Finally, 3D cartoon facial animation is implemented in the MPEG-4 animation framework. In order to avoid time-consuming construction of a face animation table, we propose to utilize the tables of existing models through model mapping. Experimental results demonstrate the effectiveness and efficiency of our proposed system.

14.
High-quality animation of 2D steady vector fields   Cited by: 2 (self-citations: 0, others: 2)
Simulators for dynamic systems are now widely used in various application areas and raise the need for effective and accurate flow visualization techniques. Animation allows us to depict direction, orientation, and velocity of a vector field accurately. We extend a former proposal for a new approach to produce perfectly cyclic and variable-speed animations for 2D steady vector fields [B. Jobard, et al., (1997)] and [C. Chedot, et al., (1998)]. A complete animation of an arbitrary number of frames is encoded in a single image. The animation can be played using the color table animation technique, which is very effective even on low-end workstations. A cyclic set of textures can be produced as well and then encoded in a common animation format or used for texture mapping on 3D objects. As compared to other approaches, the method presented produces smoother animations and is more effective, both in memory requirements to store the animation, and in computation time.
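The color table animation technique mentioned above plays the entire animation without touching the encoded image: each tick simply rotates the color lookup table. A minimal sketch (function name assumed):

```python
def shift_color_table(palette, step):
    """One animation tick: rotate the color lookup table by `step` entries.

    The indexed image itself never changes; playback is just repeated
    palette rotations, so even low-end hardware can animate smoothly.
    """
    step %= len(palette)
    return palette[step:] + palette[:step]
```

Because a full cycle of rotations returns the palette to its starting state, the resulting animation is perfectly cyclic, as the abstract claims.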

15.
Synthesizing a 3D Face from Frontal and Side Photographs   Cited by: 6 (self-citations: 1, others: 5)
We implemented an interactive tool for face modeling and animation: from frontal and side photographs of a person, the user can construct a 3D head model and, based on this model, realize specific expressions and simple animations. The paper details the techniques applied in building the system: facial geometric representation, deformation from a generic face to a specific face, elastic meshes, muscle models, full-view texture maps, and expression extraction.

16.
Three-dimensional computer animation often struggles to compete with the flexibility and expressiveness commonly found in traditional animation, particularly when rendered non-photorealistically. We present an animation tool that takes skeleton-driven 3D computer animations and generates expressive deformations to the character geometry. The technique is based upon the cartooning and animation concepts of “lines of action” and “lines of motion” and automatically infuses computer animations with some of the expressiveness displayed by traditional animation. Motion and pose-based expressive deformations are generated from the motion data and the character geometry is warped along each limb’s individual line of motion. The effect of this subtle, yet significant, warping is twofold: geometric inter-frame consistency is increased which helps create visually smoother animated sequences, and the warped geometry provides a novel solution to the problem of implied motion in non-photorealistic imagery. Object-space and image-space versions of the algorithm have been implemented and are presented.

17.
We present a lightweight non-parametric method to generate wrinkles for 3D facial modeling and animation. The key lightweight feature of the method is that it can generate plausible wrinkles using a single low-cost Kinect camera and one high quality 3D face model with details as the example. Our method works in two stages: (1) offline personalized wrinkled blendshape construction. User-specific expressions are recorded using the RGB-Depth camera, and the wrinkles are generated through example-based synthesis of geometric details. (2) Online 3D facial performance capturing. These reconstructed expressions are used as blendshapes to capture facial animations in real-time. Experiments on a variety of facial performance videos show that our method can produce plausible results, approximating the wrinkles in an accurate way. Furthermore, our technique is low-cost and convenient for common users.

18.

Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)    京ICP备09084417号-23
