Similar Documents
Found 20 similar documents (search time: 218 ms)
1.
Oftentimes facial animation is created separately from overall body motion. Since convincing facial animation is challenging enough in itself, artists tend to create and edit the face motion in isolation. Or, if the face animation is derived from motion capture, it is typically performed in a mo-cap booth while sitting relatively still. In either case, recombining the isolated face animation with body and head motion is non-trivial and often produces an uncanny result if the body dynamics are not properly reflected on the face (e.g. the bouncing of facial tissue when running). We tackle this problem by introducing a simple and intuitive system that allows artists to add physics to facial blendshape animation. Unlike previous methods that try to add physics to face rigs, our method preserves the original facial animation as closely as possible. To this end, we present a novel simulation framework that uses the original animation as per-frame rest poses without adding spurious forces. As a result, in the absence of any external forces or rigid head motion, the facial performance exactly matches the artist-created blendshape animation. In addition, we propose the concept of blendmaterials to give artists an intuitive means to account for changing material properties due to muscle activation. The system automatically combines facial animation and head motion so that they are consistent, while preserving the original animation as closely as possible. It is easy to use and readily integrates with existing animation pipelines.
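The per-frame rest-pose idea can be illustrated with a minimal sketch (all names and constants below are hypothetical; the paper's actual solver is not specified here): each vertex is pulled toward the current animation frame, so with zero external force and no head motion the simulation reproduces the blendshape animation exactly.

```python
import numpy as np

def simulate_step(x, v, rest_pose, head_accel, dt, k=200.0, c=10.0, m=1.0):
    """Semi-implicit Euler step pulling each vertex toward the per-frame
    rest pose (the artist's blendshape frame for this time step).

    x, v       : (N, 3) current vertex positions / velocities
    rest_pose  : (N, 3) blendshape animation frame for this time step
    head_accel : (3,) rigid head acceleration, felt as an inertial force
    """
    f_spring = -k * (x - rest_pose)   # restore toward the animated pose
    f_damp   = -c * v                 # velocity damping
    f_inert  = -m * head_accel        # pseudo-force from rigid head motion
    v = v + dt * (f_spring + f_damp + f_inert) / m
    x = x + dt * v
    return x, v
```

Starting from the rest pose with zero velocity and no head acceleration, all forces vanish and the output equals the input animation, mirroring the "no spurious forces" property the abstract describes.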

2.
In this paper we present a new paradigm for the generation and retargeting of facial animation. Like the vast majority of approaches that have addressed these topics, our formalism is built on blendshapes. However, where prior works have generally encoded facial geometry using a low-dimensional basis of these blendshapes, we propose to encode facial dynamics by looking at blendshapes as a basis of forces rather than a basis of shapes. We develop this idea into a dynamic model that naturally combines the blendshapes paradigm with physics-based techniques for the simulation of deforming meshes. Because it escapes the linear span of the shape basis through time-integration and physics-inspired simulation, this approach has a wider expressive range than previous blendshape-based methods. Its inherently physically-based formulation also enables the simulation of more advanced physical interactions, such as collision responses on lip contacts.
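One plausible reading of the force-basis idea, as a minimal sketch (the weights scale forces rather than directly setting positions, so the dynamics can carry the mesh outside the linear shape span; all names are hypothetical):

```python
import numpy as np

def force_basis_step(x, v, neutral, targets, weights, dt, k=50.0, c=5.0):
    """One explicit integration step where blendshape weights drive
    forces instead of shapes.

    x, v    : (N, 3) simulated positions / velocities
    neutral : (N, 3) neutral face
    targets : (B, N, 3) blendshape targets
    weights : (B,) current animation weights
    """
    # The weighted target displacements define where the forces pull;
    # inertia and damping let the mesh overshoot or lag, i.e. leave
    # the linear span of the shape basis.
    goal = neutral + np.einsum('b,bnd->nd', weights, targets - neutral)
    f = k * (goal - x) - c * v
    v = v + dt * f
    x = x + dt * v
    return x, v
```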

3.
Blendshapes are the most commonly used approach to realistic facial animation in production. A blendshape model typically begins with a relatively small number of blendshape targets reflecting major muscles or expressions. However, the majority of the effort in constructing a production-quality model occurs in the subsequent addition of targets needed to reproduce various subtle expressions and to correct for the effects of various shapes in combination. To make this subsequent modeling process much more efficient, we present a novel editing method that removes the need for much of the iterative trial-and-error decomposition of an expression into targets. Isolated problematic frames of an animation are re-sculpted as desired and used as training for a nonparametric regression that associates these shapes with the underlying blendshape weights. Using this technique, the artist's correction to a problematic expression is automatically applied to similar expressions in an entire sequence, and indeed to all future sequences. The extent and falloff of editing are controllable, and the effect is continuously propagated to all similar expressions. In addition, we present a search scheme that allows effective reuse of pre-sculpted editing examples. Our system greatly reduces the time and effort required by animators to create high-quality facial animations.
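A minimal sketch of such a nonparametric regression, assuming a Gaussian-kernel (Nadaraya-Watson style) smoother over blendshape-weight space; the paper's exact regressor and falloff control are not specified here, and all names are hypothetical:

```python
import numpy as np

def rbf_correction(w, train_weights, train_deltas, sigma=0.3):
    """Kernel regression in blendshape-weight space: a sculpted
    correction is propagated to expressions whose weight vectors are
    similar, with falloff controlled by sigma.

    w             : (B,) weights of the frame being evaluated
    train_weights : (M, B) weights of the re-sculpted training frames
    train_deltas  : (M, N, 3) per-vertex corrections sculpted by the artist
    """
    d2 = np.sum((train_weights - w) ** 2, axis=1)   # squared distances (M,)
    k = np.exp(-d2 / (2.0 * sigma ** 2))            # kernel values
    if k.sum() < 1e-8:                              # far from all examples
        return np.zeros(train_deltas.shape[1:])
    k /= k.sum()
    return np.einsum('m,mnd->nd', k, train_deltas)  # blended correction
```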

4.
5.
Facial Expression Image Warping Based on MPEG-4 (cited by: 1; self-citations: 0, external: 1)
To generate natural and realistic facial expressions in real time, this paper proposes a facial-expression image warping method based on the MPEG-4 facial animation framework. The method first extracts 88 feature points from a face photograph using a face alignment tool. A standard face mesh is then registered and deformed against these points to produce a triangular mesh for the specific face. Next, the facial animation parameters (FAPs) move the corresponding key facial feature points and their neighboring associated points, while keeping the topology of the face triangle mesh unchanged under the combined action of multiple FAPs. Finally, every deformed triangle region is filled with facial texture via an affine transformation, producing the facial expression image defined by the FAPs. The input to the method is a neutral face photograph and a set of facial animation parameters; the output is the corresponding facial expression image. To synthesize subtle expression motions and a virtual talking head, an algorithm for generating eye-gaze movements and inner-mouth texture detail is also designed. A subjective evaluation based on a 5-point mean opinion score (MOS) shows that expression images generated by this warping method score 3.67 for naturalness. Experiments on virtual talking-head synthesis show that the method runs in real time, averaging 66.67 fps on an ordinary PC, making it suitable for real-time video processing and facial animation generation.
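The affine texture fill for a single deformed triangle can be sketched as follows (hypothetical helper names; this is only the standard per-triangle affine warp, not the paper's full pipeline):

```python
import numpy as np

def triangle_affine(src_tri, dst_tri):
    """Affine map carrying a source texture triangle onto its
    FAP-deformed position; applying it per deformed triangle fills the
    warped face region with the original texture.

    src_tri, dst_tri : (3, 2) 2D vertices before / after FAP displacement
    """
    # Solve [x y 1] @ A = [x' y'] for the 3x2 affine matrix A.
    src = np.hstack([src_tri, np.ones((3, 1))])   # (3, 3)
    return np.linalg.solve(src, dst_tri)          # (3, 2)

def warp_point(p, A):
    """Map one 2D texture point through the per-triangle affine transform."""
    return np.append(p, 1.0) @ A
```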

6.
This paper describes a novel real-time end-to-end system for facial expression transformation, without the need for any driving source. Its core idea is to directly generate desired and photo-realistic facial expressions on top of an input monocular RGB video. Specifically, an unpaired learning framework is developed to learn the mapping between any two facial expressions in the facial blendshape space. The system then automatically transforms the source expression in an input video clip to a specified target expression through the combination of automated 3D face construction, the learned bi-directional expression mapping, and automated lip correction. It can be applied to new users without additional training. Its effectiveness is demonstrated through many experiments on faces from live and online video, with different identities, ages, speech content and expressions.
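The unpaired-mapping objective in blendshape-weight space might take the general shape of a cycle-consistency term, sketched below with hypothetical callables g_ab and g_ba; the paper's actual learning framework is not detailed in the abstract:

```python
import numpy as np

def cycle_consistency_loss(g_ab, g_ba, wa, wb):
    """Unpaired-mapping objective: mapping a blendshape-weight vector to
    the other expression and back should recover it.

    g_ab, g_ba : callables mapping (B,) weight vectors between the two
                 expressions (e.g. neutral <-> smile)
    wa, wb     : (M, B) unpaired weight samples of the two expressions
    """
    loss = 0.0
    for w in wa:
        loss += np.sum((g_ba(g_ab(w)) - w) ** 2)   # A -> B -> A
    for w in wb:
        loss += np.sum((g_ab(g_ba(w)) - w) ** 2)   # B -> A -> B
    return loss / (len(wa) + len(wb))
```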

7.
We present a lightweight non-parametric method to generate wrinkles for 3D facial modeling and animation. The key lightweight feature of the method is that it can generate plausible wrinkles using a single low-cost Kinect camera and one high-quality, detailed 3D face model as the example. Our method works in two stages: (1) offline personalized wrinkled-blendshape construction, where user-specific expressions are recorded using the RGB-Depth camera and the wrinkles are generated through example-based synthesis of geometric details; (2) online 3D facial performance capture, where these reconstructed expressions are used as blendshapes to capture facial animations in real time. Experiments on a variety of facial performance videos show that our method produces plausible results that approximate wrinkles accurately. Furthermore, our technique is low-cost and convenient for common users.
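Example-based detail synthesis of this kind can be sketched as a displacement transfer along vertex normals (assuming meshes in full correspondence; the names and the smoothing step are assumptions, not the paper's actual formulation):

```python
import numpy as np

def transfer_wrinkles(target, example_smooth, example_detailed, normals, alpha=1.0):
    """Copy the geometric detail that a high-quality example adds over
    its smoothed version onto a smooth, user-specific expression.

    target           : (N, 3) smooth user expression (e.g. from Kinect)
    example_smooth   : (N, 3) smoothed high-quality example, in correspondence
    example_detailed : (N, 3) high-quality example with wrinkles
    normals          : (N, 3) unit vertex normals of the target
    """
    # Scalar wrinkle displacement measured along the surface normal.
    d = np.sum((example_detailed - example_smooth) * normals, axis=1)
    return target + alpha * d[:, None] * normals
```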

8.
This paper presents new methods for efficient modeling and animation of a hierarchical facial model that conforms to human face anatomy, for realistic and fast 3D facial expression synthesis. The facial model has a skin-muscle-skull structure. The deformable skin model directly simulates the nonlinear visco-elastic behavior of soft tissue and effectively prevents model collapse. The construction of facial muscles is achieved using an efficient muscle-mapping approach. Based on a cylindrical projection of the texture-mapped facial surface and wire-frame skin and skull meshes, this approach ensures that different muscles are located at the anatomically correct positions between the skin and skull layers. For computational efficiency, we devise an adaptive simulation algorithm which uses either a semi-implicit integration scheme or a quasi-static solver to compute the relaxation, traversing the designed data structures in breadth-first order. The algorithm runs in real time and has successfully synthesized realistic facial expressions. ACM CSS: I.3.5 Computer Graphics: Computational Geometry and Object Modelling—physically based modelling; I.3.7 Computer Graphics: Three-Dimensional Graphics and Realism—animation
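The adaptive choice between a semi-implicit dynamic step and a quasi-static relaxation might look like the following sketch (hypothetical interface; the paper's breadth-first traversal and data structures are omitted):

```python
import numpy as np

def adaptive_relax(x, v, f_ext, internal_force, dt, m=1.0, dynamic=True):
    """Adaptive update: take a semi-implicit Euler step where dynamics
    matter, or a quasi-static step (velocity discarded) where the
    tissue has settled.

    x, v           : (N, 3) node positions / velocities
    f_ext          : (N, 3) external (muscle) forces
    internal_force : callable returning the visco-elastic force at x
    """
    f = internal_force(x) + f_ext
    if dynamic:
        v = v + dt * f / m          # semi-implicit: new velocity first...
        x = x + dt * v              # ...then advance positions with it
    else:
        x = x + (dt * dt / m) * f   # quasi-static: inertialess relaxation
        v = np.zeros_like(v)
    return x, v
```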

9.
Reanimating Faces in Images and Video (cited by: 8; self-citations: 0, external: 8)

10.
We propose a design framework to assist with user-generated content in facial animation, without requiring any animation experience or ground-truth reference. Where conventional prototyping methods rely on handcrafting by experienced animators, our approach encodes the role of the animator as an Evolutionary Algorithm acting on animation controls, driven by visual feedback from a user. Presented as a simple interface, users sample control combinations and select favourable results to influence later sampling. Over multiple iterations of disregarding unfavourable control values, the parameters converge towards the user's ideal. We demonstrate our framework through two non-trivial applications: creating highly nuanced expressions by evolving the control values of a face rig, and non-linear motion by evolving the control point positions of animation curves.
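A minimal sketch of one user-in-the-loop generation step, assuming controls normalized to [0, 1] and Gaussian mutation (both assumptions; the paper's exact evolutionary operators are not given in the abstract):

```python
import numpy as np

def next_generation(population, selected_idx, sigma=0.1, rng=None):
    """One generation: the samples the user favoured are kept as parents
    and the next population is resampled near them, so control values
    drift toward the user's ideal over iterations.

    population   : (P, C) rows of animation-control values in [0, 1]
    selected_idx : indices of the samples the user marked as favourable
    """
    rng = rng or np.random.default_rng()
    parents = population[selected_idx]
    children = []
    for _ in range(len(population)):
        p = parents[rng.integers(len(parents))]               # pick a favourite
        children.append(np.clip(p + rng.normal(0.0, sigma, p.shape), 0.0, 1.0))
    return np.asarray(children)
```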

11.
You, Xiangyu; Tian, F.; Tang, W. Multimedia Tools and Applications, 2019, 78(18): 25569–25590
Adding physics to facial blendshape animation is an active research topic. Existing physics-based approaches to facial blendshape animation are numerical, so...

12.
Facial expressions are one of the most intuitive ways of expressing emotions, and can facilitate human-computer interaction by enabling users to communicate with computers in more natural ways. In addition, hairstyle can be designed to further enhance the expression of emotions. To visualize emotions from multiple aspects for completeness, we propose a realistic visual emotion synthesis system based on the combination of facial expression and hairstyle. First, the facial expression is synthesized using an anatomical model and a parameterized model. Second, a cartoonish hairstyle is synthesized to describe emotion implicitly, using a mass-spring model and a cantilever-beam model. Finally, the synthesis results for facial expression and hairstyle are combined to produce a complete visual emotion synthesis result. Experimental results demonstrate that the proposed system can synthesize realistic animation, and that combining face and hair expresses emotion more effectively than face or hair alone.
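The mass-spring component for a single hair strand can be sketched as below (explicit integration with the root pinned to the scalp; the cantilever-beam model is not covered, and all constants are assumptions):

```python
import numpy as np

def strand_step(x, v, dt, rest_len, k=100.0, c=2.0, g=np.array([0.0, -9.8, 0.0])):
    """Mass-spring step for one hair strand: consecutive points are
    linked by springs and the root (index 0) stays pinned.

    x, v     : (N, 3) strand point positions / velocities, root first
    rest_len : scalar rest length of each segment
    """
    f = g - c * v                                   # gravity + damping
    seg = x[1:] - x[:-1]                            # segment vectors
    length = np.linalg.norm(seg, axis=1, keepdims=True)
    fs = k * (length - rest_len) * seg / np.maximum(length, 1e-8)
    f[:-1] += fs                                    # spring pulls upper point...
    f[1:] -= fs                                     # ...and lower point together
    root = x[0].copy()
    v = v + dt * f
    x = x + dt * v
    x[0], v[0] = root, 0.0                          # keep the root pinned
    return x, v
```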

13.
Synthesizing expressive facial animation is a very challenging topic within the graphics community. In this paper, we present an expressive facial animation synthesis system enabled by automated learning from facial motion capture data. Accurate 3D motions of the markers on the face of a human subject are captured while he/she recites a predesigned corpus, with specific spoken and visual expressions. We present a novel motion-capture mining technique that "learns" speech coarticulation models for diphones and triphones from the recorded data. A phoneme-independent expression eigenspace (PIEES) that encloses the dynamic expression signals is constructed by motion signal processing (phoneme-based time-warping and subtraction) and principal component analysis (PCA) reduction. New expressive facial animations are synthesized as follows: first, the learned coarticulation models are concatenated to synthesize neutral visual speech according to novel speech input; then a texture-synthesis-based approach is used to generate a novel dynamic expression signal from the PIEES model; and finally the synthesized expression signal is blended with the synthesized neutral visual speech to create the final expressive facial animation. Our experiments demonstrate that the system can effectively synthesize realistic expressive facial animation.
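Constructing such an expression eigenspace might be sketched as follows, assuming the expressive and neutral signals are already phoneme-aligned by time-warping (function and variable names are hypothetical):

```python
import numpy as np

def build_piees(expressive, neutral, n_components=10):
    """Sketch of a phoneme-independent expression eigenspace: subtract
    the time-aligned neutral motion from the expressive motion, then
    PCA-reduce the residual expression signals.

    expressive, neutral : (F, D) time-warped marker-motion frames
    """
    residual = expressive - neutral                  # isolate expression signal
    mean = residual.mean(axis=0)
    centered = residual - mean
    # SVD gives the principal components of the expression residuals.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                        # (n_components, D)
    coeffs = centered @ basis.T                      # per-frame coordinates
    return mean, basis, coeffs
```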

14.
In Visual Effects, the creation of realistic facial performances is still a challenge that the industry is trying to overcome. Blendshape deformation is used to reproduce the action of different groups of muscles, which produces realistic static results. However, this is not sufficient to generate believable and detailed facial performances of animated digital characters. To increase the realism of facial performances, it is possible to enhance standard facial rigs using physical simulation approaches. However, setting up a simulation rig and controlling material properties according to the performance is not an easy task and could take a lot of time and iterations to get it right. We present a workflow that allows us to generate an activation map for the fibres of a set of superficial patches we call pseudo-muscles. The pseudo-muscles are automatically identified using k-means to cluster the data from the blendshape targets in the animation rig and compute the direction of their contraction (direction of the pseudo-muscle fibres). We use an Extended Position-Based Dynamics solver to add physical simulation to the facial animation, controlling the behaviour of simulation through the activation map. We show the results achieved using the proposed solution on two digital humans and one fantastic cartoon character, demonstrating that the identified pseudo-muscles approximate facial anatomy and the simulation properties are properly controlled, increasing the realism while preserving the work of animators.
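A minimal sketch of the pseudo-muscle identification step, using plain k-means over per-vertex displacement features and taking each cluster's mean displacement as its fibre direction (the feature choice and the coupling to the XPBD solver are assumptions, not the paper's exact method):

```python
import numpy as np

def pseudo_muscles(targets, neutral, k=8, iters=20, rng=None):
    """Cluster vertices by how the blendshape targets displace them;
    each cluster is a pseudo-muscle with a fibre direction.

    targets : (B, N, 3) blendshape targets; neutral : (N, 3)
    """
    rng = rng or np.random.default_rng(0)
    disp = targets - neutral                                 # (B, N, 3)
    feat = disp.transpose(1, 0, 2).reshape(len(neutral), -1)  # (N, 3B) per-vertex
    centers = feat[rng.choice(len(feat), k, replace=False)]
    for _ in range(iters):                                   # plain k-means
        d = np.linalg.norm(feat[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = feat[labels == j].mean(axis=0)
    # Fibre direction per cluster: mean displacement over all targets.
    mean_disp = disp.mean(axis=0)                            # (N, 3)
    fibres = []
    for j in range(k):
        sel = labels == j
        fibres.append(mean_disp[sel].mean(axis=0) if np.any(sel) else np.zeros(3))
    fibres = np.asarray(fibres)
    fibres /= np.maximum(np.linalg.norm(fibres, axis=1, keepdims=True), 1e-8)
    return labels, fibres
```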

15.
This paper presents a parametric performance-based model for facial animation inspired by the Facial Action Coding System (FACS) developed by P. Ekman and W. V. Friesen. FACS consists of 44 Action Units (AUs) corresponding to visible changes on the face. Additionally, predefined co-occurrence rules describe how different AUs influence each other. In our model, each facial animation parameter corresponds to one of the AUs defined in FACS. The implementation of the model is completed with methods for accumulating the displacements of separate AUs, and a fuzzy-logic adaptation of the co-occurrence rules from FACS. We also describe a method for adapting our model to a specific person.
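Accumulating AU displacements with co-occurrence attenuation might be sketched as below; the multiplicative damping term is a stand-in for the paper's fuzzy-logic adaptation of the FACS rules, and all names are hypothetical:

```python
import numpy as np

def apply_aus(neutral, au_displacements, intensities, cooccurrence=None):
    """Accumulate Action Unit displacements: each active AU contributes
    its displacement field scaled by intensity, optionally attenuated by
    pairwise co-occurrence factors.

    neutral          : (N, 3) neutral face
    au_displacements : dict au_id -> (N, 3) displacement field
    intensities      : dict au_id -> activation in [0, 1]
    cooccurrence     : dict (au_i, au_j) -> damping factor in [0, 1]
    """
    face = neutral.copy()
    for au, w in intensities.items():
        scale = w
        if cooccurrence:
            for other, w2 in intensities.items():
                if other != au:
                    # Attenuate this AU by how strongly co-active AUs inhibit it.
                    scale *= 1.0 - cooccurrence.get((other, au), 0.0) * w2
        face += scale * au_displacements[au]
    return face
```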

16.
We present a new real-time approach to simulating deformable objects using a learnt statistical model to achieve a high degree of realism. Our approach improves upon state-of-the-art interactive shape-matching meshless simulation methods by capturing not only important nuances of an object's kinematics but also its dynamic texture variation. We achieve this in an automated pipeline from data capture to simulation. Our system allows for the capture of idiosyncratic characteristics of an object's dynamics, which for many simulations (e.g. facial animation) is essential. We allow for the plausible simulation of mechanically complex objects without knowledge of their inner workings. The main idea of our approach is to use a flexible statistical model to achieve a geometrically-driven simulation that allows for arbitrarily complex yet easily learned deformations, while at the same time preserving the desirable properties (stability, speed and memory efficiency) of current shape-matching simulation systems. The principal advantage of our approach is the ease with which a pseudo-mechanical model can be learned from 3D scanner data to yield realistic animation. We present examples of non-trivial biomechanical objects simulated on a desktop machine in real time, demonstrating superior realism over current geometrically motivated simulation techniques.
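The shape-matching baseline the paper improves on can be sketched with the classic rigid shape-matching step (best-fit rotation via SVD, then a pull toward the matched goal positions); the learned statistical extension is not shown, and the constants are assumptions:

```python
import numpy as np

def shape_match_step(x, v, rest, dt, alpha=0.5):
    """One step of meshless shape matching: find the best rigid fit of
    the rest shape to the current points and pull each point toward its
    goal position.

    x, v  : (N, 3) positions / velocities; rest : (N, 3) rest shape
    alpha : stiffness of the pull toward the matched shape, in (0, 1]
    """
    cx, cr = x.mean(axis=0), rest.mean(axis=0)
    p, q = x - cx, rest - cr
    a = p.T @ q                                   # covariance of the fit
    u, _, vt = np.linalg.svd(a)
    if np.linalg.det(u @ vt) < 0:                 # avoid reflections
        u[:, -1] *= -1
    r = u @ vt                                    # optimal rotation
    goal = cx + q @ r.T                           # rigidly matched rest shape
    v = v + (alpha / dt) * (goal - x)
    x = x + dt * v
    return x, v
```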

17.
Synthesizing 3D Faces from Frontal and Profile Photographs (cited by: 6; self-citations: 1, external: 5)
We implement an interactive tool for facial modeling and animation. From frontal and profile photographs of a person, a user can construct a 3D head model and, based on this model, produce specific expressions and simple animations. We describe in detail the techniques applied in implementing the system: facial geometric representation, deformation from a generic face to a specific face, elastic meshes, muscle models, full-view texture mapping, and expression extraction.

18.
This paper presents a novel data-driven expressive speech animation synthesis system with phoneme-level controls. The system is based on a pre-recorded facial motion capture database in which an actress was directed to recite a pre-designed corpus with four facial expressions (neutral, happiness, anger and sadness). Given new phoneme-aligned expressive speech and its emotion modifiers as inputs, a constrained dynamic programming algorithm searches for best-matched captured motion clips from the processed facial motion database by minimizing a cost function. Users optionally specify "hard constraints" (motion-node constraints for expressing phoneme utterances) and "soft constraints" (emotion modifiers) to guide this search process. We also introduce a phoneme-Isomap interface for visualizing and interacting with phoneme clusters that are typically composed of thousands of facial motion capture frames. On top of this novel visualization interface, users can conveniently remove contaminated motion subsequences from a large facial motion dataset. Facial animation synthesis experiments and objective comparisons between synthesized and captured facial motion show that this system is effective for producing realistic expressive speech animations.
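The constrained dynamic-programming search can be sketched as a Viterbi-style pass over per-phoneme candidate clips (hypothetical cost callables; hard constraints would simply prune a slot's candidate list before the search):

```python
import numpy as np

def search_clips(candidates, match_cost, transition_cost):
    """Pick one captured motion clip per phoneme slot, minimizing data
    cost plus transition smoothness cost.

    candidates      : list of per-slot candidate clip lists
    match_cost      : callable(slot, clip) -> float
    transition_cost : callable(prev_clip, clip) -> float
    """
    T = len(candidates)
    cost = [np.array([match_cost(0, c) for c in candidates[0]])]
    back = []
    for t in range(1, T):
        prev = cost[-1]
        row, ptr = [], []
        for c in candidates[t]:
            total = prev + np.array([transition_cost(p, c) for p in candidates[t - 1]])
            j = int(total.argmin())            # best predecessor for this clip
            row.append(total[j] + match_cost(t, c))
            ptr.append(j)
        cost.append(np.asarray(row))
        back.append(ptr)
    # Trace back the minimal-cost path of clip indices.
    path = [int(cost[-1].argmin())]
    for t in range(T - 2, -1, -1):
        path.append(back[t][path[-1]])
    return path[::-1]
```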

19.
Caricature is an art form that expresses exaggerated views of persons and things through drawing. Face caricature is popular and widely used in different applications. To create one, we have to properly extract the unique, specialized features of a person's face. A person's facial features depend not only on his/her natural appearance but also on the associated expression style. Therefore, we would like to extract the neutral facial features and the personal expression style for different applications. In this paper, we represent the 3D neutral face models in the BU-3DFE database by sparse signal decomposition in the training phase. With this decomposition, the sparse training data can be used for robust linear subspace modeling of public faces. For an input 3D face model, we fit the model and decompose the 3D geometry into a neutral face and an expression deformation. The neutral geometry can be further decomposed into a public face and individualized facial features. We exaggerate the facial features and the expression by estimating their probability on the corresponding manifold. The public face, the exaggerated facial features and the exaggerated expression are combined to synthesize a 3D caricature for a 3D face model. The proposed algorithm is automatic and effectively extracts the individualized facial features from an input 3D face model to create a 3D face caricature.
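The exaggeration step might be sketched as scaling the deviation of the individualized geometry from the public (mean) face; the manifold-probability estimation that sets the gains in the paper is replaced here by fixed hypothetical factors:

```python
import numpy as np

def exaggerate(face, public_mean, feature_gain=1.8, expression=None,
               expression_gain=1.5):
    """Caricature sketch: the individualized feature is the deviation of
    the neutral geometry from the public face; exaggeration scales that
    deviation (and optionally the expression deformation) before
    recombining.

    face        : (N, 3) fitted neutral geometry
    public_mean : (N, 3) public face from the linear subspace model
    expression  : optional (N, 3) expression deformation offsets
    """
    feature = face - public_mean                    # individualized feature
    out = public_mean + feature_gain * feature      # exaggerated identity
    if expression is not None:
        out = out + expression_gain * expression    # exaggerated expression
    return out
```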

20.
This paper provides a comprehensive survey of techniques for human facial modeling and animation. The survey is carried out from two perspectives: facial modeling, which concerns how to produce 3D face models, and facial animation, which concerns how to synthesize dynamic facial expressions. To generate an individual face model, we can either individualize a generic model or combine face models from an existing face collection. With respect to facial animation, we further categorize the techniques into simulation-based, performance-driven and shape-blend-based approaches. The strengths and weaknesses of the techniques within each category are discussed, along with their applications. In addition, a brief historical review of the technique evolution is provided, limitations and future trends are discussed, and conclusions are drawn at the end of the paper.
