Similar Documents
 20 similar documents found; search took 671 ms.
1.
We introduce “Crowd Sculpting”: a method to interactively design populated environments by using intuitive deformation gestures to drive both the spatial coverage and the temporal sequencing of a crowd motion. Our approach assembles large environments from sets of spatial elements which contain inter-connectible, periodic crowd animations. Such a “Crowd Patches” approach allows us to avoid expensive and difficult-to-control simulations. It also overcomes the limitations of motion editing, which would result in animations delimited in space and time. Our novel method allows the user to control the crowd patches layout in ways inspired by elastic shape sculpting: the user creates and tunes the desired populated environment through stretching, bending, cutting and merging gestures, applied either in space or time. Our examples demonstrate that our method allows the space-time editing of very large populations and results in endless animations, while offering real-time, intuitive control and maintaining animation quality.
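
As an illustration of the underlying data structure (not the authors' implementation), a crowd patch can be thought of as a tile carrying a periodic animation plus boundary patterns that determine which neighbours it may connect to. The field names and the exact-match test below are assumptions of this sketch:

    from dataclasses import dataclass

    @dataclass
    class CrowdPatch:
        """Toy stand-in for a crowd patch: a tile carrying a periodic crowd
        animation plus per-side boundary patterns (agent crossing times)
        that decide which neighbours it can connect to."""
        period: float        # the contained animation loops every `period` s
        boundaries: dict     # side name -> tuple of boundary crossing times

    def connectible(a, b, side_a, side_b):
        """Two patches tile seamlessly along an edge when their periods match
        and the crossing patterns on the shared sides coincide (exact matching
        is a simplification; the paper deforms patches in space and time)."""
        return a.period == b.period and a.boundaries[side_a] == b.boundaries[side_b]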

2.
3.
Virtual humans are employed in many interactive applications using 3D virtual environments, including (serious) games. The motion of such virtual humans should look realistic (or ‘natural’) and allow interaction with the surroundings and other (virtual) humans. Current animation techniques differ in the trade-off they offer between motion naturalness and the control that can be exerted over the motion. We show mechanisms to parametrize, combine (on different body parts) and concatenate motions generated by different animation techniques. We discuss several aspects of motion naturalness and show how it can be evaluated. We conclude by showing the promise of combinations of different animation paradigms to enhance both naturalness and control.

4.
Facial animation is a time-consuming and cumbersome task that requires years of experience and/or a complex and expensive set-up. This becomes an issue, especially when animating the multitude of secondary characters required, e.g. in films or video-games. We address this problem with a novel technique that relies on motion graphs to represent a landmarked database. Separate graphs are created for different facial regions, allowing a reduced memory footprint compared to the original data. The common poses are identified using a Euclidean-based similarity metric and merged into the same node. This process traditionally requires a manually chosen threshold; we simplify it by optimizing for the desired graph compression. Motion synthesis occurs by traversing the graph using Dijkstra's algorithm, and coherent noise is introduced by swapping some path nodes with their neighbours. Expression labels, extracted from the database, provide the control mechanism for animation. We present a way of creating facial animation with reduced input that automatically controls timing and pose detail. Our technique easily fits within video-game and crowd animation contexts, allowing the characters to be more expressive with less effort. Furthermore, it provides a starting point for content creators aiming to bring more life into their characters.
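
To make the synthesis step concrete, here is a minimal sketch, under assumed data layouts, of the two operations the abstract names: merging poses under a Euclidean similarity threshold and traversing the resulting graph with Dijkstra's algorithm. The pose vectors, threshold and adjacency format are illustrative, not the paper's:

    import heapq
    import numpy as np

    def merge_similar_poses(poses, threshold):
        """Greedily merge landmark poses whose Euclidean distance falls below
        the threshold; returns representative poses and an index map."""
        reps, index_map = [], {}
        for i, p in enumerate(poses):
            for j, r in enumerate(reps):
                if np.linalg.norm(p - r) < threshold:
                    index_map[i] = j
                    break
            else:
                index_map[i] = len(reps)
                reps.append(p)
        return reps, index_map

    def dijkstra_path(adj, start, goal):
        """Dijkstra over an adjacency dict {node: [(cost, neighbour), ...]};
        assumes the goal is reachable from the start."""
        dist, prev = {start: 0.0}, {}
        heap = [(0.0, start)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == goal:
                break
            if d > dist.get(u, float("inf")):
                continue
            for cost, v in adj.get(u, []):
                nd = d + cost
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        path = [goal]
        while path[-1] != start:
            path.append(prev[path[-1]])
        return path[::-1]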

5.
This paper proposes a new algorithm to produce globally coordinated crowds in an environment with multiple paths and obstacles. Simple greedy crowd control methods easily lead to congestion at bottlenecks within scenes, as the characters do not cooperate with one another. In computer animation, this problem degrades crowd quality especially when ordered behaviour is needed, such as soldiers marching towards a castle. Similarly, in applications such as real-time strategy games, this often causes player frustration, as the crowd will not move as efficiently as it should. Likewise, building planning usually requires visualizing ordered evacuation to maximize flow. Planning such globally coordinated crowd movement is usually labour intensive. Here, we propose a simple solution that is easy to use and efficient in computation. First, we compute the harmonic field of the environment, taking into account the starting points, goals and obstacles. Based on the field, we represent the topology of the environment using a Reeb Graph, and calculate the maximum capacity for each path in the graph. With the harmonic field and the Reeb Graph, path planning of the crowd can be performed using a lightweight algorithm, such that agents' blocking of one another's paths is minimized. Compared to previous methods, our system can synthesize globally coordinated crowds with smooth and efficient movement. It also enables control of the crowd with high-level parameters such as the degree of cooperation and congestion. Finally, the method is scalable to thousands of characters with minimal impact on computation time. It is best applied in interactive crowd synthesis systems such as animation design and real-time strategy games.
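
A minimal sketch of the first step, computing a harmonic field on a grid by Jacobi relaxation of the Laplace equation; the boundary values (sources at 1, goals at 0, obstacles held at the source potential) and iteration count are assumptions of this sketch, and the Reeb Graph stage is not reproduced:

    import numpy as np

    def harmonic_field(free, start, goal, iters=5000):
        """Jacobi relaxation of the Laplace equation on a 2D grid.
        free: boolean array, True on walkable cells; start/goal: boolean
        masks.  Sources are held at 1, goals at 0, and obstacles at the
        source potential so that gradient descent steers around them."""
        phi = np.zeros(free.shape)
        phi[~free] = 1.0
        phi[start] = 1.0
        for _ in range(iters):
            p = np.pad(phi, 1, mode="edge")
            # every cell becomes the average of its four neighbours
            new = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
            new[~free] = 1.0                   # re-impose boundary values
            new[start], new[goal] = 1.0, 0.0
            phi = new
        return phi

Agents would then descend the negative gradient of the returned field toward the goal cells, with the Reeb Graph built on top of this scalar field.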

6.
Most recent crowd simulation algorithms equip agents with a synthetic vision component for steering. They offer promising perspectives through a more realistic simulation of the way humans navigate according to their perception of the surrounding environment. In this paper, we propose a new perception/motion loop for steering agents along collision-free trajectories that significantly improves the quality of vision-based crowd simulators. In contrast with solutions where agents avoid collisions in a purely reactive (binary) way, we suggest exploring the full range of possible adaptations and retaining the locally optimal one. To this end, we introduce a cost function, based on perceptual variables, which estimates an agent's situation considering both the risks of future collision and a desired destination. We then compute the partial derivatives of that function with respect to all possible motion adaptations. The agent then adapts its motion by following the gradient. This paper thus has two main contributions: the definition of a general-purpose control scheme for steering synthetic vision-based agents; and the proposal of cost functions for evaluating the perceived danger of the current situation. We demonstrate improvements in several cases.
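
To illustrate the perception/motion loop, here is a toy sketch of gradient-following over two motion adaptations (speed and heading), with a hypothetical cost combining goal deviation and a time-to-closest-approach collision term; the perceptual cost used in the paper differs:

    import numpy as np

    def steering_cost(v, theta, agent, goal, neighbours):
        """Illustrative cost: deviation from the goal direction and preferred
        speed, plus a risk term that grows as the predicted time to closest
        approach with each neighbour shrinks (angle wrap ignored for brevity)."""
        desired = np.arctan2(goal[1] - agent["pos"][1], goal[0] - agent["pos"][0])
        cost = (theta - desired) ** 2 + (v - agent["pref_speed"]) ** 2
        vel = v * np.array([np.cos(theta), np.sin(theta)])
        for nb in neighbours:
            rel_p = nb["pos"] - agent["pos"]
            rel_v = nb["vel"] - vel
            ttc = -rel_p.dot(rel_v) / max(rel_v.dot(rel_v), 1e-6)
            if ttc > 0:
                cost += np.exp(-ttc)           # imminent approaches cost more
        return cost

    def adapt_motion(v, theta, *args, step=0.05, eps=1e-4):
        """One step along the negative gradient of the cost with respect to
        the two motion adaptations (speed, heading), via central differences."""
        dv = (steering_cost(v + eps, theta, *args) -
              steering_cost(v - eps, theta, *args)) / (2 * eps)
        dth = (steering_cost(v, theta + eps, *args) -
               steering_cost(v, theta - eps, *args)) / (2 * eps)
        return v - step * dv, theta - step * dth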

7.
Representing motions as linear sums of principal components has become a widely accepted animation technique. While powerful, the simplest version of this approach is not particularly well suited to modeling the specific style of an individual whose motion had not yet been recorded when building the database: it would take an expert to adjust the PCA weights to obtain a motion style indistinguishable from that person's own. Consequently, when realism is required, the current practice is to perform a full motion capture session each time a new person must be considered. In this paper, we extend the PCA approach so that this requirement can be drastically reduced: for whole classes of cyclic and non-cyclic motions such as walking, running or jumping, it is enough to observe the newcomer moving only once at a particular speed or jumping a particular distance, using either an optical motion capture system or a simple pair of synchronized video cameras. This one observation is used to compute a set of principal component weights that best approximates the motion and to extrapolate, in real time, realistic animations of the same person walking or running at different speeds, and jumping a different distance.
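
The fitting step can be illustrated as a least-squares projection of the single observed motion onto the principal components; the array shapes and the idea of reusing the weights for extrapolation are the only assumptions here:

    import numpy as np

    def fit_pca_weights(basis, mean, observation):
        """Least-squares fit of principal-component weights to one observed
        motion vector.  basis: (k, d), rows are components; mean: (d,)
        database mean; observation: (d,) the newcomer's recorded motion."""
        w, *_ = np.linalg.lstsq(basis.T, observation - mean, rcond=None)
        return w

    def synthesize(basis, mean, w):
        """Reconstruct a motion from weights; moving w along a learned speed
        or distance parameter would extrapolate new motions of the same
        person in the spirit of the abstract."""
        return mean + w @ basis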

8.
Many data-driven animation techniques are capable of producing high quality motions of human characters. Few techniques, however, are capable of generating motions that are consistent with physically simulated environments. Physically simulated characters, in contrast, are automatically consistent with the environment, but their motions are often unnatural because they are difficult to control. We present a model-predictive controller that yields natural motions by guiding simulated humans toward real motion data. During simulation, the predictive component of the controller solves a quadratic program to compute the forces for a short window of time into the future. These forces are then applied by a low-gain proportional-derivative component, which makes minor adjustments until the next planning cycle. The controller is fast enough for interactive systems such as games and training simulations. It requires no precomputation and little manual tuning. The controller is resilient to mismatches between the character dynamics and the input motion, which allows it to track motion capture data even where the real dynamics are not known precisely. The same principled formulation can generate natural walks, runs, and jumps in a number of different physically simulated surroundings.
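
A toy 1-DOF rendition of the plan-then-track structure: every few frames a predictive window of targets is produced (in the paper, by solving a quadratic program), and a low-gain PD term tracks it in between. The gains, unit-mass dynamics and sinusoidal reference are all stand-ins:

    import numpy as np

    def reference(t):
        """Toy stand-in for the input motion data (position, velocity)."""
        return np.sin(0.05 * t), 0.05 * np.cos(0.05 * t)

    def run(frames=300, horizon=10, kp=4.0, kd=1.0, dt=0.1):
        """Plan-then-track loop on a unit-mass point: every `horizon` frames
        the predictive component produces a window of targets, and a low-gain
        PD term makes the minor adjustments in between."""
        q, dq = 0.0, 0.0
        for t in range(frames):
            if t % horizon == 0:               # planning cycle
                window = [reference(t + k) for k in range(horizon)]
            q_ref, dq_ref = window[t % horizon]
            tau = kp * (q_ref - q) + kd * (dq_ref - dq)   # PD component
            dq += tau * dt                     # integrate toy dynamics
            q += dq * dt
        return q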

9.
In this paper, we propose an efficient data-guided method based on Model Predictive Control (MPC) to synthesize a full-body motion. Guided by a reference motion, our method repeatedly plans the full-body motion to produce an optimal control policy for predictive control while sliding the fixed-span window along the time axis. Based on this policy, the method computes the joint torques of a character at every time step. Together with contact forces and external perturbations if there are any, the joint torques are used to update the state of the character. Without including the contact forces in the control vector, our formulation of the trajectory optimization problem enables automatic adjustment of contact timings and positions for balancing in response to environmental changes and external perturbations. For efficiency, we adopt derivative-based trajectory optimization on top of state-of-the-art smoothed contact dynamics. Use of derivatives enables our method to run much faster than the existing sampling-based methods. In order to further accelerate the performance of MPC, we propose efficient numerical differentiation of the system dynamics of a full-body character based on two schemes: data reuse and data interpolation. The former scheme exploits data dependency to reuse physical quantities of the system dynamics at nearby time points. The latter scheme allows the use of derivatives at sparse sample points to interpolate those at other time points in the window. We further accelerate evaluation of the system dynamics by exploiting the sparsity of physical quantities such as the Jacobian matrix resulting from the tree-like structure of the articulated body. Through experiments, we show that the proposed method can efficiently synthesize realistic motions such as locomotion, dancing, gymnastic motions, and martial arts at interactive rates using moderate computing resources.
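
The data-interpolation scheme can be sketched as follows: exact finite-difference Jacobians of the dynamics are evaluated only at sparse sample points in the window and linearly interpolated elsewhere. The dynamics function, stride and step sizes are assumptions of this sketch:

    import numpy as np

    def jacobian_fd(f, x, eps=1e-5):
        """Central finite-difference Jacobian of dynamics f at state x."""
        n = len(x)
        J = np.zeros((n, n))
        for i in range(n):
            d = np.zeros(n); d[i] = eps
            J[:, i] = (f(x + d) - f(x - d)) / (2 * eps)
        return J

    def interpolated_jacobians(f, states, stride=4):
        """Evaluate exact FD Jacobians only at sparse sample points along the
        window and linearly interpolate the rest."""
        n = len(states)
        keys = list(range(0, n, stride)) + ([n - 1] if (n - 1) % stride else [])
        J_key = {k: jacobian_fd(f, states[k]) for k in keys}
        out = []
        for t in range(n):
            k0 = max(k for k in keys if k <= t)
            k1 = min(k for k in keys if k >= t)
            a = 0.0 if k1 == k0 else (t - k0) / (k1 - k0)
            out.append((1 - a) * J_key[k0] + a * J_key[k1])
        return out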

10.
We propose a data-driven method to realize high-quality detailed hair animations in interactive applications like games. We devise an error metric to evaluate the similarity of hair animations, taking hair features into consideration as much as possible. We also propose a novel database construction algorithm based on a Secondary Motion Graph. Our algorithm improves the efficiency of such graphs by reducing redundant data, and achieves visually smooth connection of two animation clips while taking their future motions into consideration. The run-time costs of using our Secondary Motion Graph are relatively low, allowing real-time interactive operation. Copyright © 2016 John Wiley & Sons, Ltd.
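
For illustration, a hypothetical transition metric of the kind the abstract describes, scoring how smoothly one hair clip can follow another by combining position and velocity error over strand vertices; the weighting and one-frame velocity estimate are assumptions:

    import numpy as np

    def transition_cost(clip_a, clip_b, w_vel=0.5):
        """Scores how smoothly clip_b can follow clip_a: position error
        between the last frame of A and the first frame of B, plus weighted
        velocity error.  Clips: arrays of shape (frames, strand_verts, 3)."""
        pos_err = np.linalg.norm(clip_a[-1] - clip_b[0])
        vel_a = clip_a[-1] - clip_a[-2]        # one-frame velocity estimates
        vel_b = clip_b[1] - clip_b[0]
        return pos_err + w_vel * np.linalg.norm(vel_a - vel_b)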

11.
In recent years, crowd animation has been widely studied and applied in robotics, film, games and other fields, but traditional crowd animation techniques all involve complex motion planning or collision avoidance operations and are computationally inefficient. This paper proposes a crowd animation trajectory generation algorithm based on Markov Decision Processes (MDPs), which generates collision-free trajectories for the agents without performing collision detection. We also propose an improved value iteration algorithm for computing the state values of the Markov Decision Process. Experiments in grid environments show that its computational efficiency is significantly higher than that of value iteration using Euclidean distance as a heuristic and of Dijkstra's algorithm. Crowd animation simulation experiments in three-dimensional (3D) animation scenes show that the proposed trajectory generation algorithm moves the crowd toward its goal without collisions and with diverse motion.
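
Standard grid-world value iteration, the textbook baseline the paper improves upon, can be sketched as follows; the reward values and 4-connected moves are assumptions, and the paper's improved iteration scheme is not reproduced:

    import numpy as np

    def value_iteration(grid, goal, gamma=0.95, eps=1e-6):
        """Value iteration on a 4-connected grid.  grid: boolean array, True
        on free cells; goal: (row, col).  Each agent can then follow a
        greedy, collision-free path by always moving to the best neighbour."""
        V = np.zeros(grid.shape)
        moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
        while True:
            V_new = V.copy()
            for r in range(grid.shape[0]):
                for c in range(grid.shape[1]):
                    if not grid[r, c] or (r, c) == goal:
                        continue
                    best = -np.inf
                    for dr, dc in moves:
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < grid.shape[0] and
                                0 <= nc < grid.shape[1] and grid[nr, nc]):
                            reward = 1.0 if (nr, nc) == goal else -0.04
                            best = max(best, reward + gamma * V[nr, nc])
                    if best > -np.inf:          # skip fully blocked cells
                        V_new[r, c] = best
            if np.max(np.abs(V_new - V)) < eps:
                return V_new
            V = V_new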

12.
We present a real-time multi-view facial capture system facilitated by synthetic training imagery. Our method is able to achieve high-quality markerless facial performance capture in real time from multi-view helmet camera data, employing an actor-specific regressor. The regressor training is tailored to the specified actor's appearance, and we further condition it for the expected illumination conditions and the physical capture rig by generating the training data synthetically. In order to leverage the information present in live imagery, which is typically provided by multiple cameras, we propose a novel multi-view regression algorithm that uses multi-dimensional random ferns. We show that higher quality can be achieved by regressing on multiple video streams than with previous approaches that were designed to operate on only a single view. Furthermore, we evaluate possible camera placements and propose a novel camera configuration that allows cameras to be mounted outside the actor's field of view, which is very beneficial as the cameras are then less of a distraction for the actor and allow an unobstructed line of sight to the director and other actors. Our new real-time facial capture approach has immediate application in on-set virtual production, in particular with the ever-growing demand for motion-captured facial animation in visual effects and video games.
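
As a toy, single-output illustration of the fern idea (the paper's ferns are multi-dimensional and operate across views), a regression fern indexes a bin with a few binary feature comparisons and stores the mean training target per bin:

    import numpy as np

    class RegressionFern:
        """One regression fern: `depth` binary feature comparisons index a
        bin holding the mean training target of the samples landing in it."""
        def __init__(self, depth, n_features, rng):
            self.pairs = rng.integers(0, n_features, size=(depth, 2))
            self.bins = {}

        def _index(self, x):
            bits = (x[self.pairs[:, 0]] > x[self.pairs[:, 1]]).astype(int)
            return int("".join(map(str, bits)), 2)

        def fit(self, X, Y):
            sums, counts = {}, {}
            for x, y in zip(X, Y):
                i = self._index(x)
                sums[i] = sums.get(i, 0.0) + y
                counts[i] = counts.get(i, 0) + 1
            self.bins = {i: sums[i] / counts[i] for i in sums}

        def predict(self, x):
            return self.bins.get(self._index(x), 0.0)

An averaged ensemble of such ferns gives the usual regressor; extending the comparisons across several synchronized camera streams is, roughly, what the multi-view variant adds.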

13.
Creating realistic human movement is a time-consuming and labour-intensive task. The major difficulty is that the user has to edit individual joints while maintaining an overall realistic and collision-free posture. Previous research suggests the use of data-driven inverse kinematics, such that one can focus on the control of a few joints while the system automatically composes a natural posture. However, penetration of body parts, a common problem in kinematics synthesis, is difficult to avoid in complex movements. In this paper, we propose a new data-driven inverse kinematics framework that conserves the topology of the synthesized postures. Our system monitors and regulates topology changes using the Gauss Linking Integral (GLI), such that penetration can be efficiently prevented. As a result, complex motions with tight body movements, as well as those involving interaction with external objects, can be simulated with minimal manual intervention. Experimental results show that using our system, the user can create high-quality human motion in real time by controlling a few joints using a mouse or a multi-touch screen. The movement generated is both realistic and penetration-free. Our system is best applied to interactive motion design in computer animations and games.
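
The GLI itself admits a compact numeric approximation; below is the textbook midpoint-rule discretization of the Gauss integral between two polylines, not the paper's monitoring machinery:

    import numpy as np

    def gauss_linking_integral(curve_a, curve_b):
        """Midpoint-rule approximation of the Gauss Linking Integral
        GLI = (1/4*pi) * double integral of (da x db) . (a - b) / |a - b|^3
        between two polylines given as (n, 3) arrays of points."""
        a = np.asarray(curve_a); b = np.asarray(curve_b)
        da = np.diff(a, axis=0); db = np.diff(b, axis=0)
        ma = (a[:-1] + a[1:]) / 2.0            # segment midpoints
        mb = (b[:-1] + b[1:]) / 2.0
        total = 0.0
        for i in range(len(da)):
            r = ma[i] - mb                     # (m, 3) separation vectors
            cross = np.cross(da[i], db)        # (m, 3)
            total += np.sum(np.einsum("ij,ij->i", cross, r) /
                            np.linalg.norm(r, axis=1) ** 3)
        return total / (4.0 * np.pi)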

14.
Sensing gloves are often used as an input device for virtual 3D games. We propose a new method to control characters such as humans or animals in real time by using sensing gloves. Based on existing motion data of the body, the hand motion of the user is mapped to the locomotion of 3D characters in real time. The method was applied to control the locomotion of characters such as humans or dogs. Various motions such as trotting, running, hopping, and turning could be produced. As the computational cost of our method is low, the response of the system is short enough to satisfy the real-time requirements essential for games. Using our method, users can control their characters more intuitively and precisely than with previous control devices such as mice, keyboards or joysticks. Copyright © 2006 John Wiley & Sons, Ltd.

15.
This paper presents an efficient technique for synthesizing motions by stitching, or splicing, an upper-body motion retrieved from a motion space on top of an existing lower-body locomotion of another motion. Compared to the standard motion splicing problem, motion space splicing imposes new challenges as both the upper and lower body motions might not be known in advance. Our technique is the first motion (space) splicing technique that propagates temporal and spatial properties of the lower-body locomotion to the newly generated upper-body motion and vice versa. Whereas existing techniques only adapt the upper-body motion to fit the lower-body motion, our technique also adapts the lower-body locomotion based on the upper body task for a more coherent full-body motion. In this paper, we will show that our decoupled approach is able to generate high-fidelity full-body motion for interactive applications such as games.

16.
17.
Automatic synthesis of realistic gestures promises to transform the fields of animation, avatars and communicative agents. In off-line applications, novel tools can alter the role of an animator to that of a director, who provides only high-level input for the desired animation; a learned network then translates these instructions into an appropriate sequence of body poses. In interactive scenarios, systems for generating natural animations on the fly are key to achieving believable and relatable characters. In this paper we address some of the core issues towards these ends. By adapting a deep learning-based motion synthesis method called MoGlow, we propose a new generative model for generating state-of-the-art realistic speech-driven gesticulation. Owing to the probabilistic nature of the approach, our model can produce a battery of different, yet plausible, gestures given the same input speech signal. As with human gesticulation, this gives a rich natural variation of motion. We additionally demonstrate the ability to exert directorial control over the output style, such as gesture level, speed, symmetry and spatial extent. Such control can be leveraged to convey a desired character personality or mood. We achieve all this without any manual annotation of the data. User studies evaluating upper-body gesticulation confirm that the generated motions are natural and well matched to the input speech. Our method scores above all prior systems and baselines on these measures, and comes close to the ratings of the original recorded motions. We furthermore find that we can accurately control gesticulation styles without unnecessarily compromising perceived naturalness. Finally, we also demonstrate an application of the same method to full-body gesticulation, including the synthesis of stepping motion and stance.

18.
We propose a design framework to assist with user-generated content in facial animation, without requiring any animation experience or ground truth reference. Where conventional prototyping methods rely on handcrafting by experienced animators, our approach looks to encode the role of the animator as an Evolutionary Algorithm acting on animation controls, driven by visual feedback from a user. Presented as a simple interface, users sample control combinations and select favourable results to influence later sampling. Over multiple iterations of disregarding unfavourable control values, parameters converge towards the user's ideal. We demonstrate our framework through two non-trivial applications: creating highly nuanced expressions by evolving the control values of a face rig, and non-linear motion by evolving the control point positions of animation curves.
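
The animator-as-Evolutionary-Algorithm loop can be sketched as select-and-resample with shrinking noise; the population size, mutation scale and decay below are illustrative assumptions, and rate_fn stands in for the user's visual feedback:

    import random

    def evolve(n_controls, rate_fn, population=8, sigma=0.3, generations=10):
        """Skeleton of the select-and-resample loop: show the user a batch
        of control vectors, keep the favoured ones, and sample the next
        batch around the survivors with shrinking noise.  rate_fn returns
        the indices of the samples the user favours."""
        batch = [[random.random() for _ in range(n_controls)]
                 for _ in range(population)]
        parents = batch[:1]
        for _ in range(generations):
            chosen = rate_fn(batch)            # the user's picks this round
            if not chosen:
                break
            parents = [batch[i] for i in chosen]
            batch = [[min(1.0, max(0.0, x + random.gauss(0.0, sigma)))
                      for x in random.choice(parents)]
                     for _ in range(population)]
            sigma *= 0.85                      # noise shrinks as picks converge
        return parents[0]

In an application, rate_fn would render each control vector on the face rig and return whichever results the user clicks; disregarded samples simply never become parents.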

19.
We present a method to accelerate the visualization of large crowds of animated characters. Linear-blend skinning remains the dominant approach for animating a crowd, but its efficiency can be improved by utilizing the temporal and intra-crowd coherencies that are inherent within a populated scene. Our work adopts a caching system that enables a skinned key-pose to be re-used by multi-pass rendering, between multiple agents and across multiple frames. We investigate two different methods: an intermittent caching scheme (whereby each member of a crowd is animated using only its nearest key-pose) and an interpolative approach that enables key-pose blending to be supported. For the latter case, we show that finding the optimal set of key-poses to store is an NP-hard problem and present a greedy algorithm suitable for real-time applications. Both variants deliver a worthwhile performance improvement in comparison to using linear-blend skinning alone.
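
The greedy selection can be illustrated with the classic k-center heuristic: repeatedly add the frame worst-served by the current key-pose set. The paper's own greedy algorithm and error metric may differ; pose vectors here are assumed to be flattened skinned poses:

    import numpy as np

    def greedy_key_poses(frames, k):
        """Greedy k-center selection over (n, d) pose vectors: start from
        frame 0 and repeatedly add the frame farthest from every key-pose
        chosen so far, tracking each frame's distance to its nearest key."""
        frames = np.asarray(frames)
        chosen = [0]
        dist = np.linalg.norm(frames - frames[0], axis=1)
        while len(chosen) < k:
            far = int(np.argmax(dist))         # worst-served frame
            chosen.append(far)
            dist = np.minimum(dist, np.linalg.norm(frames - frames[far], axis=1))
        return chosen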

20.
Motion capture is a technique of digitally recording the movements of real entities, usually humans. It was originally developed as an analysis tool in biomechanics research, but has grown increasingly important as a source of motion data for computer animation, where it has been widely used for both cinema and video games. Hand motion capture and tracking in particular has received a lot of attention because of its critical role in the design of new Human Computer Interaction methods and gesture analysis; capturing human hand motion remains one of the main difficulties. This paper gives an overview of “HandPuppet3D”, ongoing research carried out in collaboration with an animation studio that employs computer vision techniques to develop a prototype desktop system and associated animation process allowing an animator to control 3D character animation through the use of hand gestures. The eventual goal of the project is to support existing practice by providing a softer, more intuitive user interface for the animator that improves the productivity of the animation workflow and the quality of the resulting animations. To help achieve this goal, the focus has been placed on developing a prototype camera-based desktop gesture capture system that captures hand gestures and interprets them in order to generate and control the animation of 3D character models. We discuss methods for motion tracking and capture in 3D animation, in particular hand motion tracking and capture. HandPuppet3D aims to enable gesture capture, interpretation of the captured gestures, and control of the target 3D animation software. This involves the development and testing of a motion analysis system built from recently developed algorithms. We review current software and research methods available in this area and describe our current work.
