Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
This paper presents the quantitative and qualitative findings from an experiment designed to evaluate a developing model of affective postures for full-body virtual characters in immersive virtual environments (IVEs). Forty-nine participants were each asked to explore a virtual environment by requesting instructions from two virtual characters, using a CAVE-like system. Participant responses and impressions of the virtual characters were evaluated through a wide variety of quantitative and qualitative methods. Combining a controlled experimental approach with varied data-collection methods offered several advantages, such as helping to explain the quantitative results. The quantitative results indicate that posture plays an important role in the communication of affect by virtual characters. The qualitative findings indicate that participants attribute a variety of psychological states to the behavioral cues displayed by virtual characters, and that they tend to interpret the social context portrayed by the virtual characters holistically: one aspect of the virtual scene colors the perception of the whole social context. We conclude by discussing the importance of designing holistically congruent virtual characters, especially in immersive settings.

2.
The ever-growing use of virtual environments calls for increasingly engaging elements to enhance user experiences. For sounding virtual environments in particular, one promising way to meet these realism and interactivity requirements is to use virtual characters that interact with sounding objects. In this paper, we focus on virtual characters playing virtual music instruments as a case study. We address, more specifically, the real-time motion control of virtual characters and their interaction with the sounding environment, aiming at engaging and compelling virtual music performances. Combining physics-based simulation with motion data is a recent approach for finely representing and modulating this motion-sound interaction while keeping the realism and expressivity of the original captured motion. We propose a physically-enabled environment in which a virtual percussionist interacts with a physics-based sound synthesis algorithm. We introduce and extensively evaluate the Hybrid Inverse Motion Control (HIMC), a motion-driven hybrid control scheme dedicated to the synthesis of upper-body percussion movements. We also propose a physics-based sound synthesis model with which the virtual character can interact. Finally, we present an architecture offering an effective way to manage the heterogeneous data (motion and sound parameters) and feedback (visual and sound) that influence the resulting virtual percussion performances.

3.
It is now possible to capture the 3D motion of the human body on consumer hardware and to puppet skeleton-based virtual characters in real time. However, many characters do not have humanoid skeletons. Characters such as spiders and caterpillars have no boned skeletons at all, and their shapes and motions are very different. In general, character control under arbitrary shape and motion transformations is unsolved: how might these motions be mapped? We control characters with a method that avoids the rigging-skinning pipeline; source and target characters need neither skeletons nor rigs. We use interactively-defined sparse pose correspondences to learn a mapping between arbitrary 3D point source sequences and mesh target sequences, and then puppet the target character in real time. We demonstrate the versatility of our method through results on diverse virtual characters with different input motion controllers. Our method provides a fast, flexible, and intuitive interface for arbitrary motion mapping, offering new ways to control characters for real-time animation.
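The abstract does not specify the learning model behind the correspondence-based mapping; purely as an illustration of one way sparse pose correspondences could drive a target mesh, here is a minimal scattered-data-interpolation sketch. All names, dimensions, and the choice of RBF interpolation are hypothetical, not the paper's method.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical data: 8 user-defined pose correspondences between a
# 5-point source controller (flattened 3-D points) and a target mesh.
src_poses = np.random.randn(8, 15)    # 8 example poses x (5 points * xyz)
tgt_meshes = np.random.randn(8, 300)  # matching target vertices, flattened

# Smooth scattered-data map learned from the sparse correspondences.
mapper = RBFInterpolator(src_poses, tgt_meshes)

# At runtime, puppet the target mesh from each live source pose.
new_mesh = mapper(np.random.randn(1, 15)).reshape(100, 3)
```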

4.
We present a practical method to synthesize plausible 3D facial expressions for a particular target subject. The ability to synthesize an entire facial rig from a single neutral expression has a wide range of applications in both computer graphics and computer vision, from the efficient and cost-effective creation of CG characters to scalable data generation for machine learning. Unlike previous methods based on multilinear models, the proposed approach is capable of extrapolating well outside the sample pool, which allows it to plausibly predict the identity of the target subject and to create artifact-free expression shapes while requiring only a small input dataset. We introduce global-local multilinear models that leverage the strengths of expression-specific and identity-specific local models combined with coarse motion estimates from a global model. Experimental results show that we achieve high-quality, plausible facial expression synthesis for an individual that outperforms existing methods both quantitatively and qualitatively.
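For readers unfamiliar with the multilinear baseline the paper improves on, a minimal numpy sketch of a standard (global) multilinear face model follows. The core tensor, dimensions, and weights here are placeholders, not the paper's learned model.

```python
import numpy as np

# Placeholder core tensor; in practice it is learned from registered
# face scans spanning many identities and expressions.
n_coords, n_id, n_exp = 3 * 5000, 50, 25
core = np.random.randn(n_coords, n_id, n_exp)

def synthesize(w_id, w_exp):
    """Contract the core tensor with identity and expression weights,
    returning a flattened (n_coords,) vertex array."""
    return np.einsum('cij,i,j->c', core, w_id, w_exp)

neutral_id = np.ones(n_id) / n_id                 # toy identity weights
smile = synthesize(neutral_id, np.eye(n_exp)[3])  # pick one expression mode
```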

5.
Animated virtual human characters are a common feature of interactive graphical applications such as computer and video games, online virtual worlds, and simulations. Due to the dynamic nature of such applications, character animation must be responsive and controllable in addition to looking as realistic and natural as possible. Although procedural and physics-based animation provide a great deal of control over motion, they still look too unnatural to be usable in all but a few specific scenarios, which is why interactive applications today still rely mainly on recorded and hand-crafted motion clips. The challenge faced by animation system designers is to dynamically synthesize new, controllable motion, either by concatenating short motion segments into sequences of different actions or by parametrically blending clips that correspond to different variants of the same logical action. In this article, we provide an overview of research in example-based motion synthesis for interactive applications. We present methods for the automated creation of supporting data structures for motion synthesis and describe how they can be employed at run time to generate motion that accurately accomplishes tasks specified by an AI or a human user.

6.
The seamless integration of virtual characters into dynamic scenes captured on video is a challenging problem. To achieve consistent composite results, the virtual and real characters must share the same geometric constraints, and their interactions must follow common sense. One essential question is how to detect the motion of real objects, such as real characters moving in the video, and how to steer virtual characters accordingly to avoid unrealistic collisions. We propose an online solution. First, by analyzing the input video, the motion states of the real pedestrians are recovered into a common 3D world coordinate system, and a simplified accuracy measure is defined to represent the confidence of each motion estimate. Then, under the constraints imposed by the real dynamic objects, the motions of the virtual characters are accommodated by a uniform steering model. The final step merges the virtual objects back into the real video scene, taking into account visibility and occlusion constraints between real foreground objects and virtual ones. Several examples demonstrate the efficiency of the proposed algorithm. Copyright © 2011 John Wiley & Sons, Ltd.
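The abstract leaves the steering model unspecified; purely as an illustration of goal-seeking combined with pedestrian avoidance, here is one toy steering step. The weights, names, and force formulation are ours, not the paper's uniform model.

```python
import numpy as np

def steer(pos, goal, ped_pos, dt=1 / 30, max_speed=1.4, avoid_gain=2.0):
    """One toy steering update: head toward the goal while being
    repelled by a tracked real pedestrian recovered from the video."""
    to_goal = goal - pos
    desired = max_speed * to_goal / (np.linalg.norm(to_goal) + 1e-8)
    away = pos - ped_pos
    repulse = away / (np.linalg.norm(away) ** 2 + 1e-8)  # decays with distance
    vel = desired + avoid_gain * repulse
    return pos + vel * dt

pos = steer(np.zeros(2), np.array([5.0, 0.0]), np.array([2.0, 0.3]))
```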

7.
Automatic synthesis of realistic gestures promises to transform the fields of animation, avatars, and communicative agents. In off-line applications, novel tools can shift the role of an animator to that of a director who provides only high-level input for the desired animation; a learned network then translates these instructions into an appropriate sequence of body poses. In interactive scenarios, systems for generating natural animations on the fly are key to achieving believable and relatable characters. In this paper we address some of the core issues towards these ends. By adapting MoGlow, a deep-learning-based motion synthesis method, we propose a new generative model for state-of-the-art, realistic, speech-driven gesticulation. Owing to the probabilistic nature of the approach, our model can produce a battery of different yet plausible gestures given the same input speech signal, yielding the rich natural variation of motion seen in humans. We additionally demonstrate the ability to exert directorial control over the output style, such as gesture level, speed, symmetry, and spatial extent; such control can be leveraged to convey a desired character personality or mood. We achieve all this without any manual annotation of the data. User studies evaluating upper-body gesticulation confirm that the generated motions are natural and match the input speech well. Our method scores above all prior systems and baselines on these measures and comes close to the ratings of the original recorded motions. We furthermore find that we can accurately control gesticulation styles without unnecessarily compromising perceived naturalness. Finally, we demonstrate an application of the same method to full-body gesticulation, including the synthesis of stepping motion and stance.

8.
The human hand is a complex biological system able to perform numerous tasks with impressive accuracy and dexterity. Gestures, moreover, play an important role in our daily interactions, and humans are particularly skilled at perceiving and interpreting detailed signals in communication. Creating believable hand motions for virtual characters is therefore an important and challenging task. Many new methods have been proposed in the computer graphics community in recent years, and significant progress has been made towards creating convincing, detailed hand and finger motions. This state-of-the-art report reviews research in hand and finger modeling and animation. Starting with the biological structure of the hand and its implications for how the hand moves, we discuss current methods for motion capturing hands, data-driven and physics-based algorithms for synthesizing their motions, and techniques for making the appearance of the hand model surface more realistic. We then focus on areas in which detailed hand motions are crucial, such as manipulation and communication. Our report concludes by describing emerging trends and applications for virtual hand animation.

9.
Recently, several important block ciphers have been considered broken by brute-force-like cryptanalysis, with a time complexity faster than exhaustive key search: the attack goes over the entire key space but performs less than a full encryption for each possible key. Motivated by this observation, we describe a meet-in-the-middle attack that can always be successfully mounted against any practical block cipher, with success probability one. The data complexity of this attack is the smallest allowed by the unicity distance. The time complexity can be written as 2^{k(1-ε)}, where ε > 0 for all practical block ciphers. Previously, the commonly accepted security bound was the length k of the given master key. Our result points out that this k-bit security is always overestimated and can never be reached, because of an inevitable loss of key bits. No amount of clever design can prevent it, but increasing the number of rounds can reduce this key loss as much as possible. We give more insight into the upper bound of effective key bits in block ciphers and show a more accurate bound. We also give a suggestion on the relationship between key size and block size: when the number of rounds is fixed, it is better to take a key size equal to the block size. Finally, the effective key bits of many well-known block ciphers are calculated and analyzed, confirming that their security margins are lower than previously thought. The results in this article motivate us to reconsider the real complexity against which a valid attack should be compared.
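To make the bound concrete, a tiny numeric illustration follows; the value of ε here is arbitrary, since the paper derives it per cipher.

```python
# Effective key strength under the bound 2^{k(1-eps)}; eps is hypothetical.
k = 128                      # nominal key length in bits
eps = 0.02                   # per-cipher key-loss factor, eps > 0
effective_bits = k * (1 - eps)
print(f"nominal security: 2^{k}, attack cost: 2^{effective_bits:.1f}")
# -> nominal security: 2^128, attack cost: 2^125.4
```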

10.
Existing synthesis methods for closely interacting virtual characters rely on user-specified constraints such as reaching positions and the distance between body parts. In this paper, we present a novel method for synthesizing new interacting motion by composing two existing interacting motion samples, without the need to specify constraints manually. Our method automatically detects the type of interactions contained in the inputs and determines a suitable timing for the composition by analyzing the spacetime relationships of the input characters. To preserve the features of the inputs in the synthesized interaction, the two inputs are aligned and normalized according to the relative distance and orientation of their characters. Using a linear optimization method, the output is the optimal solution that preserves both the close interaction of the two characters and the local details of each character's behavior. The output animations demonstrate that our method can create interactions of new styles that combine the characteristics of the original inputs.

11.
This paper introduces an approach to performance animation that employs a small number of inertial measurement sensors to create an easy-to-use system for interactive control of a full-body human character. Our key idea is to construct a global model from a prerecorded motion database and utilize it to construct full-body human motion in a maximum a posteriori (MAP) framework. We demonstrate the effectiveness of our system by controlling a variety of human actions, such as boxing, golf swinging, and table tennis, in real time. One unique property of our system is its ability to learn priors from a large and heterogeneous motion capture database and use them to generate a wide range of natural poses, a capacity that has not been demonstrated in previous data-driven character posing systems.
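The generic MAP formulation the abstract refers to can be written as follows (notation ours, not the paper's):

```latex
% q: full-body pose, c: sparse inertial sensor readings
\hat{q} \;=\; \arg\max_{q}\; p(q \mid c)
        \;=\; \arg\max_{q}\; \underbrace{p(c \mid q)}_{\text{sensor likelihood}}
        \;\underbrace{p(q)}_{\text{prior learned from the motion database}}
```

The prior p(q) is what the database provides: poses that are common in the captured data score highly, so the reconstruction stays natural even when the sparse sensor readings underdetermine the pose.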

12.
Applying motion-capture data to multi-person interactions between virtual characters is challenging because one needs to preserve the interaction semantics while also satisfying the general requirements of motion retargeting, such as preventing penetration and preserving naturalness. An efficient way to represent interaction semantics is to define the spatial relationships between the characters' body parts. However, existing methods consider only the character skeleton and are therefore unsuitable for capturing skin-level spatial relationships. This paper proposes a novel method for retargeting interaction motions with respect to character skins. Specifically, we introduce the aura mesh, a volumetric mesh that surrounds a character's skin. The spatial relationships between two characters are computed from the overlap of the skin mesh of one character with the aura mesh of the other, and the interaction motion retargeting preserves these spatial relationships as much as possible while satisfying other constraints. We show the effectiveness of our method through a number of experiments.

13.
Creating dynamic virtual environments in which humans interact with objects is a fundamental problem in computer graphics. While it is well accepted that agent interactions play an essential role in synthesizing such scenes, most extant techniques focus exclusively on static scenes, leaving out the dynamic component. In this paper, we present a generative model to synthesize plausible multi-step dynamic human-object interactions. Generating multi-step interactions is challenging because the space of such interactions is exponential in the number of objects, activities, and time steps. We propose to handle this combinatorial complexity by learning a lower-dimensional space of plausible human-object interactions. We use action plots to represent interactions as a sequence of discrete actions along with the participating objects and their states. To build action plots, we present an automatic method that applies state-of-the-art computer vision techniques to RGB videos in order to detect individual objects and their states, extract the involved hands, and recognize the actions performed. The action plots are built from observations of everyday activities on video and are used to train a generative model based on a recurrent neural network (RNN). The network learns the causal dependencies and constraints between individual actions and can be used to generate novel and diverse multi-step human-object interactions. Our representation and generative model enable new capabilities in a variety of applications such as interaction prediction, animation synthesis, and motion planning for a real robotic agent.
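As a rough sketch of the kind of sequence model described, here an action plot is a sequence of discrete (action, object) tokens modeled by a small recurrent network; a GRU stands in for the paper's unspecified RNN variant, and the vocabulary and layer sizes are hypothetical.

```python
import torch
import torch.nn as nn

n_tokens, emb, hidden = 200, 64, 128  # illustrative vocabulary and sizes

class ActionPlotRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(n_tokens, emb)
        self.rnn = nn.GRU(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_tokens)   # next-token distribution

    def forward(self, tokens):                    # tokens: (B, T) int64
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h)                       # logits over next actions

logits = ActionPlotRNN()(torch.randint(0, n_tokens, (4, 12)))
```

Sampling from the next-token distribution step by step would generate a novel action plot, which is the sense in which the model is generative.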

14.
Augmented reality is a growing field with many diverse applications, ranging from TV and film production to industrial maintenance, medicine, education, entertainment, and games. The central idea is to add virtual objects to a real scene, either by displaying them in a see-through head-mounted display or by superimposing them on an image of the scene captured by a camera. Depending on the application, the added objects might be virtual characters in a TV or film production, instructions for repairing a car engine, or a reconstruction of an archaeological site. For the effect to be believable, the virtual objects must appear rigidly fixed to the real world, which requires accurate real-time measurement of the position of the camera or the user's head. Present technology cannot achieve this without resorting to systems that require significant infrastructure in the operating environment, severely restricting the range of possible applications.

15.
We propose a learning-based approach for full-body pose reconstruction from extremely sparse upper-body tracking data obtained from a virtual reality (VR) device. We leverage a conditional variational autoencoder with gated recurrent units to synthesize plausible and temporally coherent motions from 4-point tracking (head, hands, and waist positions and orientations). To avoid synthesizing implausible poses, we propose a novel sample selection and interpolation strategy along with an anomaly detection algorithm. Specifically, we monitor the quality of our generated poses using the anomaly detection algorithm and smoothly transition to better samples when the quality falls below a statistically defined threshold. Moreover, we demonstrate that our sample selection and interpolation method can be used for other applications, such as target hitting and collision avoidance, where the generated motions must adhere to the constraints of the virtual environment. Our system is lightweight, operates in real time, and produces temporally coherent and realistic motions.
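A minimal sketch of the decoder side of such a model follows, assuming PyTorch; all layer sizes, names, and the specific wiring are hypothetical, and the paper's full system additionally includes the encoder, sample selection, and anomaly detection.

```python
import torch
import torch.nn as nn

class SparseToFullPose(nn.Module):
    """Toy conditional decoder: a GRU maps 4-point tracking features,
    conditioned on a latent code z, to full-body pose parameters."""

    def __init__(self, track_dim=36, latent_dim=32, pose_dim=132, hidden=256):
        super().__init__()
        self.gru = nn.GRU(track_dim + latent_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, pose_dim)

    def forward(self, tracking, z):
        # tracking: (B, T, track_dim); z: (B, latent_dim), one code per clip
        z_seq = z.unsqueeze(1).expand(-1, tracking.shape[1], -1)
        h, _ = self.gru(torch.cat([tracking, z_seq], dim=-1))
        return self.out(h)  # (B, T, pose_dim) predicted poses

poses = SparseToFullPose()(torch.randn(2, 60, 36), torch.randn(2, 32))
```

Sampling several z codes yields several candidate motions for the same tracking input, which is where a selection strategy like the paper's becomes useful.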

16.
Sensing gloves are often used as an input device for virtual 3D games. We propose a new method to control characters such as humans or animals in real time using sensing gloves: based on existing body motion data, the hand motion of the user is mapped to the locomotion of 3D characters in real time. The method was applied to control the locomotion of characters such as humans and dogs, producing various motions such as trotting, running, hopping, and turning. As the computational cost of our method is low, the response of the system is fast enough to satisfy the real-time requirements essential for games. Using our method, users can control their characters more intuitively and precisely than with previous control devices such as mice, keyboards, or joysticks. Copyright © 2006 John Wiley & Sons, Ltd.

17.
18.
Realism is an important property of virtual reality and a difficult research problem. Physics-based graphical modeling is the fundamental approach to achieving realistic object motion in virtual environments: it builds dynamic models of animated characters and computes their motion through simulation. The approach has important applications in video production, virtual simulation, virtual walkthroughs, and related fields. Compared with other ways of representing motion, it can realize a wider variety of more realistic object motions in virtual environments. Taking non-deforming objects (rigid bodies) and deforming objects (soft bodies) as examples, this paper introduces the physics-based modeling ideas used to achieve realistic object motion in virtual reality systems, and analyzes their characteristics as well as their respective problems and limitations. Finally, it points out several techniques that require further research and breakthroughs before the object-modeling problem can be solved satisfactorily.
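As a minimal illustration of the rigid-body case the survey describes, here is a semi-implicit Euler integration step under gravity with a crude ground contact; all constants and the contact model are illustrative only.

```python
import numpy as np

mass, dt, g = 1.0, 1 / 60, np.array([0.0, -9.81, 0.0])
pos, vel = np.array([0.0, 5.0, 0.0]), np.zeros(3)

for step in range(120):                 # two seconds of simulated motion
    force = mass * g                    # gravity only in this toy example
    vel += (force / mass) * dt          # update velocity first...
    pos += vel * dt                     # ...then position (semi-implicit Euler)
    if pos[1] < 0.0:                    # crude ground contact
        pos[1] = 0.0
        vel[1] = -0.5 * vel[1]          # restitution coefficient 0.5
```

Deformable (soft) bodies replace the single rigid state with many coupled particles or finite elements, which is the main source of the added cost and complexity the paper analyzes.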

19.
Controllability and realism of virtual human motion are important goals in virtual reality applications. To achieve flexible control of virtual humans and synthesize realistic motion sequences, a motion-graph method based on parameterized motion synthesis is proposed. The nodes of the motion graph store control parameters with clear meanings; by varying these parameters, different motion segments can be synthesized, enabling flexible control of the virtual human. An improved motion-blending method is also proposed: the edges of the motion graph blend different motion segments, effectively avoiding foot sliding and root-joint orientation jitter. Experiments were designed according to different application requirements for interactive control and path trajectories. The results show that the method not only achieves high control accuracy but also synthesizes realistic and natural motion sequences.
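An illustrative data structure only (names ours, not the paper's): a motion-graph node storing interpretable control parameters, with edge selection driven by the requested parameter values.

```python
from dataclasses import dataclass, field

@dataclass
class Edge:
    dst: "MotionNode"
    blend_frames: int = 10        # frames blended across the transition

@dataclass
class MotionNode:
    clip_id: str
    params: dict                  # e.g. {"speed": 1.2, "turn_angle": 30.0}
    edges: list = field(default_factory=list)

def pick_next(node: MotionNode, target: dict) -> Edge:
    """Follow the edge whose destination best matches the requested
    control parameters (least-squares match over the given keys)."""
    return min(node.edges,
               key=lambda e: sum((e.dst.params.get(k, 0.0) - v) ** 2
                                 for k, v in target.items()))
```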

20.
Continuous constrained optimization is a powerful tool for synthesizing short, novel human motion segments. Graph-based motion synthesis methods such as motion graphs and move trees are popular ways to synthesize long motions by playing back a sequence of existing motion segments. However, motion graphs only support transitions between similar frames, and move trees only support transitions between the end of one motion segment and the start of another. In this paper, we introduce an optimization-based graph that combines continuous constrained optimization with graph-based motion synthesis. The constrained optimization is used to create a vast number of complex, realistic-looking transitions in the graph. The graph can then be used to synthesize long motions with non-trivial transitions that, for example, allow the character to switch its behavior abruptly while retaining motion naturalness. We also propose to build this graph semi-automatically by asking a user to classify generated transitions as acceptable or not, while explicitly minimizing the number of required classifications. This process guarantees the quality consistency of the optimization-based graph at the cost of limited user involvement.
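A toy version of the optimization at the heart of such methods: solve for one scalar degree of freedom over a short transition that minimizes acceleration while meeting both clip endpoints. The paper's formulation is far richer (full poses, many constraints); this only illustrates the setup.

```python
import numpy as np
from scipy.optimize import minimize

start, end, n = 0.0, 1.0, 12      # boundary poses and transition length

def accel_cost(x):
    traj = np.concatenate([[start], x, [end]])    # pin both endpoints
    return np.sum(np.diff(traj, 2) ** 2)          # sum of squared accelerations

transition = minimize(accel_cost, np.linspace(start, end, n)).x
```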
