Similar Documents
A total of 20 similar documents were retrieved.
1.
Behavioral and cognitive modeling for virtual characters is a promising field. It significantly reduces the workload on the animator, allowing characters to act autonomously in a believable fashion. It also makes interactivity between humans and virtual characters more practical than ever before. In this paper we present a novel technique where an artificial neural network is used to approximate a cognitive model. This allows us to execute the model much more quickly, making cognitively empowered characters more practical for interactive applications. Through this approach, we can animate several thousand intelligent characters in real time on a PC. We also present a novel technique by which a virtual character, instead of using an explicit model supplied by the user, can automatically learn an unknown behavioral/cognitive model by itself through reinforcement learning. The ability to learn without an explicit model appears promising for helping behavioral and cognitive modeling become more broadly accepted and used in the computer graphics community, as it can further reduce the workload on the animator. Further, it provides solutions for problems that cannot easily be modeled explicitly. Copyright © 2004 John Wiley & Sons, Ltd.
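To make the approximation idea above concrete, here is a minimal, hypothetical sketch (not the authors' implementation; the state features, network size and the stand-in "cognitive model" are all assumptions) of sampling an expensive decision model offline, fitting a small neural network to it, and then driving a whole crowd with one cheap batched forward pass at run time:

```python
# Hypothetical sketch: approximate an expensive cognitive model with a small
# neural network so thousands of characters can be updated in real time.
import numpy as np
from sklearn.neural_network import MLPClassifier

def cognitive_model(state):
    """Stand-in for the expensive cognitive model: picks a discrete action
    (0=wander, 1=flee, 2=seek) from a perceived state vector."""
    danger, hunger = state
    if danger > 0.7:
        return 1
    return 2 if hunger > 0.5 else 0

# Offline: sample the slow model to build a training set.
rng = np.random.default_rng(0)
states = rng.random((5000, 2))
actions = np.array([cognitive_model(s) for s in states])

approx = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=500, random_state=0)
approx.fit(states, actions)

# Online: one cheap batched forward pass decides actions for a whole crowd.
crowd_states = rng.random((2000, 2))
crowd_actions = approx.predict(crowd_states)
print(crowd_actions[:10])
```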

2.
Virtual humans: thirty years of research, what next?
In this paper, we present research results and future challenges in creating realistic and believable Virtual Humans. Real-time realistic representation is essential to these modeling goals, but we also need interactive and perceptive Virtual Humans to populate the Virtual Worlds. Three levels of modeling should be considered to create believable Virtual Humans: 1) realistic appearance modeling, 2) realistic, smooth and flexible motion modeling, and 3) realistic high-level behavior modeling. We first illustrate the issues of creating virtual humans with better skeletons and realistic deformable bodies. To reach a believable level of behavior, the challenges lie in generating flexible motion on the fly and complex behaviours of Virtual Humans inside their environments, based on a realistic perception of those environments. Interactivity and group behaviours are also important for creating believable Virtual Humans; here the challenges are to create believable relationships between real and virtual humans based on emotion and personality, and to simulate realistic and believable behaviors of groups and crowds. Finally, issues in generating realistically clothed and haired virtual people are presented.

3.
Driving simulation: challenges for VR technology
Virtual driving environments represent a challenging test for virtual reality technology. We present an overview of our work on the problems of scenario and scene modeling for virtual environments (VEs) in the context of the Iowa Driving Simulator (IDS). The requirements of driving simulation (a deterministic real-time software system that integrates components for user interaction, simulation, and scenario and scene modeling) make it a valuable proving ground for VE technologies. The goal of our research is not simply to improve driving simulation, but to develop technology that benefits a wide variety of VE applications. For example, our work on authoring high-fidelity VE databases and on directable scenarios populated with believable agents also targets applications involving interaction with simulated, walking humans and training in the operation of complex machinery. This work has benefited greatly from the experience of developing components for a full-scale operational VE system like IDS, and we believe that many other proposed VE technologies would similarly benefit from such real-world testing.

4.
During scenario-based training, the scenario is dynamically adapted in real time to control the storyline and increase its effectiveness. A team of experienced staff members is required to manage and perform the adaptations; they manipulate the storyline and the level of support while role-playing important characters in the scenario. The costs of training could be reduced if the adaptation were automated by using intelligent agent technology to control the characters within a virtual training environment (a serious game). However, such a system also needs a didactical component to monitor the trainee and determine necessary adaptations to the scenario. This paper investigates the automation of didactical knowledge and the corresponding dynamic adaptation of the scenario. A so-called director decides upon the necessary changes and distributes them in real time to the characters. First, the nature and goals of the adaptations are analyzed. Subsequently, the paper introduces a study into the applicability of directable scenarios. Thereafter, an experiment is introduced that investigates the effects of directorial interventions on the instructive quality of the scenario. Qualitative results indicated that trainees experienced scenario-based training as instructive and motivating. Moreover, quantitative results showed that instructors rated directed scenarios as significantly better attuned to the trainee's needs than non-directed scenarios. Our future research will focus on the design of an architecture for automatically directed scenario-based training.
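As a rough illustration of the director idea described in this abstract, the following sketch (the class names, the didactic rule and the performance measure are hypothetical, not taken from the paper) shows a director that monitors one trainee metric and distributes a corresponding directive to the agent-controlled characters each step:

```python
# Hypothetical sketch of a scenario "director": it monitors a trainee measure,
# decides whether the scenario should be made easier or harder, and distributes
# the resulting instruction to the agent-controlled characters in real time.
from dataclasses import dataclass

@dataclass
class Character:
    name: str
    directive: str = "follow_script"

    def receive(self, directive: str) -> None:
        self.directive = directive  # the agent adapts its role-play accordingly

class Director:
    def __init__(self, characters):
        self.characters = characters

    def step(self, trainee_performance: float) -> None:
        # Didactic rule (illustrative only): keep the trainee in a productive range.
        if trainee_performance < 0.3:
            directive = "offer_support"
        elif trainee_performance > 0.8:
            directive = "raise_pressure"
        else:
            directive = "follow_script"
        for c in self.characters:
            c.receive(directive)

cast = [Character("bystander"), Character("victim"), Character("dispatcher")]
director = Director(cast)
director.step(trainee_performance=0.25)
print([(c.name, c.directive) for c in cast])
```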

5.
We present a data‐driven method for the real‐time synthesis of believable steering behaviours for virtual crowds. The proposed method interlinks the input examples into a structure we call the perception‐action graph (PAG) which can be used at run‐time to efficiently synthesize believable virtual crowds. A virtual character's state is encoded using a temporal representation, the Temporal Perception Pattern (TPP). The graph nodes store groups of similar TPPs whereas edges connecting the nodes store actions (trajectories) that were partially responsible for the transformation between the TPPs. The proposed method is being tested on various scenarios using different input data and compared against a nearest neighbours approach which is commonly employed in other data‐driven crowd simulation systems. The results show up to an order of magnitude speed‐up with similar or better simulation quality.
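A minimal sketch of how a perception-action-graph lookup could work, assuming a simple Euclidean distance over TPP vectors and a trivial edge-selection rule (the paper's actual encoding and edge selection are more involved):

```python
# Hypothetical sketch of a perception-action-graph lookup: nodes store a
# representative temporal perception pattern (TPP), edges store the motion
# clip (action) that led from one pattern to another. The exact encoding of
# TPPs and clips is assumed here, not taken from the paper.
import numpy as np

class PAGNode:
    def __init__(self, tpp):
        self.tpp = np.asarray(tpp, dtype=float)  # representative perception pattern
        self.edges = []                          # list of (action_clip, next_node)

def synthesize_step(nodes, query_tpp):
    """Find the node whose TPP is closest to the character's current
    perception and return one of its stored actions (trajectory clips)."""
    query = np.asarray(query_tpp, dtype=float)
    best = min(nodes, key=lambda n: np.linalg.norm(n.tpp - query))
    if not best.edges:
        return None, best
    action_clip, next_node = best.edges[0]  # a real system would pick the best edge
    return action_clip, next_node

# Tiny example graph with two nodes and one transition.
a, b = PAGNode([0.0, 1.0, 0.2]), PAGNode([0.5, 0.1, 0.9])
a.edges.append(("walk_forward_clip", b))
clip, nxt = synthesize_step([a, b], query_tpp=[0.1, 0.9, 0.3])
print(clip)
```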

6.
Augmented reality is a growing field, with many diverse applications ranging from TV and film production to industrial maintenance, medicine, education, entertainment and games. The central idea is to add virtual objects into a real scene, either by displaying them in a see-through head-mounted display, or by superimposing them on an image of the scene captured by a camera. Depending on the application, the added objects might be virtual characters in a TV or film production, instructions for repairing a car engine, or a reconstruction of an archaeological site. For the effect to be believable, the virtual objects must appear rigidly fixed to the real world, which requires accurate real-time measurement of the position of the camera or the user’s head. Present technology cannot achieve this without resorting to systems that require a significant infrastructure in the operating environment, severely restricting the range of possible applications.

7.
The seamless integration of virtual characters into dynamic scenes captured by video is a challenging problem. In order to achieve consistent composite results, both the virtual and real characters must share the same geometrical constraints and their interactions must follow common sense. One essential question is how to detect the motion of real objects (such as real characters moving in the video) and how to steer virtual characters accordingly to avoid unrealistic collisions. We propose an online solution. First, by analyzing the input video, the motion states of the real pedestrians are recovered into a common 3D world coordinate system. Meanwhile, a simplified accuracy measure is defined to represent the confidence of the motion estimate. Then, under the constraints imposed by the real dynamic objects, the motion of the virtual characters is accommodated by a uniform steering model. The final step is to merge the virtual objects back into the real video scene, taking into account visibility and occlusion constraints between real foreground objects and virtual ones. Several examples demonstrate the efficiency of the proposed algorithm. Copyright © 2011 John Wiley & Sons, Ltd.
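As an illustration of the confidence-weighted steering constraint, here is a hedged sketch (the gains, radii and repulsion rule are illustrative assumptions, not the paper's uniform steering model) in which a virtual character is steered toward its goal while being pushed away from recovered pedestrian positions in proportion to the confidence of each motion estimate:

```python
# Hypothetical sketch of the steering idea: a virtual character is pushed away
# from recovered real-pedestrian positions, with each repulsion scaled by the
# confidence of that pedestrian's estimated motion state. Gains and radii are
# illustrative, not values from the paper.
import numpy as np

def steer(char_pos, char_goal, pedestrians, dt=0.033, speed=1.4,
          avoid_radius=1.5, repulsion_gain=2.0):
    """pedestrians: list of (position, confidence) in world coordinates."""
    char_pos = np.asarray(char_pos, float)
    desired = np.asarray(char_goal, float) - char_pos
    desired = speed * desired / (np.linalg.norm(desired) + 1e-9)

    repulsion = np.zeros(2)
    for ped_pos, confidence in pedestrians:
        offset = char_pos - np.asarray(ped_pos, float)
        dist = np.linalg.norm(offset)
        if dist < avoid_radius:
            # Stronger push when closer and when the estimate is trusted more.
            repulsion += confidence * repulsion_gain * offset / (dist**2 + 1e-6)

    velocity = desired + repulsion
    return char_pos + dt * velocity

new_pos = steer(char_pos=[0, 0], char_goal=[5, 0],
                pedestrians=[((1.0, 0.2), 0.9), ((3.0, -2.0), 0.4)])
print(new_pos)
```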

8.
Based on open-source software including the 3D computer haptic-visual interaction library (CHAI3D) and the Open Graphics Library (OpenGL), a simulation system for maxillary repositioning surgery was designed. A virtual scene is built from CT images of real clinical cases, and a Geomagic force-feedback device is used to manipulate the virtual models in three dimensions and output haptic feedback. Building on the original single-point collision algorithm, a multi-point collision algorithm using multiple intermediate proxies is proposed, which avoids the unrealistic simulation artifact of the virtual surgical tool's handle penetrating virtual organs. The force-feedback device is used to select, translate and rotate the skull model, simulating the movement and placement of the skull during surgery. The system can be used both to train medical students and for preoperative planning of complex operations.
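The multi-proxy idea can be sketched as follows; this is a hypothetical illustration using a toy spherical "bone" and a simple penalty spring per sampled tool point, not the CHAI3D implementation or its API:

```python
# Hypothetical sketch of a multi-proxy contact force: several points sampled
# along the virtual tool are each paired with a proxy constrained to stay
# outside the bone surface; the net haptic force is the sum of the spring
# forces pulling each tool point back to its proxy. The surface test and
# stiffness are illustrative only.
import numpy as np

STIFFNESS = 400.0  # N/m, illustrative

def project_to_surface(p, center=np.zeros(3), radius=0.05):
    """Toy 'bone': a sphere. Return the closest point on/outside its surface."""
    d = p - center
    dist = np.linalg.norm(d)
    if dist >= radius:
        return p  # point is already outside: proxy follows the tool freely
    return center + radius * d / (dist + 1e-9)

def haptic_force(tool_points):
    force = np.zeros(3)
    for p in tool_points:
        p = np.asarray(p, float)
        proxy = project_to_surface(p)
        force += STIFFNESS * (proxy - p)  # spring from tool point to its proxy
    return force

# Points sampled along the tool handle, one of them penetrating the sphere.
print(haptic_force([[0.0, 0.0, 0.02], [0.0, 0.0, 0.08]]))
```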

9.
The common wisdom is that the capacity of parallel channels is usually additive. Shannon also conjectured this for the zero-error capacity function; the conjecture was later disproved by explicit counterexamples demonstrating that the zero-error capacity can be super-additive. Despite these explicit examples for the zero-error capacity, surprisingly little is known for nontrivial channels. This paper addresses this question for the arbitrarily varying channel (AVC) under list decoding by developing a complete theory. The list capacity function is studied and shown to be discontinuous, and the corresponding discontinuity points are characterized for all possible list sizes. For parallel AVCs it is then shown that the list capacity is super-additive, implying that joint encoding and decoding for two parallel AVCs can yield a larger list capacity than independent processing of both channels. This discrepancy is shown to be arbitrarily large. Furthermore, the developed theory is applied to the arbitrarily varying wiretap channel to address the scenario of secure communication over AVCs.
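The super-additivity claim in this abstract can be stated compactly as below; the notation (C_L for the list capacity with list size L, and the tensor symbol for using two AVCs in parallel) is assumed here rather than taken from the paper:

```latex
% Super-additivity of the list capacity for parallel AVCs (notation assumed):
% joint coding over the parallel channel can strictly beat coding each AVC
% independently, and the gap can be made arbitrarily large.
\[
  C_L(\mathcal{W}_1 \otimes \mathcal{W}_2) \;\geq\; C_L(\mathcal{W}_1) + C_L(\mathcal{W}_2),
\]
\[
  \sup_{\mathcal{W}_1,\,\mathcal{W}_2}
  \Bigl[\, C_L(\mathcal{W}_1 \otimes \mathcal{W}_2) - C_L(\mathcal{W}_1) - C_L(\mathcal{W}_2) \,\Bigr]
  \;=\; \infty .
\]
```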

10.
Communicative behaviors are a very important aspect of human behavior and deserve special attention when simulating groups and crowds of virtual pedestrians. Previous approaches have tended to focus on generating believable gestures for individual characters and talker‐listener behaviors for static groups. In this paper, we consider the problem of creating rich and varied conversational behaviors for data‐driven animation of walking and jogging characters. We captured ground truth data of participants conversing in pairs while walking and jogging. Our stylized splicing method takes as input a motion captured standing gesture performance and a set of looped full body locomotion clips. Guided by the ground truth metrics, we perform stylized splicing and synchronization of gesture with locomotion to produce natural conversations of characters in motion. Copyright © 2016 John Wiley & Sons, Ltd.
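A hedged sketch of the splicing step: upper-body joint curves are taken from the captured gesture, lower-body joints from the locomotion loop, after resampling both onto a common timeline. The joint names and the simple linear resampling are illustrative assumptions; the paper's stylized splicing is guided by ground-truth metrics rather than this rule:

```python
# Hypothetical sketch of splicing a standing gesture onto a locomotion loop:
# upper-body joint rotations come from the gesture clip, lower-body joints from
# the locomotion clip, after both are resampled onto a common timeline.
import numpy as np

UPPER_BODY = {"spine", "neck", "head", "l_shoulder", "r_shoulder", "l_elbow", "r_elbow"}

def resample(clip, n_frames):
    """clip: dict joint -> (frames, 3) Euler angles. Linearly resample in time."""
    out = {}
    for joint, frames in clip.items():
        frames = np.asarray(frames, float)
        src = np.linspace(0, 1, len(frames))
        dst = np.linspace(0, 1, n_frames)
        out[joint] = np.stack([np.interp(dst, src, frames[:, k]) for k in range(3)], axis=1)
    return out

def splice(gesture_clip, locomotion_clip, n_frames=120):
    gesture = resample(gesture_clip, n_frames)
    locomotion = resample(locomotion_clip, n_frames)
    spliced = {}
    for joint in locomotion:
        use_gesture = joint in UPPER_BODY and joint in gesture
        spliced[joint] = gesture[joint] if use_gesture else locomotion[joint]
    return spliced

# Minimal clips: one upper-body joint and one lower-body joint, a few key frames.
gesture = {"l_shoulder": [[0, 0, 0], [40, 5, 0], [0, 0, 0]], "l_hip": [[0, 0, 0]] * 3}
walk = {"l_shoulder": [[5, 0, 0]] * 4, "l_hip": [[0, 0, 0], [30, 0, 0], [0, 0, 0], [-30, 0, 0]]}
result = splice(gesture, walk)
print(result["l_shoulder"].shape, result["l_hip"].shape)
```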

11.
An affective computing model for interactive virtual humans based on Markov decision processes
Emotion plays a key role in the communication and adaptability of living organisms. Likewise, interactive virtual humans need an appropriate ability to express emotion. Because virtual humans capable of affective interaction have broad application prospects in virtual reality, e-learning, entertainment and other fields, research on adding an affective component to virtual humans is receiving increasing attention. This paper proposes an affective computing model based on artificial psychology. The model describes the evolution of emotion as a Markov process and uses a Markov decision process to link emotion, personality and the environment. We applied the model to an interactive virtual human system. The results show that the model can build virtual humans with different personality traits and make them produce fairly natural emotional responses.
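A minimal sketch of the kind of Markov-style emotion update described above, assuming a hand-made transition matrix, a single stimulus-valence input and one personality trait; none of the numbers come from the paper:

```python
# Hypothetical sketch of a Markov-style emotion update: the next emotional
# state is sampled from a transition matrix that is biased by personality
# (a neuroticism-like trait) and by the current external stimulus.
import numpy as np

EMOTIONS = ["calm", "happy", "angry"]

# Baseline transition probabilities P(next | current), rows sum to 1.
BASE = np.array([[0.7, 0.2, 0.1],
                 [0.3, 0.6, 0.1],
                 [0.4, 0.1, 0.5]])

def next_emotion(current, stimulus_valence, neuroticism, rng):
    """stimulus_valence in [-1, 1]; neuroticism in [0, 1] amplifies negative shifts."""
    i = EMOTIONS.index(current)
    probs = BASE[i].copy()
    # Negative stimuli push probability mass toward 'angry', scaled by personality.
    shift = max(0.0, -stimulus_valence) * (0.5 + 0.5 * neuroticism) * 0.3
    probs[2] += shift
    probs[1] = max(0.0, probs[1] - shift)
    probs /= probs.sum()
    return rng.choice(EMOTIONS, p=probs)

rng = np.random.default_rng(1)
state = "calm"
for valence in [0.5, -0.8, -0.9, 0.2]:
    state = next_emotion(state, valence, neuroticism=0.7, rng=rng)
    print(state)
```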

12.
This paper presents an innovative 3D reconstruction of ancient fresco paintings through the real‐time revival of their fauna and flora, featuring groups of virtual animated characters with artificial‐life dramaturgical behaviours in an immersive, fully mobile augmented reality (AR) environment. The main goal is to push the limits of current AR and virtual storytelling technologies and to explore the processes of mixed narrative design of fictional spaces (e.g. fresco paintings) where visitors can experience a high degree of realistic immersion. Based on a captured/real‐time video sequence of the real scene in a video‐see‐through HMD set‐up, these scenes are enhanced by the seamless accurate real‐time registration and 3D rendering of realistic complete simulations of virtual flora and fauna (virtual humans and plants) in a real‐time storytelling scenario‐based environment. Thus the visitor of the ancient site is presented with an immersive and innovative multi‐sensory interactive trip to the past. Copyright © 2005 John Wiley & Sons, Ltd.

13.
14.
This paper studies the design and application of a novel visual attention model designed to compute the user's gaze position automatically, i.e., without using a gaze-tracking system. The model we propose is specifically designed for real-time first-person exploration of 3D virtual environments. It is the first model adapted to this context that can compute in real time a continuous gaze point position instead of a set of 3D objects potentially observed by the user. To do so, contrary to previous models which use a mesh-based representation of visual objects, we introduce a representation based on surface elements. Our model also simulates visual reflexes and the cognitive processes which take place in the brain, such as the gaze behavior associated with first-person navigation in the virtual environment. Our visual attention model combines both bottom-up and top-down components to compute a continuous gaze point position on screen that ideally matches the user's actual gaze. We conducted an experiment to study and compare the performance of our method with a state-of-the-art approach. Our results are significantly better, with accuracy gains sometimes exceeding 100 percent. This suggests that computing a gaze point in a 3D virtual environment in real time is possible and is a valid approach, compared to object-based approaches. Finally, we present different applications of our model when exploring virtual environments. We describe several algorithms which can improve or adapt the visual feedback of virtual environments based on gaze information. We first propose a level-of-detail approach that heavily relies on multiple-texture sampling. We show that it is possible to use the gaze information of our visual attention model to increase visual quality where the user is looking, while maintaining a high refresh rate. Second, we introduce the use of the visual attention model in three visual effects inspired by the human visual system, namely depth-of-field blur, camera motions, and dynamic luminance. All these effects are computed based on the simulated gaze of the user, and are meant to improve the user's sensations in future virtual reality applications.
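A hedged sketch of the bottom-up/top-down combination over surface elements: each visible element receives a saliency score and a task-relevance score, and the predicted gaze point is the screen position of the best-scoring element. The features, weights and winner-take-all rule are illustrative assumptions, not the published model:

```python
# Hypothetical sketch of combining bottom-up saliency with top-down task
# relevance per visible surface element (surfel) to predict a continuous
# on-screen gaze point. Feature choices and weights are illustrative only.
import numpy as np

def predict_gaze(surfels, w_bottom_up=0.6, w_top_down=0.4):
    """surfels: list of dicts with 'screen_pos' (x, y in [0, 1]),
    'saliency' in [0, 1] (contrast/motion), 'relevance' in [0, 1] (task)."""
    best_score, best_pos = -1.0, (0.5, 0.5)
    for s in surfels:
        score = w_bottom_up * s["saliency"] + w_top_down * s["relevance"]
        if score > best_score:
            best_score, best_pos = score, s["screen_pos"]
    return np.asarray(best_pos)

surfels = [
    {"screen_pos": (0.2, 0.4), "saliency": 0.9, "relevance": 0.1},
    {"screen_pos": (0.6, 0.5), "saliency": 0.4, "relevance": 0.9},
]
print(predict_gaze(surfels))
```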

15.
Simulation of the combat process is the core of a virtual naval battlefield environment and the basis for providing tactical scenarios consistent with real combat. Taking a dive-bombing trajectory simulation model and a particle-system-based explosion-smoke model as examples, this paper describes models of the combat process in a virtual naval battlefield environment and solves key problems in simulating combat processes in such environments.
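A minimal sketch of a particle-system smoke puff of the kind mentioned above (spawn, buoyant drift, lifetime fade); all constants are illustrative placeholders rather than values from the paper:

```python
# Hypothetical sketch of a particle-system explosion-smoke model: particles are
# spawned at the hit point with random velocities, drift upward under buoyancy,
# and expire at the end of their lifetime.
import numpy as np

rng = np.random.default_rng(0)

def spawn(origin, n=200):
    return {
        "pos": np.tile(np.asarray(origin, float), (n, 1)),
        "vel": rng.normal(0.0, 2.0, (n, 3)),
        "life": rng.uniform(1.0, 3.0, n),   # seconds remaining
    }

def update(p, dt=0.033, buoyancy=1.5, drag=0.98):
    alive = p["life"] > 0.0
    p["vel"][alive] *= drag
    p["vel"][alive, 2] += buoyancy * dt      # smoke rises along +z
    p["pos"][alive] += p["vel"][alive] * dt
    p["life"][alive] -= dt
    return p

smoke = spawn(origin=[100.0, 40.0, 2.0])
for _ in range(30):
    smoke = update(smoke)
print(int((smoke["life"] > 0).sum()), "particles still alive")
```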

16.
Virtual simulation of the real behaviour of a mobile harbour crane (MHC), without using the traditional build-and-test method, is an imperative approach at the design stage that can increase product quality by reducing manufacturing cost and errors. This paper introduces an engineering model that describes the mechanical behaviour of the MHC, together with a control design for increasing position accuracy. Based on a concept of the MHC, a virtual mechanical model was created in SOLIDWORKS and then exported to the Automatic Dynamic Analysis of Mechanical Systems (ADAMS) environment. This model was simulated to investigate the dynamic behaviour of the MHC system. In addition, an adaptive sliding-mode PID controller was developed in MATLAB/Simulink to control the crane trolley position and suppress the swing angle of the load. This co-simulation demonstrates the reliability of the mechanical and control functionalities of the developed system.
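To illustrate the control objective (drive the trolley to a target position while damping the load swing), here is a hedged toy sketch using a plain PID term plus a swing-feedback term on a highly simplified plant; it is not the adaptive sliding-mode PID controller or the ADAMS co-simulation from the paper:

```python
# Hypothetical sketch: PID control of trolley position with an added term that
# damps the measured swing angle, applied to a very simplified trolley/pendulum
# model (unit masses). All gains and the plant are illustrative placeholders.
import math

def simulate(target=2.0, kp=8.0, ki=0.5, kd=6.0, k_swing=15.0,
             dt=0.01, steps=2000, rope_len=5.0, g=9.81):
    x = v = theta = omega = integral = 0.0
    prev_err = target - x
    for _ in range(steps):
        err = target - x
        integral += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        # PID on trolley position plus damping of the measured swing angle.
        u = kp * err + ki * integral + kd * deriv - k_swing * theta
        # Simplified dynamics: trolley acceleration equals control input.
        a = u
        alpha = -(g * math.sin(theta) + a * math.cos(theta)) / rope_len
        v += a * dt
        x += v * dt
        omega += alpha * dt
        theta += omega * dt
    return x, theta

x_final, swing_final = simulate()
print(f"trolley position {x_final:.2f} m, residual swing {swing_final:.4f} rad")
```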

17.
Reactive agents of moderate intelligence with good expressive ability are gradually becoming a mainstream technique for building the virtual characters in automatic animation generation systems. From an intelligent-agent perspective, this paper builds a semi-autonomous intelligent virtual character with reactive capabilities that can perceive the behavioral states of itself and of surrounding characters and react autonomously to its environment, with the aim of providing automatic animation generation with more intelligent, more realistic virtual characters exhibiting richer behavior.

18.
Objective: Building virtual characters with believable behavior can make serious games more engaging and improve the user experience. Although graphics rendering for serious games has matured, existing virtual characters mostly express their behavior through deterministic models, which can hardly reflect the diversity of behavioral expression. Method: We construct a serious-game storyline suited to assisted social-skills training and model the virtual characters as agents endowed with both visual and auditory perception. Based on Maslow's theory of motivation, motives such as food, rest, communication and safety drive the generation of emotion, and the Big Five (OCEAN) personality model describes individual differences between characters. Emotion intensity is computed from external stimuli and internal motivational needs, and character behavior is described with behavior trees. A normal cloud model handles the uncertainty of behavioral expression, with concrete treatments given for three typical expressions: walking direction, social distance, and body orientation during conversation. Results: In the implemented game prototype, user-experience tests were conducted on the characters' autonomous behavior and on the uncertainty of their behavioral expression. The results show that, in a scene-exploration task, the autonomous behavior model reduces the time users spend exploring the scene and encourages them to communicate with the virtual characters; in the behavioral-expression test, the naturalness of the proposed model was rated higher than that of the deterministic model. Conclusion: The proposed behavior model improves the user experience to a certain extent and is expected to provide a new way to build virtual characters with believable behavior.
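A minimal sketch of the forward normal cloud generator used to make a behaviour parameter (here, walking direction) vary naturally instead of being fixed; the parameter values are illustrative, not taken from the paper:

```python
# Hypothetical sketch of the forward normal cloud generator: each "cloud drop"
# first draws a dispersion En' ~ N(En, He^2) and then the value itself from
# N(Ex, En'^2), producing controlled, non-deterministic behaviour parameters.
import numpy as np

def normal_cloud(ex, en, he, n, rng):
    """Forward normal cloud generator: returns n 'cloud drops' around ex."""
    en_prime = rng.normal(en, he, n)          # per-drop dispersion
    return rng.normal(ex, np.abs(en_prime))   # per-drop value

rng = np.random.default_rng(2)
# Intended walking direction of 30 degrees, with a mild, adjustable spread.
directions = normal_cloud(ex=30.0, en=5.0, he=1.0, n=8, rng=rng)
print(np.round(directions, 1))
```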

19.
Embodied identity, that is, who we are as a result of our interactions with the world around us with and through our bodies, is increasingly challenged in online environments where identity performances are seemingly untethered from the user's body that is sitting at the computer. Even though disembodiment has been severely criticized in the literature, most conceptualizations of the role of users’ bodies in virtuality nevertheless reflect a representational logic, which fails to capture contemporary users’ experience of cyborgism. Relying on data collected from nine entrepreneurs in the virtual world Second Life (SL), this paper asks how embodied identity is performed in virtual worlds. Contrasting representationalism with performativity, this study highlights that the SL entrepreneurs intentionally re-presented in their avatars some of the attributes of physical bodies, but that they also engaged in habitual practices in-world, thereby unconsciously enacting embodied identities in both their ‘real’ and virtual lives.

20.
Air suspension and full-vehicle simulation based on MSC Adams/Car
A multibody dynamics model of the rear air suspension system of a truck was built with MSC Adams/Car, and the vertical stiffness characteristics of the suspension were computed by simulation to verify the correctness of the model. A full-vehicle Adams/Car model was then built and a steady-state cornering test was simulated. Comparing the simulation results with road-test results validates the virtual prototype model.
