19 similar documents were found (search time: 187 ms).
1.
Research on spatial stereo sound generation algorithms in virtual environments    Cited: 2 (self-citations: 0, other citations: 2)
Three-dimensional auditory information can help a person navigate and roam in a virtual environment and monitor and identify targets. This paper explores how to generate 3D sound information for a user in a virtual environment in real time. It proposes an amplitude phase-shift algorithm based on spatial vectors that can position a sound at an arbitrary point in 3D space. The necessity of normalizing the gain coefficients is explained from physical acoustics, and their geometric interpretation is derived. The layout of the loudspeaker array, the simulation of source motion and distance changes, and the digital mixing of multiple sources are described. Real-time spatial stereo generation in a virtual battlefield is given as an application example.
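The spatial-vector panning and gain normalization this abstract describes can be sketched in the spirit of vector-base amplitude panning. This is a minimal illustration, not the paper's exact algorithm; the three-speaker layout and the constant-power normalization rule are assumptions:

```python
import numpy as np

def panning_gains(speaker_dirs, source_dir):
    """Compute per-speaker gains that place a virtual source at source_dir
    using a base of three loudspeaker direction vectors, then
    power-normalize the gains so their squares sum to one."""
    L = np.asarray(speaker_dirs, dtype=float)   # rows: unit vectors toward speakers
    p = np.asarray(source_dir, dtype=float)
    p = p / np.linalg.norm(p)                   # only the direction matters
    g = np.linalg.solve(L.T, p)                 # solve p = g[0]*L0 + g[1]*L1 + g[2]*L2
    g = np.clip(g, 0.0, None)                   # discard negative (phase-inverted) gains
    return g / np.linalg.norm(g)                # constant-power normalization

# Example with an assumed orthogonal three-speaker layout
speakers = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
gains = panning_gains(speakers, (1, 1, 0))      # source midway between speakers 0 and 1
```

With the source halfway between two speakers, the two active gains come out equal and the normalized gain vector has unit power.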
2.
3.
4.
The auditory channel is one of the most important interfaces in a virtual environment system, and realistic sound simulation is an indispensable part of building a highly realistic, immersive virtual environment. DirectSound provides good support for realistic 3D sound simulation and is currently an excellent development interface in this field. This paper briefly introduces the theory of realistic sound generation and the interface objects involved in simulating realistic sound with DirectSound, discusses in detail the factors that affect the perceived quality of 3D sound in a virtual environment, and shows how to realize these effects through the DirectSound API. Finally, it gives concrete steps for generating realistic 3D sound with DirectSound and presents a simple test program.
5.
6.
7.
8.
9.
10.
In response to the mishandling of some recent mass incidents, this paper designs a police acoustic riot-dispersal system, describes the system's components and the function of each module, and, based on transducer-array theory, studies the directivity of the horn loudspeaker array, laying a theoretical foundation for future practical application.
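The directivity study mentioned here rests on the standard far-field array factor of a uniform line array. A small sketch follows; the element count, spacing, and frequency are assumed example values, not figures from the paper:

```python
import numpy as np

def array_factor(n_elems, spacing_m, freq_hz, theta, c=343.0):
    """Normalized far-field array factor of a uniform line array of
    identical in-phase elements: |sum_n exp(j*k*d*n*sin(theta))| / N."""
    theta = np.atleast_1d(theta).astype(float)
    k = 2 * np.pi * freq_hz / c                 # acoustic wavenumber
    n = np.arange(n_elems)
    psi = k * spacing_m * np.sin(theta)         # inter-element phase shift
    af = np.abs(np.exp(1j * np.outer(psi, n)).sum(axis=1)) / n_elems
    return af

# Assumed example: 8 horns, 10 cm apart, driven at 2 kHz
on_axis = float(array_factor(8, 0.1, 2000.0, 0.0)[0])
oblique = float(array_factor(8, 0.1, 2000.0, np.pi / 6)[0])
```

On axis the elements add coherently (array factor 1); off axis the response drops, which is exactly the beaming behavior a riot-dispersal array exploits.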
11.
By offering a natural, intuitive interface with the virtual world, auditory display can enhance a user's experience in a multimodal virtual environment and further improve the user's sense of presence. However, compared to graphical display, sound synthesis has not been well investigated because of the extremely high computational cost of simulating realistic sounds. The state of the art for sound production in virtual environments is to use recorded sound clips triggered by events in the virtual environment, much as recorded animation sequences once generated all the character motion in the virtual world. In this article, we describe several techniques for accelerating sound simulation, thereby enabling realistic, physically based sound synthesis for large-scale virtual environments.
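Physically based sound synthesis of the kind this article accelerates is often built on modal models, where an impact response is a sum of exponentially damped sinusoids. A minimal sketch of that idea follows; the mode frequencies, dampings, and amplitudes are made-up placeholders, not values from the article:

```python
import numpy as np

def modal_impulse_response(freqs_hz, dampings, amps, dur_s=0.5, sr=44100):
    """Sketch of modal synthesis: an impact response modeled as a sum of
    exponentially damped sinusoids, one per vibration mode of the object."""
    t = np.arange(int(dur_s * sr)) / sr
    out = np.zeros_like(t)
    for f, d, a in zip(freqs_hz, dampings, amps):
        out += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return out

# Two illustrative modes: a fundamental and one overtone
clip = modal_impulse_response([440.0, 1320.0], [8.0, 20.0], [1.0, 0.4])
```

Because each mode is a closed-form function of time, such models can be evaluated per-impact at runtime instead of playing back a fixed recording.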
12.
Realistic three-dimensional (3D) sound is an important component of a virtual battlefield environment. Building on 3D audio theory and DirectSound, this paper analyzes the properties that affect interactive 3D sound, discusses how the relevant parameters should be set during 3D sound generation, and finally addresses the synchronized rendering of audio and visuals, proposing an approach to the problem.
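Among the parameters such papers tune, the distance model is the most basic: DirectSound's default rolloff holds full gain inside a source's minimum distance, attenuates roughly 6 dB per doubling of distance, and stops attenuating beyond the maximum distance. A simplified sketch of that curve (ignoring the per-listener rolloff factor, which real DS3D also applies):

```python
def ds3d_gain(distance, min_dist=1.0, max_dist=100.0):
    """Approximate DirectSound-style distance rolloff: full gain inside
    min_dist, gain halved per doubling of distance (about -6 dB), and
    constant beyond max_dist."""
    d = max(distance, min_dist)   # no boost closer than the minimum distance
    d = min(d, max_dist)          # no further attenuation past the maximum
    return min_dist / d           # inverse-distance law between the clamps

g_near = ds3d_gain(0.5)     # inside min distance: full gain
g_double = ds3d_gain(2.0)   # one doubling past min distance: half gain
```

Setting `min_dist` and `max_dist` per source is how the scene designer controls how quickly each sound fades with range.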
13.
14.
Environmental sound, the most widely distributed class of sound in daily life, is an important source of information about the outside world. Over the past decade or so, as users' demands on the realism of virtual scenes have kept rising, creating synchronized, realistic ambient sound for virtual scenes has become an indispensable part of building highly immersive virtual environments. Environmental sound-source simulation, the cornerstone of realistic virtual ambience, has accordingly received wide attention and exploration from researchers. Compared with traditional manual sound-source simulation, algorithm…
15.
In this paper, we present our approach to designing and implementing a virtual 3D sound sculpting interface that creates audiovisual results from hand motions in real time. In the interface "Virtual Pottery," we use the metaphor of pottery creation to map natural hand motions to 3D spatial sculpting. Users can create their own pottery pieces by changing the position of their hands in real time, and also generate 3D sound sculptures based on pre-existing rules of music composition. The interface of Virtual Pottery can be categorized by shape design and camera sensing type. This paper describes how we developed the two versions of Virtual Pottery and implemented the technical aspects of the interfaces. Additionally, we investigate ways of translating hand motions into musical sound. Accurate detection of hand motions is crucial for carrying natural hand motions into virtual reality. According to the results of preliminary evaluations, both the motion-capture tracking system and the portable depth-sensing camera track hand motion with accuracy close to the actual data. We carried out user studies that took into account information about the two exhibitions along with the varied ages of users. Overall, Virtual Pottery serves as a bridge between the virtual environment and traditional art practices, and consequently it can help cultivate the deep potential of virtual musical instruments and future art education programs.
16.
Automated virtual camera control has been widely used in animation and interactive virtual environments. We have developed a multiple sparse-camera based free-view video system prototype that allows users to control the position and orientation of a virtual camera, enabling the observation of a real scene in three dimensions (3D) from any desired viewpoint. Automatic camera control can be activated to follow objects selected by the user. Our method combines a simple geometric model of the scene composed of planes (the virtual environment), augmented with visual information from the cameras and pre-computed tracking information of moving targets, to generate novel perspective-corrected 3D views of the virtual camera and moving objects. To achieve real-time rendering performance, view-dependent texture-mapped billboards are used to render the moving objects at their correct locations, and foreground masks are used to remove the moving objects from the projected video streams. The current prototype runs on a PC with a common graphics card and can generate virtual 2D views from three cameras of resolution 768×576 with several moving objects at about 11 fps.
17.
Most human-computer interactive systems focus primarily on the graphical rendering of visual information and, to a lesser extent, on the display of auditory information. Haptic interfaces have the potential to increase the quality of human-computer interaction by accommodating the sense of touch. They provide an attractive augmentation to visual display and enhance the level of understanding of complex data sets. A haptic rendering system generates contact or restoring forces to prevent penetration into the virtual objects and create a sense of touch. The system computes contact forces by first detecting if a collision or penetration has occurred. Then, the system determines the (projected) contact points on the model surface. Finally, it computes restoring forces based on the amount of penetration. Researchers have recently investigated the problem of rendering the contact forces and torques between 3D virtual objects. This problem is known as six-degrees-of-freedom (6-DOF) haptic rendering, as the computed output includes both 3-DOF forces and 3-DOF torques. This article presents an overview of our work in this area. We suggest different approximation methods based on the principle of preserving the dominant perceptual factors in haptic exploration.
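The detect-penetration, find-contact-point, compute-restoring-force pipeline this abstract outlines can be illustrated for the simplest case: a point probe against a sphere. This is a 3-DOF penalty-force sketch with an assumed stiffness constant, not the 6-DOF method the article surveys:

```python
import numpy as np

def restoring_force(point, center, radius, stiffness=500.0):
    """Penalty-based contact force against a sphere: zero when the probe
    is outside, otherwise a spring force proportional to penetration
    depth, directed along the outward surface normal at the contact."""
    offset = np.asarray(point, float) - np.asarray(center, float)
    dist = np.linalg.norm(offset)
    depth = radius - dist             # positive means the probe penetrates
    if depth <= 0.0 or dist == 0.0:   # no contact (or degenerate center hit)
        return np.zeros(3)
    normal = offset / dist            # outward normal through the probe
    return stiffness * depth * normal

# Probe 0.1 units inside a unit sphere, straight along +z
f = restoring_force((0.0, 0.0, 0.9), (0.0, 0.0, 0.0), 1.0)
```

Per the abstract's pipeline, `depth <= 0` is the collision test, `normal` locates the projected contact direction, and the returned spring force grows with the amount of penetration.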
18.
To meet a training system's requirements for a good human-computer interface and strong immersion, the real-time 3D authoring tool Quest3D was adopted in developing a drill-press teaching system, and a virtual-reality development scheme for drill-press training is proposed. The system is divided into a 3D display module, a motion-control module, a scene-interaction module, and a scene voice-control module. Quest3D's graphical programming shortens the development cycle of the whole system and improves its runtime efficiency. Interactive 3D virtual scene technology was successfully applied to the drill-press training system.
19.
Comparing effects of 2-D and 3-D visual cues during aurally aided target acquisition    Cited: 1 (self-citations: 0, other citations: 1)
The aim of the present study was to investigate interactions between vision and audition during a visual target acquisition task performed in a virtual environment. In two experiments, participants were required to perform an acquisition task guided by auditory and/or visual cues. In both experiments the auditory cues were constructed using virtual 3-D sound techniques based on nonindividualized head-related transfer functions. In Experiment 1 the visual cue was constructed in the form of a continuously updated 2-D arrow. In Experiment 2 the visual cue was a nonstereoscopic, perspective-based 3-D arrow. The results suggested that virtual spatial auditory cues reduced acquisition time but were not as effective as the virtual visual cues. Experiencing the 3-D perspective-based arrow rather than the 2-D arrow produced a faster acquisition time not only in the visually aided conditions but also when the auditory cues were presented in isolation. Suggested novel applications include providing 3-D nonstereoscopic, perspective-based visual information on radar displays, which may lead to a better integration with spatial virtual auditory information.