Similar Articles
20 similar articles found (search time: 78 ms)
1.
Head-mounted displays (HMDs) allow users to immerse themselves in a virtual environment (VE) in which the user's viewpoint can be changed according to tracked movements in real space. Because the size of the virtual world often differs from the size of the tracked lab space, a straightforward implementation of omni-directional and unlimited walking is not generally possible. In this article we review and discuss a set of techniques that use known perceptual limitations and illusions to support seemingly natural walking through a large virtual environment in a confined lab space. The concept behind these techniques is called redirected walking. With redirected walking, users are guided unnoticeably on a physical path that differs from the path they perceive in the virtual world by manipulating the transformations from real to virtual movements. For example, virtually rotating the view in the HMD to one side with every step causes the user to unknowingly compensate by walking a circular arc in the opposite direction, while having the illusion of walking on a straight trajectory. We describe a number of perceptual illusions that exploit limitations of motion detectors to manipulate the user's perception of the speed and direction of their motion. We describe how gains of locomotor speed, rotation, and curvature can gradually alter the physical trajectory without users noticing any discrepancy, and discuss studies that investigated perceptual thresholds for these manipulations. We discuss the potential of self-motion illusions to shift or widen the applicable ranges for gain manipulations and to compensate for over- or underestimations of speed or travel distance in VEs. Finally, we identify a number of key issues for future research on this topic.
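As a rough illustration of how such gains might be applied per tracked update (the function, parameter names, and numeric values below are hypothetical and not taken from the article):

```python
def redirect_view(virtual_yaw, real_yaw_delta, step_length,
                  rotation_gain=1.2, curvature_radius=22.0):
    """Apply rotation and curvature gains for one tracked update.

    virtual_yaw      : current virtual camera yaw (radians).
    real_yaw_delta   : tracked head rotation since the last update (radians).
    step_length      : tracked forward distance walked since the last update (m).
    rotation_gain    : scales real rotations before they reach the display.
    curvature_radius : radius of the physical arc induced while the user
                       walks a virtually straight path (1/radius rad per metre).
    All numeric values are illustrative, not thresholds from the article.
    """
    virtual_yaw += rotation_gain * real_yaw_delta   # rotation gain
    virtual_yaw += step_length / curvature_radius   # curvature gain
    return virtual_yaw
```

A rotation gain slightly amplifies or damps real head turns, while the curvature term injects roughly 1/curvature_radius radians of extra virtual rotation per metre walked, so the user unconsciously compensates by walking along a physical arc.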

2.
Previous research identified that learning assembly tasks in Virtual Environments (VEs) is more difficult than in Real Environments (REs). This work's objective is to identify the key visual areas in both environments when performing an assembly task for ten consecutive cycles, when following visual instructions, and in the presence of visual distractors. Using an eye-tracker, we identified the key visual areas required for an assembly task in both environments. Results indicate that practice allowed participants to reduce their assembly time in both environments. They also indicate that two areas, Assembly Area and Blocks, concentrated a higher proportion of eye fixations: 59.98% for REs and 81.48% for VEs, with a statistically significant difference in observation between environments (t = −14.23, p < 0.00001, Cohen's d = 6.36). We conclude that participants considered the same key visual areas in both environments and that VE interaction plays a significant role in observation behavior.
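For reference, the reported effect size presumably follows the standard pooled-standard-deviation definition of Cohen's d (an assumption; the abstract does not state which estimator was used):

```latex
d = \frac{\bar{x}_{\mathrm{VE}} - \bar{x}_{\mathrm{RE}}}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_1-1)\,s_1^2 + (n_2-1)\,s_2^2}{n_1+n_2-2}}
```

By the usual conventions (0.8 already counts as a large effect), d = 6.36 indicates an extremely large difference in fixation proportions between the two environments.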

3.
In perceiving a virtual environment (VE), distance perception is important in that it affects users' behaviors and interactions. This study addresses an ongoing issue of virtual reality (VR): distance underestimation within a VE. For this purpose, a 2 × 2 × 2 × 5 mixed design was used, with two between-subject variables [visual cue for perception (provided and not provided) and visual cue for action (provided and not provided)] and two within-subject variables [response measure (verbal estimation and triangulated blind walking) and distance (1 m, 5 m, 10 m, 20 m, and 35 m)]. Sixty-four undergraduate or graduate students participated in the experiment, and a cube was chosen as the target object. The experimental results showed that visual cues for action mitigated distance underestimation and improved the accuracy of perceived distances. In general, distance underestimation was more pronounced in triangulated blind walking than in verbal estimation, and in the action/vista space than in the personal space, with relatively lower accuracies. When visual cues for action were provided, the degree of underestimation decreased significantly, with higher accuracy particularly in triangulated blind walking. Overestimation was found across all distances in verbal estimation, whereas in triangulated blind walking overestimation was found only for the personal distance and underestimation for the action/vista distances. In both response measures, the accuracy of distance perception was worst in the 10 m condition. These findings can be expected to provide insight into the problem of distance underestimation and can help guide the design of VEs.

4.
Redirected Free Exploration with Distractors (RFED) is a large-scale real-walking locomotion interface developed to enable people to walk freely in Virtual Environments (VEs) that are larger than the tracked space in their facility. This paper describes the RFED system in detail and reports on a user study that evaluated RFED by comparing it to Walking-in-Place (WIP) and Joystick (JS) interfaces. The RFED system is composed of two major components: redirection and distractors. This paper discusses design challenges, implementation details, and lessons learned during the development of two working RFED systems. The evaluation study examined the effect of the locomotion interface on users' cognitive performance on navigation and wayfinding measures. The results suggest that participants using RFED were significantly better at navigating and wayfinding through virtual mazes than participants using the walking-in-place and joystick interfaces. Participants traveled shorter distances, made fewer wrong turns, pointed to hidden targets more accurately and more quickly, placed and labeled targets on maps more accurately, and estimated the size of the virtual environment more accurately.

5.
In this paper, we describe the results of an experimental study whose objective was twofold: (1) comparing three navigation aids that help users perform wayfinding tasks in desktop virtual environments (VEs) by pointing out the location of objects or places; (2) evaluating the effects of user experience with 3D desktop VEs on the effectiveness of the considered navigation aids. In particular, we compared navigation performance (in terms of total time to complete an informed search task) of 48 users divided into two groups: subjects in one group had experience in navigating 3D VEs while subjects in the other group did not. The experiment comprised four conditions that differed in the navigation aid employed. The first and second conditions exploited 3D and 2D arrows, respectively, to point towards objects that users had to reach; in the third condition, a radar metaphor was employed to show the location of objects in the VE; the fourth condition was a control condition with no location-pointing navigation aid available. The search task was performed both in a VE representing an outdoor geographic area and in an abstract VE that did not resemble any familiar environment. For each VE, users were also asked to order the four conditions according to their preference. Results show that the navigation aid based on 3D arrows outperformed the others, both in terms of user performance and user preference, except when it was used by experienced users in the geographic VE; in that case, it was as effective as the others. Finally, in the geographic VE, experienced users took significantly less time than inexperienced users to perform the informed search, while in the abstract VE the difference was significant only in the control and radar conditions. From a more general perspective, our study highlights the need to take user experience in navigating VEs into specific consideration when designing navigation aids and evaluating their effectiveness.

6.
As a powerful interaction technology, haptically enhanced virtual environments (VEs) have found many useful applications. However, few studies have examined how the wayfinding of users with visual impairments is affected by VE characteristics. An empirical experiment was conducted to investigate how different environmental characteristics (number of objects inside the environment, layout of the objects, and density) affect task performance (completion time, completion ratio, and travel distance), perceived task difficulty, and behavior patterns (short and long pauses) of users with visual impairments when they perform a wayfinding task in a desktop-based haptically enhanced VE. The study found that the number of objects inside the environment and the layout of the objects play a significant role in determining completion time and distance traveled. Layout type also greatly affected users' behavioral patterns in terms of frequency of pauses. Finally, perceived task difficulty varied with different environmental characteristics. The results should provide insight into future research and development of haptically enhanced VEs for people with visual impairments.

7.
We define a virtual environment as a set of surroundings that appear to a user through computer-generated sensory stimuli. The level of immersion (the sense of being in another world) that a user experiences within a VE relates to how much stimulation the computer delivers to the user. Thus, one can classify VEs along a virtuality continuum, which ranges from the real world to an entirely computer-generated environment. We present a technology that allows seamless transitions between levels of immersion in VEs. Milgram and Kishino (1994) first proposed the concept of a virtuality continuum in the context of visual displays. The concept extends to multimodal VEs, which combine multiple sensory stimuli, including 3D sound and haptic capability, leading to a multidimensional virtuality continuum. Emerging applications will benefit from multiple levels of immersion, requiring innovative multimodal technologies and the ability to traverse the multidimensional virtuality continuum.

8.
OBJECTIVE: Two experiments examined whether prior interaction within an immersive virtual environment (VE) enabled people to improve the accuracy of their distance judgments and whether an improved ability to estimate distance generalized to other means of estimating distances. BACKGROUND: Prior literature has consistently found that users of immersive VEs underestimate distances by approximately 50%. METHOD: In each of the two experiments, 16 participants viewed objects in an immersive VE and estimated their distance to them by means of blindfolded walking tasks before and after interacting with the VE. RESULTS: The interaction task significantly corrected users' underestimation bias to nearly veridical. Differences between pre- and post-interaction mean distance estimation accuracy were large (d = 4.63) and significant (p < .001), and they generalized across response tasks. CONCLUSION: This finding limits the generality of the underestimation effect in VEs and suggests that distance underestimation may not be a roadblock to the development of VE applications. APPLICATION: Potential or actual applications of this research include the improvement of VE systems requiring accurate spatial awareness.

9.
Motion perception in immersive virtual environments significantly differs from the real world. For example, previous work has shown that users tend to underestimate travel distances in virtual environments (VEs). As a solution to this problem, researchers proposed scaling the mapped virtual camera motion relative to the tracked real-world movement of a user until real and virtual motion are perceived as equal; that is, real-world movements could be mapped to the VE with a larger gain in order to compensate for the underestimation. However, introducing discrepancies between real and virtual motion can become a problem, in particular due to misalignments of both worlds and distorted space cognition. In this paper, we describe a different approach that introduces apparent self-motion illusions by manipulating optic flow fields during movements in VEs. These manipulations can affect self-motion perception in VEs but avoid a quantitative discrepancy between real and virtual motions. In particular, we consider to which regions of the virtual view these apparent self-motion illusions can be applied, i.e., the ground plane or peripheral vision. We introduce four illusions and show in experiments that optic flow manipulation can significantly affect users' self-motion judgments. Furthermore, we show that with such manipulations of optic flow fields the underestimation of travel distances can be compensated.
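For context, the gain-based mapping that this paper contrasts against can be sketched as follows (a minimal NumPy-based sketch; the function name and the gain value are ours, not from the paper):

```python
import numpy as np

def map_head_translation(prev_real_pos, real_pos, virtual_pos, translation_gain=1.3):
    """Map a tracked real-world head displacement to virtual camera translation.

    A gain > 1 makes virtual travel distances longer than the corresponding
    real ones, which is the kind of scaling proposed to compensate for
    distance underestimation. The value 1.3 is purely illustrative.
    """
    real_delta = np.asarray(real_pos, dtype=float) - np.asarray(prev_real_pos, dtype=float)
    return np.asarray(virtual_pos, dtype=float) + translation_gain * real_delta
```

The optic-flow approach described in the abstract instead keeps this mapping one-to-one and manipulates only the rendered flow field (e.g., on the ground plane or in the periphery).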

10.
Driving simulation: challenges for VR technology
Virtual driving environments represent a challenging test for virtual reality technology. We present an overview of our work on the problems of scenario and scene modeling for virtual environments (VEs) in the context of the Iowa Driving Simulator (IDS). The requirements of driving simulation (a deterministic real-time software system that integrates components for user interaction, simulation, and scenario and scene modeling) make it a valuable proving ground for VE technologies. The goal of our research is not simply to improve driving simulation, but to develop technology that benefits a wide variety of VE applications. For example, our work on authoring high-fidelity VE databases and on directable scenarios populated with believable agents also targets applications involving interaction with simulated, walking humans and training in the operation of complex machinery. This work has benefited greatly from the experience of developing components for a full-scale operational VE system like IDS, and we believe that many other proposed VE technologies would similarly benefit from such real-world testing.

11.
Studies examined the potential use of VEs in teaching historical chronology to 127 children of primary school age (8-9 years). The use of passive fly-through VEs had been found, in an earlier study, to be disadvantageous with this age group when tested on their subsequent ability to place displayed sequential events in correct chronological order. All VEs in the present studies included active challenge, previously shown to enhance learning in older participants. Primary school children in the UK (all frequent computer users) were tested using UK historical materials, but no significant effect was found among the three conditions (Paper, PowerPoint and VE) with minimal pre-training. However, excellent (error-free) learning occurred when children were allowed greater exploration prior to training in the VE. In Ukraine, with children having much less computer familiarity, training in a VE (depicting Ukrainian history) produced better learning than PowerPoint, but no better than the Paper condition. The results confirmed the benefit of using challenge in a VE with primary-age children, but only with adequate prior familiarisation with the medium. Familiarity may reduce working memory load and increase children's spatial memory capacity for acquiring sequential temporal-spatial information from virtual displays.

12.
Virtual environments (VEs) have been shown to be beneficial in physical rehabilitation, increasing motivation and the range of exercises that can be safely performed. However, little is known about how disabilities may affect a user's responses to a VE, which could in turn affect rehabilitation motivation. Thus, the primary objective of this research is to understand how VEs affect users with mobility impairments (MI). Specifically, we investigate the influence of full-body avatars that have canes. To begin investigating this, we designed a VE that included a range of multimodal feedback to induce a strong sense of presence and was novel to the participants. Using this VE, we conducted a study with two populations: eight persons with MI and eight healthy persons as a control. The healthy participants were of similar demographics (e.g., age, weight, height, and previous VE experience) to the participants with MI, who walked with a cane, on the basis of strict selection criteria to maintain homogeneity. This is one of the first studies to investigate how a VE can affect the gait, physiological responses, presence, and behavior of users with MI, as well as the influence of avatars. Results of the study suggest generalizable guidelines for the design of VEs for users with MI.

13.
This paper describes an investigation of the types of problems that may be experienced by Virtual Reality (VR) users. Initial concerns have been voiced about various aspects of the design of VR equipment, particularly the physical ergonomics of head-mounted displays (HMDs) and hand-held input devices, and the problems associated with display resolution and lag. This study investigated a number of VR users' perceptions of the types of physical ergonomics issues that they were aware of when participating in a number of different virtual environments (VEs), using different VR systems. Several methods were employed, including questionnaires, body mapping, user observation and interviews. Issues highlighted as either causing participants discomfort or interfering with their experience of the VE were: discomfort from static posture requirements, general discomfort from wearing the HMD, difficulty becoming accustomed to 3D hand-held input devices, dissatisfaction with deficits in the visual display, and fear of getting 'tangled' in connecting cables. The implications of these findings for developers, implementers and users of VR are discussed.

14.
User-centered design and evaluation of virtual environments
We present a structured, iterative methodology for user-centered design and evaluation of VE user interaction. We recommend performing (1) user task analysis, followed by (2) expert guidelines-based evaluation, (3) formative user-centered evaluation, and finally (4) comparative evaluation. In this article we first give the motivation and background for our methodology, then describe each technique in some detail. We applied these techniques to a real-world battlefield visualization VE. Finally, we discuss why this approach provides a cost-effective strategy for assessing and iteratively improving user interaction in VEs.

15.
This paper presents a tool for the visual analysis of navigation patterns of moving entities, such as users, virtual characters or vehicles, in 3D virtual environments (VEs). The tool, called VU-Flow, provides a set of interactive visualizations that highlight interesting navigation behaviors of single moving entities or of groups of entities that were in the VE together or separately. The visualizations help to improve the design of VEs and to study the navigation behavior of users, e.g., during controlled experiments. Besides VEs, the proposed techniques could also be applied to visualize real-world data recorded by positioning systems, allowing VU-Flow to be employed in domains such as urban planning, transportation, and emergency response.

16.
Immersive spaces such as 4-sided displays with stereo viewing and high-quality tracking provide a very engaging and realistic virtual experience. However, walking is inherently limited by the restricted physical space, both due to the screens (limited translation) and the missing back screen (limited rotation). In this paper, we propose three novel locomotion techniques that pursue three concurrent goals: keep the user safe from reaching the translational and rotational boundaries; increase the amount of real walking; and provide a more enjoyable and ecological interaction paradigm than traditional controller-based approaches. Notably, we introduce the "Virtual Companion", which uses a small bird to guide the user through VEs larger than the physical space. We evaluate the three new techniques through a user study with travel-to-target and path-following tasks. The study provides insight into the relative strengths of each new technique for the three aforementioned goals. Specifically, if speed and accuracy are paramount, traditional controller interfaces augmented with our novel warning techniques may be more appropriate; if physical walking is more important, two of our paradigms (extended Magic Barrier Tape and Constrained Wand) should be preferred; finally, fun and ecological criteria favor the Virtual Companion.

17.
Only a few studies in the literature have focused on the effects of age on susceptibility to virtual environment (VE) sickness, and even less research has focused on the elderly. In general, the elderly usually browse VEs on a thin-film-transistor liquid crystal display (TFT-LCD) at home or elsewhere, rather than on a head-mounted display (HMD). When a TFT-LCD is used to present VEs, the set-up does not physically enclose the user. Therefore, this study investigated the factors that contribute to cybersickness among the elderly when viewing a VE on a TFT-LCD, including exposure duration, navigational rotating speed and angle of inclination. Participants were elderly, with an average age of 69.5 years. The results of the first experiment showed that simulator sickness questionnaire (SSQ) scores increase significantly with navigational rotating speed and duration of exposure. However, the experimental data also showed that SSQ scores do not increase with the angle of inclination. Applying these findings, neuro-fuzzy technology was used to develop a neuro-fuzzy cybersickness-warning system that integrates fuzzy logic reasoning and neural network learning; the contributing factors were navigational rotating speed and duration of exposure. The results of the second experiment showed that the proposed system can efficiently determine the level of cybersickness from the associated subjective sickness estimates and combat cybersickness due to long exposure to a VE.
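A loose sketch of the kind of rule-based inference such a warning system might perform (the membership functions, rule weights, and numbers below are invented for illustration; the paper's neuro-fuzzy model is learned from data):

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def cybersickness_level(rotation_speed_dps, exposure_min):
    """Toy fuzzy estimate (0..1) of cybersickness from navigational rotating
    speed (deg/s) and exposure duration (minutes). Purely illustrative."""
    fast = tri(rotation_speed_dps, 20, 60, 100)   # membership in "fast rotation"
    long_exp = tri(exposure_min, 10, 30, 60)      # membership in "long exposure"
    # Two toy rules, aggregated by a weighted maximum:
    #   R1: fast rotation AND long exposure -> high sickness
    #   R2: fast rotation OR  long exposure -> moderate sickness
    return max(min(fast, long_exp), 0.5 * max(fast, long_exp))

print(cybersickness_level(45.0, 25.0))
```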

18.
It has been suggested that immersive virtual reality (VR) technology allows knowledge-building experiences and in this way provides an alternative educational process. Key features of constructivist, computer-based educational environments for science teaching and learning include interaction, size, transduction and reification. Indeed, multi-sensory VR technology is well suited to sciences that require a high level of visualization and interaction. Haptics, which refers to physical interaction with virtual environments (VEs), may be coupled with other sensory modalities such as vision and audition, but is hardly ever combined with other feedback channels such as olfactory feedback. A survey of theory and of existing VEs that include haptic or olfactory feedback, especially in the field of education, is provided. We describe our multi-modal, human-scale VE VIREPSE (virtual reality platform for simulation and experimentation), which provides haptic interaction using a string-based interface called SPIDAR (space interface device for artificial reality) together with olfactory and auditory feedback. An application that allows students to experience the abstract concept of the Bohr atomic model and the quantization of energy levels has been developed. Different configurations that support interaction, size and reification through the use of immersive and multi-modal (visual, haptic, auditory and olfactory) feedback are proposed for further evaluation. Haptic interaction is achieved using different techniques, ranging from desktop pseudo-haptic feedback to human-scale haptic interaction. Olfactory information is provided using different fan-based olfactory displays (ODs). The significance of developing such multi-modal VEs for education is discussed.

19.
Head-mounted displays (HMDs) allow users to observe virtual environments (VEs) from an egocentric perspective. However, several experiments have provided evidence that egocentric distances are perceived as compressed in VEs relative to the real world. Recent experiments suggest that the virtual view frustum set for rendering the VE has an essential impact on the user's estimation of distances. In this article we analyze whether distance estimation can be improved by calibrating the view frustum for a given HMD and user. Unfortunately, in an immersive virtual reality (VR) environment a full per-user calibration is not trivial, and manual per-user adjustment often leads to minification or magnification of the scene. Therefore, we propose a novel per-user calibration approach with optical see-through displays commonly used in augmented reality (AR). This calibration takes advantage of a geometric scheme based on 2D-point to 3D-line correspondences, which can be used intuitively by inexperienced users and requires less than a minute to complete. The required user interaction is based on taking aim at a distant target marker with a close marker, which ensures non-planar measurements covering a large area of the interaction space while also reducing the number of required measurements to five. We found a tendency for a calibrated view frustum to reduce the average distance underestimation of users in an immersive VR environment, but even the correctly calibrated view frustum could not entirely compensate for the distance underestimation effects.

20.
Stereoscopic depth cues improve depth perception and increase immersion within virtual environments (VEs). However, improper display of these cues can distort perceived distances and directions. Consider a multi-user VE in which all users view identical stereoscopic images regardless of physical location. In this scenario, cues are typically customized for one "leader" equipped with a head-tracking device. This user stands at the center of projection (CoP), while all other users ("followers") view the scene from other locations and receive improper depth cues. This paper examines perceived depth distortion when viewing stereoscopic VEs from follower perspectives, and the impact of these distortions on collaborative spatial judgments. Pairs of participants made collaborative depth judgments of virtual shapes viewed from the CoP or after displacement forward or backward. Forward and backward displacement caused perceived depth compression and expansion, respectively, with greater compression than expansion. Furthermore, the distortion was less than predicted by a ray-intersection model of stereo geometry. Collaboration times were significantly longer when participants stood at different locations than at the same location, and increased with greater perceived depth discrepancy between the two viewing locations. These findings advance our understanding of spatial distortions in multi-user VEs and suggest a strategy for reducing distortion.
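As a geometric illustration of the ray-intersection model mentioned above (the coordinate frame, the numbers, and the assumed 6.5 cm interpupillary distance are ours, not from the paper), the sketch below computes where a follower standing 0.5 m behind the CoP perceives a point rendered 1 m behind the screen; it predicts depth expansion, consistent with the reported direction of distortion for backward displacement.

```python
import numpy as np

def project_to_screen(eye, point):
    """Intersect the ray from `eye` through `point` with the screen plane z = 0."""
    t = eye[2] / (eye[2] - point[2])
    return eye + t * (point - eye)

def closest_point_between_rays(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two (possibly skew) rays."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = o1 - o2
    denom = a * c - b * b
    s = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))

ipd = 0.065                                  # assumed interpupillary distance (m)
leader = np.array([0.0, 1.7, 2.0])           # leader's cyclopean eye; screen at z = 0
follower = leader + np.array([0.0, 0.0, 0.5])  # follower displaced 0.5 m backward
half = np.array([ipd / 2, 0.0, 0.0])
target = np.array([0.0, 1.5, -1.0])          # virtual point 1 m behind the screen

# Screen points are rendered for the leader's eyes...
screen_l = project_to_screen(leader - half, target)
screen_r = project_to_screen(leader + half, target)

# ...but the follower perceives the point where rays from their own eyes
# through those same screen points (nearly) intersect.
eye_l, eye_r = follower - half, follower + half
perceived = closest_point_between_rays(eye_l, screen_l - eye_l,
                                       eye_r, screen_r - eye_r)
print("true depth behind screen:", -target[2])
print("perceived depth behind screen:", round(float(-perceived[2]), 3))
```

Running this toy example gives a perceived depth of about 1.25 m for a true depth of 1 m, i.e., expansion for the backward-displaced follower; the paper reports that measured distortion is smaller than such purely geometric predictions.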
