Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
Opportunistic Controls are a class of user interaction techniques that we have developed for augmented reality (AR) applications to support gesturing on, and receiving feedback from, otherwise unused affordances already present in the domain environment. By leveraging characteristics of these affordances to provide passive haptics that ease gesture input, Opportunistic Controls simplify gesture recognition and provide tangible feedback to the user. In this approach, 3D widgets are tightly coupled with affordances to provide visual feedback and hints about the functionality of the control. For example, a set of buttons can be mapped to existing tactile features on domain objects. We describe examples of Opportunistic Controls that we have designed and implemented using optical marker tracking, combined with appearance-based gesture recognition. We present the results of two user studies. In the first, participants performed a simulated maintenance inspection of an aircraft engine using a set of virtual buttons implemented both as Opportunistic Controls and using simpler passive haptics. Opportunistic Controls allowed participants to complete their tasks significantly faster and were preferred over the baseline technique. In the second, participants proposed and demonstrated user interfaces incorporating Opportunistic Controls for two domains, allowing us to gain additional insights into how user interfaces featuring Opportunistic Controls might be designed.

2.
Ubiquitous computing is a challenging area that allows us to further our understanding and techniques of context-aware and adaptive systems. Among the challenges is the general problem of capturing the larger context in interaction from the perspective of user modeling and human–computer interaction (HCI). The imperative to address this issue is great considering the emergence of ubiquitous and mobile computing environments. This paper describes how we addressed the specific problem of supporting functionality, as well as the experience design issues related to museum visits, through user modeling in combination with an audio augmented reality and tangible user interface system. This paper details our deployment and evaluation of ec(h)o – an augmented audio reality system for museums. We explore the possibility of supporting a context-aware adaptive system by linking environment, interaction objects and users at an abstract semantic level instead of at the content level. From the user modeling perspective, ec(h)o is a knowledge-based recommender system. In this paper we present our findings from user testing and how our approach works well with an audio and tangible user interface within a ubiquitous computing system. We conclude by showing where further research is needed.

3.
Recent user interface concepts, such as multimedia, multimodal, wearable, ubiquitous, tangible, or augmented-reality-based (AR) interfaces, each cover different approaches that are all needed to support complex human–computer interaction. Increasingly, an overarching approach towards building what we call ubiquitous augmented reality (UAR) user interfaces, which include all of the concepts just mentioned, will be required. To this end, we present a user interface architecture that can form a sound basis for combining several of these concepts into complex systems. We explain in this paper the fundamentals of the DWARF user interface framework (DWARF stands for Distributed Wearable Augmented Reality Framework) and an implementation of this architecture. Finally, we present several examples that show how the framework can form the basis of prototypical applications.

4.
Several studies have been carried out on augmented reality (AR)-based environments that deal with user interfaces for manipulating and interacting with virtual objects, aimed at improving immersive feeling and natural interaction. Most of these studies have utilized AR paddles or AR cubes for interactions. However, these interactions overly constrain the users in their ability to directly manipulate AR objects and are limited in providing natural feeling in the user interface. This paper presents a novel approach to natural and intuitive interactions through a direct hand touchable interface in various AR-based user experiences. It combines markerless augmented reality with a depth camera to effectively detect multiple hand touches in an AR space. Furthermore, to simplify hand touch recognition, the point cloud generated by Kinect is analyzed and filtered. The proposed approach can easily trigger AR interactions, allows users to experience more intuitive and natural sensations, and provides greater control efficiency in diverse AR environments. Furthermore, it can easily solve the occlusion problem of the hand and arm region inherent in conventional AR approaches through analysis of the extracted point cloud. We present the effectiveness and advantages of the proposed approach by demonstrating several implementation results such as interactive AR car design and a touchable AR pamphlet. We also present an analysis of a usability study to compare the proposed approach with other well-known AR interactions.
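The filtering step described above, where the depth camera's point cloud is reduced to candidate hand-touch points near a surface, can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's actual pipeline; the names `detect_touches`, the `touch_band` thresholds, and the grid-cell clustering are all assumptions made for the sketch:

```python
def detect_touches(points, surface_z=0.0, touch_band=(0.005, 0.03), cell=0.02):
    """Naive touch-detection sketch: keep only points hovering just above
    the surface plane, then bin them into grid cells to form touch blobs.
    points: iterable of (x, y, z) in meters; thresholds are illustrative."""
    lo, hi = touch_band
    # Keep points whose height above the surface falls in the touch band.
    near = [(x, y) for (x, y, z) in points if lo <= z - surface_z <= hi]
    # Cluster nearby points by snapping them to a coarse grid.
    blobs = {}
    for x, y in near:
        key = (round(x / cell), round(y / cell))
        blobs.setdefault(key, []).append((x, y))
    # Report the centroid of each blob as one touch point.
    return [
        (sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts))
        for pts in blobs.values()
    ]
```

A real implementation would also remove sensor noise and track blobs over time, but the band-pass-then-cluster structure is the core idea.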

5.
Toward spontaneous interaction with the Perceptive Workbench
Until now, we have interacted with computers mostly by using wire-based devices. Typically, the wires limit the distance of movement and inhibit freedom of orientation. In addition, most interactions are indirect. The user moves a device as an analog for the action created in the display space. We envision an untethered interface that accepts gestures directly and can accept any objects we choose as interactors. We discuss methods for producing more seamless interaction between the physical and virtual environments through the Perceptive Workbench. We applied the system to an augmented reality game and a terrain navigating system. The Perceptive Workbench can reconstruct 3D virtual representations of previously unseen real-world objects placed on its surface. In addition, the Perceptive Workbench identifies and tracks such objects as they are manipulated on the desk's surface and allows the user to interact with the augmented environment through 2D and 3D gestures.

6.
In this paper, we propose an approach to tangible augmented reality (AR) based design evaluation of information appliances, which not only exploits the use of tangible objects without hardwired connections to provide better visual immersion and support more tangible interaction, but also facilitates the adoption of a simple and low-cost AR environment setup to improve user experience and performance. To enhance the visual immersion, we develop a solution for resolving hand occlusion in which skin color information is exploited, together with the use of the tangible objects, to detect the hand regions properly. To improve the tangible interaction with the sense of touch, we introduce the use of product- and fixture-type objects, which provides the feeling of holding the product in one's hands and touching buttons with one's index fingertip in the AR setup. To improve user experience and performance in view of hardware configuration, we adopt a simple and cost-effective AR setup that properly meets guidelines such as viewing size and distance, working posture, viewpoint matching, and camera movement. From experimental results, we found that the AR setup improves user experience and performance in design evaluation of handheld information appliances. We also found that the tangible interaction combined with the hand occlusion solver in the AR setup is very useful for improving tangible interaction and immersive visualization of virtual products, while letting the user experience the shapes and functions of the products comfortably.
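The hand-occlusion step above hinges on segmenting skin-colored pixels so the real hand can be drawn in front of the virtual product. A minimal sketch of such a classifier is shown below, using a classic RGB skin-color rule; this is a common heuristic chosen for illustration, not necessarily the exact classifier the paper uses, and `is_skin_rgb`/`hand_mask` are names invented for the sketch:

```python
def is_skin_rgb(r, g, b):
    """Classic RGB skin-color rule (a widely used heuristic):
    bright enough, enough spread between channels, and red-dominant."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

def hand_mask(image):
    """image: list of rows of (r, g, b) tuples.
    Returns a boolean mask marking likely hand pixels; a renderer can keep
    these pixels in front of virtual content to resolve hand occlusion."""
    return [[is_skin_rgb(r, g, b) for (r, g, b) in row] for row in image]
```

In practice the mask would be cleaned up with morphological filtering and restricted to regions near the tracked tangible object, as the abstract suggests.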

7.
We describe an augmented reality (AR) system that allows multiple participants to interact with 2D and 3D data using tangible user interfaces. The system features face-to-face communication, collaborative viewing and manipulation of 3D models, and seamless access to 2D desktop applications within the shared 3D space. All virtual content, including 3D models and 2D desktop windows, is attached to tracked physical objects in order to leverage the efficiencies of natural two-handed manipulation. The presence of 2D desktop space within 3D facilitates data exchange between the two realms, enables control of 3D information by 2D applications, and generally increases productivity by providing access to familiar tools. We present a general concept for a collaborative tangible AR system, including a comprehensive set of interaction techniques, a distributed hardware setup, and a component-based software architecture that can be flexibly configured using XML. We show the validity of our concept with an implementation of an application scenario from the automotive industry.

8.
This paper proposes tangible interfaces and interactions for authoring 3D virtual and immersive scenes easily and intuitively in a tangible augmented reality (AR) environment. It provides tangible interfaces for manipulating virtual objects in a natural and intuitive manner and supports adaptive and accurate vision-based tracking in AR environments. In particular, RFID is used to directly integrate physical objects with virtual objects and to systematically support tangible queries of the relation between physical objects and virtual ones, which can provide more intuitive tangibility and a new way of virtual object manipulation. Moreover, the proposed approach offers an easy and intuitive switching mechanism between the tangible environment and the virtual environment. This paper also proposes a context-adaptive marker tracking method which resolves an inconsistency problem that arises when embedding virtual objects into physical ones in tangible AR environments. The context-adaptive tracking method not only adjusts the locations of invisible markers by interpolating the locations of existing reference markers and those of previous ones, but also removes the jumping effect of movable virtual objects when their references are changed from one marker to another. Several case studies for generating tangible virtual scenes, and a comparison with previous work, are given to show the effectiveness and novelty of the proposed approach.
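The idea of estimating an occluded marker's location from visible reference markers can be sketched as follows. This is a simplified 2D illustration under assumed inputs (known fixed offsets from each reference marker to the hidden one); the paper works with full poses and also blends in previous frames, which this sketch omits:

```python
def estimate_hidden_marker(visible, offsets):
    """Estimate an occluded marker's 2D position.
    visible: {marker_id: (x, y)} positions of currently tracked markers.
    offsets: {marker_id: (dx, dy)} known offset from each reference marker
             to the hidden marker (assumed calibrated in advance).
    Averages the predictions from all usable references; returns None if
    no reference marker is visible."""
    predictions = []
    for marker_id, (vx, vy) in visible.items():
        if marker_id in offsets:
            ox, oy = offsets[marker_id]
            predictions.append((vx + ox, vy + oy))
    if not predictions:
        return None
    n = len(predictions)
    return (sum(px for px, _ in predictions) / n,
            sum(py for _, py in predictions) / n)
```

Averaging over several references is also one simple way to suppress the "jumping effect" the abstract mentions, since the estimate no longer switches abruptly from one marker's prediction to another's.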

9.
Distributed Augmented Reality for Collaborative Design Applications
This paper presents a system for constructing collaborative design applications based on distributed augmented reality. Augmented reality interfaces are a natural method for presenting computer-based design by merging graphics with a view of the real world. Distribution enables users at remote sites to collaborate on design tasks. The users interactively control their local view, try out design options, and communicate design proposals. They share virtual graphical objects that substitute for real objects which are not yet physically created or are not yet placed into the real design environment. We describe the underlying augmented reality system and in particular how it has been extended in order to support multi-user collaboration. The construction of distributed augmented reality applications is made easier by a separation of interface, interaction and distribution issues. An interior design application is used as an example to demonstrate the advantages of our approach.

10.
Most augmented reality (AR) applications are primarily concerned with letting a user browse a 3D virtual world registered with the real world. More advanced AR interfaces let the user interact with the mixed environment, but the virtual part is typically rather finite and deterministic. In contrast, autonomous behavior is often desirable in ubiquitous computing (Ubicomp), which requires the computers embedded into the environment to adapt to context and situation without explicit user intervention. We present an AR framework that is enhanced by typical Ubicomp features by dynamically and proactively exploiting previously unknown applications and hardware devices, and adapting the appearance of the user interface to persistently stored and accumulated user preferences. Our framework explores proactive computing, multi-user interface adaptation, and user interface migration. We employ mobile and autonomous agents embodied by real and virtual objects as an interface and interaction metaphor, where agent bodies are able to opportunistically migrate between multiple AR applications and computing platforms to best match the needs of the current application context. We present two pilot applications to illustrate design concepts. Copyright © 2007 John Wiley & Sons, Ltd.

11.
In this paper, we describe a user study evaluating the usability of an augmented reality (AR) multimodal interface (MMI). We have developed an AR MMI that combines free-hand gesture and speech input in a natural way using a multimodal fusion architecture. We describe the system architecture and present a study exploring the usability of the AR MMI compared with speech-only and 3D-hand-gesture-only interaction conditions. The interface was used in an AR application for selecting 3D virtual objects and changing their shape and color. For each interface condition, we measured task completion time, the number of user and system errors, and user satisfaction. We found that the MMI was more usable than the gesture-only interface condition, and users felt that the MMI was more satisfying to use than the speech-only interface condition; however, it was neither more effective nor more efficient than the speech-only interface. We discuss the implications of this research for designing AR MMIs and outline directions for future work. The findings could also be used to help develop MMIs for a wider range of AR applications, for example, in AR navigation tasks, mobile AR interfaces, or AR game applications.

12.
In the transition from a device-oriented paradigm toward a more task-oriented paradigm with increased interoperability, people are struggling with inappropriate user interfaces, competing standards, technical incompatibilities, and other difficulties. The current handles for users to explore, make, and break connections between devices seem to disappear in overly complex menu structures displayed on small screens. This paper tackles the problem of establishing connections between devices in a smart home environment, by introducing an interaction model that we call semantic connections. Two prototypes are demonstrated that introduce both a tangible and an augmented reality approach toward exploring, making, and breaking connections. In the augmented reality approach, connections between real-world objects are visualized by displaying visible lines and icons from a mobile device containing a pico projector. In the tangible approach, objects in the environment are tagged and can be scanned and interacted with, to explore connection possibilities, and manipulate the connections. We discuss the technical implementation of a pilot study setup used to evaluate both our interaction approaches. We conclude the paper with the results of a user study that shows how the interaction approaches influence the mental models users construct after interacting with our setup.

13.
In this paper, we present a believable interaction mechanism for manipulating multiple objects in a ubiquitous/augmented virtual environment. A believable interaction in a multimodal framework is defined as a persistent and consistent process according to contextual experiences and common sense about the feedback. We present a tabletop interface as a quasi-tangible framework to provide believable processes. An enhanced tabletop interface is designed to support a multimodal environment. As an exemplar task, we applied the concept to quickly accessing and manipulating distant objects. A set of enhanced manipulation mechanisms is presented for remote manipulation, including inertial widgets, a transformable tabletop, and proxies. The proposed method is evaluated in both performance and user acceptability in comparison with previous approaches. The proposed technique uses intuitive hand gestures and provides a higher level of believability. It can also support other types of accessing techniques such as browsing and manipulation. Copyright © 2007 John Wiley & Sons, Ltd.

14.
The paper presents issues in the preservation of cultural heritage using virtual reality (VR) and augmented reality (AR) technologies in a cultural context. While VR/AR technologies in general are mentioned, attention is paid to 3D visualization and 3D interaction modalities, illustrated through three different demonstrators: two VR demonstrators (immersive and semi-immersive) and an AR demonstrator including tangible user interfaces. To show the benefits of VR and AR technologies for studying and preserving cultural heritage, we investigated visualisation of, and interaction with, reconstructed underwater archaeological sites. The basic idea behind using VR and AR techniques is to offer archaeologists and the general public new insights into the reconstructed archaeological sites, allowing archaeologists to study directly from within the virtual site and allowing the general public to immersively explore a realistic reconstruction of the sites. Both activities are based on the same VR engine, but drastically differ in the way they present information and exploit interaction modalities. The visualisation and interaction techniques developed through these demonstrators are the results of the ongoing dialogue between archaeological requirements and the technological solutions developed.

15.
Most studies on tangible user interfaces for tabletop design systems are being undertaken from a technology viewpoint. Although there have been studies that focus on the development of new interactive environments employing tangible user interfaces for designers, there is a lack of evaluation with respect to designers' spatial cognition. In this research we study the effects of tangible user interfaces on designers' spatial cognition to provide empirical evidence for the anecdotal views of the effect of tangible user interfaces. To highlight the expected changes in spatial cognition while using tangible user interfaces, we compared designers using a tangible user interface on a tabletop system with 3D blocks to designers using a graphical user interface on a desktop computer with a mouse and keyboard. The ways in which designers use the two different interfaces for 3D design were examined using a protocol analysis method. The result reveals that designers using 3D blocks perceived more spatial relationships among multiple objects and spaces and discovered new visuo-spatial features when revisiting their design configurations. The designers using the tangible interfaces spent more time in relocating objects to different locations to test the moves, and interacted with the external representation through large body movements, implying an immersion in the design model. These two physical actions assist designers' spatial cognition by reducing cognitive load in mental visual reasoning. Further, designers using the tangible interfaces spent more time in restructuring the design problem by introducing new functional issues as design requirements, and produced more discontinuities in the design processes, which provides opportunities for reflection on and modification of the design. This research therefore shows that tangible user interfaces change designers' spatial cognition, and that the changes in spatial cognition are associated with creative design processes.

16.
This paper presents a geospatial collision detection technique consisting of two methods: Find Object Distance (FOD) and Find Reflection Angle (FRA). We show how the geospatial collision detection technique, using a computer vision system, detects a computer-generated virtual object and a real object manipulated by a human user, and how the virtual object can be reflected on a real floor after being detected by a real object. In the geospatial collision detection technique, the FOD method detects the real and virtual objects, and the FRA method predicts the next moving directions of virtual objects. We demonstrate the two methods by implementing a floor-based Augmented Reality (AR) game, Ting Ting, which is played by bouncing fire-shaped virtual objects projected on a floor using bamboo-shaped real objects. The results reveal that the FOD and FRA methods of the geospatial collision detection technique enable smooth interaction between a real object manipulated by a human user and a virtual object controlled by a computer. The proposed technique is expected to be used in various AR applications as a low-cost interactive collision detection engine, such as in educational materials, interactive content including games, and entertainment equipment. Keywords: augmented reality, collision detection, computer vision, game, human–computer interaction, image processing, interfaces.
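The core of a reflection-angle computation like FRA is the standard mirror-reflection formula v' = v − 2(v·n)n, which reflects an incoming velocity about the surface normal at the contact point. The sketch below shows this formula in 2D; it is an illustration of the underlying geometry under that assumption, not the paper's FRA implementation, and the name `reflect` is invented here:

```python
import math

def reflect(velocity, normal):
    """Reflect a 2D velocity vector about a surface normal:
    v' = v - 2 (v . n) n, with n normalized first.
    This gives the bounce direction of a virtual object after a
    collision with a real object's edge."""
    nx, ny = normal
    length = math.hypot(nx, ny)
    nx, ny = nx / length, ny / length   # unit normal
    vx, vy = velocity
    dot = vx * nx + vy * ny
    return (vx - 2.0 * dot * nx, vy - 2.0 * dot * ny)
```

For example, an object falling straight down onto a horizontal edge (normal pointing up) leaves with its vertical component inverted, which is the behavior a bounce in the Ting Ting game would need.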

17.
In this work we integrate augmented reality technology in a product development process using real technical drawings as a tangible interface for design review. We present an original collaborative framework for Augmented Design Review Over Network (ADRON). It provides the following features: augmented technical drawings, interactive FEM simulation, multimodal annotation and chat tools, web content integration and collaborative client/server architecture. Our framework is intended to use common hardware instead of expensive and complex virtual or augmented facilities. We designed the interface specifically for users with little or no augmented reality expertise, proposing tangible interfaces for data review and visual editing for all the functions and configurations. Two case studies are presented and discussed: a real-time “touch and see” stress/strain simulation and a collaborative distributed design review session of an industrial component.

18.
This paper proposes an augmented reality content authoring system that enables ordinary users who do not have programming capabilities to easily apply interactive features to virtual objects on a marker via gestures. The purpose of this system is to simplify augmented reality (AR) technology usage for ordinary users, especially parents and preschool children who are unfamiliar with AR technology. The system provides an immersive AR environment with a head-mounted display and recognizes users’ gestures via an RGB-D camera. Users can freely create the AR content that they will be using without any special programming ability simply by connecting virtual objects stored in a database to the system. Following recognition of the marker via the system’s RGB-D camera worn by the user, he/she can apply various interactive features to the marker-based AR content using simple gestures. Interactive features applied to AR content can enlarge, shrink, rotate, and transfer virtual objects with hand gestures. In addition to this gesture-interactive feature, the proposed system also allows for tangible interaction using markers. The AR content that the user edits is stored in a database, and is retrieved whenever the markers are recognized. The results of comparative experiments indicate that the proposed system is easier to use and has a higher interaction satisfaction level than AR environments such as fixed-monitor and touch-based interaction on mobile screens.

19.
A human-centered editable world can be fully realized in a virtual environment. Both mixed reality (MR) and virtual reality (VR) are feasible solutions to support editing. Based on the current development of MR and VR, we present the vision-tangible interactive display method and its implementation in both MR and VR. We address MR and VR together because they are similar with respect to the proposed method. The editable mixed and virtual reality system is useful for studies that exploit it as a platform. In this paper, we construct a virtual reality environment based on the Oculus Rift, and an MR system based on a binocular optical see-through head-mounted display. In the MR system, used for manipulating a Rubik's cube, and in the VR system, used for deforming virtual objects, the proposed vision-tangible interactive display method is utilized to provide users with a more immersive environment. Experimental results indicate that the vision-tangible interactive display method can improve the user experience and can be a promising way to make virtual environments better.

20.
With the rapid spread of smartphones, users have access to many types of applications similar to those on desktop computer systems. Smartphone applications using augmented reality (AR) technology make use of users' location information. Because AR applications require new evaluation methods, improved usability and user convenience should be pursued. The purpose of the current study is to develop usability principles for the development and evaluation of smartphone applications using AR technology. We develop usability principles for smartphone AR applications by analyzing existing research about heuristic evaluation methods, design principles for AR systems, guidelines for handheld mobile device interfaces, and usability principles for the tangible user interface. We conducted a heuristic evaluation of three popular smartphone AR applications to identify usability problems and suggested new design guidelines to solve the identified problems. Then, we developed an improved AR application prototype on an Android-based smartphone and conducted usability testing on it to validate the effects of the usability principles.

Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)