Similar Documents
20 similar documents found; search took 22 ms
1.
This article addresses the problem of creating interactive mixed reality applications in which virtual objects interact with images of real-world scenarios. This is relevant for creating games and architectural or space-planning applications that interact with visual elements in the images, such as walls, floors and empty spaces. These scenarios are intended to be captured by users with regular cameras or taken from existing photographs. Introducing virtual objects into photographs presents several challenges, such as pose estimation and creating a visually correct interaction between the virtual objects and the boundaries of the scene. The two main research questions addressed in this article are whether it is feasible to create interactive augmented reality (AR) applications in which virtual objects interact with a real-world scenario using high-level features detected in the image, and whether untrained users are capable and motivated enough to perform the AR initialization steps. The proposed system detects the scene automatically from an image, with additional features obtained from basic user annotations. This operation is deliberately simple to accommodate the needs of non-expert users. The system analyzes one or more photos captured by the user and detects high-level features such as vanishing points, floor and scene orientation. Using these features, it is possible to create mixed and augmented reality applications in which the user interactively introduces virtual objects that blend with the picture in real time and respond to the physical environment. To validate the solution, several system tests are described and compared using available external image datasets.
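The floor-plane and vanishing-point detection described above ultimately reduces to intersecting the image projections of parallel scene lines. As an illustration only (a hypothetical sketch, not the paper's implementation), the geometric core can be written as:

```python
def line_intersection(p1, p2, p3, p4):
    """Intersect the line through p1, p2 with the line through p3, p4 (2D).

    Returns the intersection point, or None when the lines are parallel
    in the image (i.e. the vanishing point lies at infinity)."""
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-9:
        return None
    a = x1 * y2 - y1 * x2   # cross product of the first line's endpoints
    b = x3 * y4 - y3 * x4   # cross product of the second line's endpoints
    px = (a * (x3 - x4) - (x1 - x2) * b) / denom
    py = (a * (y3 - y4) - (y1 - y2) * b) / denom
    return (px, py)
```

In practice one would intersect many detected line segments and vote (e.g. with RANSAC) for a consistent vanishing point rather than trusting a single pair.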

2.
Several studies have been carried out on augmented reality (AR)-based environments that deal with user interfaces for manipulating and interacting with virtual objects, aimed at improving the immersive feeling and natural interaction. Most of these studies have utilized AR paddles or AR cubes for interactions. However, these interactions overly constrain the users' ability to directly manipulate AR objects and are limited in providing a natural feel in the user interface. This paper presents a novel approach to natural and intuitive interactions through a directly hand-touchable interface in various AR-based user experiences. It combines markerless augmented reality with a depth camera to effectively detect multiple hand touches in an AR space. Furthermore, to simplify hand touch recognition, the point cloud generated by the Kinect is analyzed and filtered. The proposed approach can easily trigger AR interactions, allows users to experience more intuitive and natural sensations, and offers greater control efficiency in diverse AR environments. It can also easily solve the occlusion problem of the hand and arm region inherent in conventional AR approaches through analysis of the extracted point cloud. We demonstrate the effectiveness and advantages of the proposed approach through several implementation results, such as interactive AR car design and a touchable AR pamphlet. We also present a usability study comparing the proposed approach with other well-known AR interactions.
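Filtering a depth camera's point cloud down to touch candidates, as described above, can be illustrated with a minimal sketch; the plane height and the 2 cm touch band below are assumed values for illustration, not the paper's parameters:

```python
def filter_touch_points(points, plane_z=0.0, touch_band=0.02):
    """Keep 3D points (x, y, z) hovering within `touch_band` metres above
    the surface plane at height `plane_z` -- candidate hand-touch points.

    Thresholds and the flat-plane assumption are illustrative only."""
    return [p for p in points if 0.0 < p[2] - plane_z <= touch_band]
```

A real system would first estimate the surface plane from the cloud and cluster the surviving points into fingertips.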

3.
Modeling tools typically have their own interaction methods for combining virtual objects. For realistic composition in 3D space, many researchers in virtual and augmented reality have been trying to develop intuitive interaction techniques using novel interfaces. However, many modeling applications require a long learning time for novice users because of unmanageable interfaces. In this paper, we propose two-handed tangible augmented reality interaction techniques that provide an easy-to-learn and natural combination method using simple augmented blocks. We have designed a novel interface called the cubical user interface, which has two tangible cubes tracked by marker tracking. Using the interface, we suggest two types of interactions based on familiar metaphors from real object assembly. The first, the screw-driving method, recognizes the user's rotation gestures and allows the user to screw virtual objects together. The second, the block-assembly method, adds objects based on their direction and position relative to predefined structures. We evaluate the proposed methods in detail with a user experiment that compares the different methods.

4.
Multithreaded Hybrid Feature Tracking for Markerless Augmented Reality (Cited: 1; self: 0; others: 1)
We describe a novel markerless camera tracking approach and user interaction methodology for augmented reality (AR) on unprepared tabletop environments. We propose a real-time system architecture that combines two types of feature tracking. Distinctive image features of the scene are detected and tracked frame-to-frame by computing optical flow. In order to achieve real-time performance, multiple operations are processed in a synchronized multi-threaded manner: capturing a video frame, tracking features using optical flow, detecting distinctive invariant features, and rendering an output frame. We also introduce user interaction methodology for establishing a global coordinate system and for placing virtual objects in the AR environment by tracking a user's outstretched hand and estimating a camera pose relative to it. We evaluate the speed and accuracy of our hybrid feature tracking approach, and demonstrate a proof-of-concept application for enabling AR in unprepared tabletop environments, using bare hands for interaction.
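The synchronized multi-threaded pipeline described above (capture → track → render) can be sketched with standard producer–consumer queues. This is an illustrative skeleton only, with a placeholder where the optical-flow step would run:

```python
import queue
import threading

def capture(frames, out_q):
    # Stage 1: push raw frames into the pipeline, then a sentinel.
    for f in frames:
        out_q.put(f)
    out_q.put(None)

def track(in_q, out_q):
    # Stage 2: placeholder for optical-flow feature tracking per frame.
    while True:
        f = in_q.get()
        if f is None:
            out_q.put(None)
            return
        out_q.put(("tracked", f))

def render(in_q, results):
    # Stage 3: consume tracked frames; here we just collect them.
    while True:
        item = in_q.get()
        if item is None:
            return
        results.append(item)

def run_pipeline(frames):
    q1, q2, results = queue.Queue(), queue.Queue(), []
    stages = [
        threading.Thread(target=capture, args=(frames, q1)),
        threading.Thread(target=track, args=(q1, q2)),
        threading.Thread(target=render, args=(q2, results)),
    ]
    for t in stages:
        t.start()
    for t in stages:
        t.join()
    return results
```

Because each stage runs in its own thread, a slow detector does not stall capture or rendering, which is the point of the synchronized design.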

5.
Augmented reality has become a research hotspot in human-computer interaction in recent years. Adding haptic perception to an augmented reality environment allows users to both see and feel virtual objects within a real scene. To achieve more natural interaction with virtual objects in an AR environment, a visuo-haptic 3D registration method is proposed. The 3D registration matrix is obtained using image-based vision techniques; the transformation between haptic space and image space is then solved via spatial transformation relations; combining these with their relations to camera space yields a visuo-haptic AR interaction scene. To verify the effectiveness of the method, an assembly-robot project based on visuo-haptic augmented reality was designed. Users can touch and move robot parts in the real environment and feel feedback forces while touching, making the interaction more realistic.
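The space-conversion step in the abstract amounts to composing rigid transforms, e.g. chaining a haptic-to-camera transform with a camera-to-image registration. A generic sketch of homogeneous-transform composition (not the paper's code; the naming convention is an assumption):

```python
def matmul4(a, b):
    # Compose two 4x4 homogeneous transforms: result = a applied after b.
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    # Pure-translation homogeneous transform.
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

# Chaining spaces as the abstract outlines (hypothetical names):
#   T_image_from_haptic = T_image_from_camera @ T_camera_from_haptic
```

The same pattern applies whichever spaces are chained, as long as the "from/to" convention is kept consistent.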

6.
Most augmented reality (AR) applications are primarily concerned with letting a user browse a 3D virtual world registered with the real world. More advanced AR interfaces let the user interact with the mixed environment, but the virtual part is typically rather finite and deterministic. In contrast, autonomous behavior is often desirable in ubiquitous computing (Ubicomp), which requires the computers embedded into the environment to adapt to context and situation without explicit user intervention. We present an AR framework that is enhanced by typical Ubicomp features by dynamically and proactively exploiting previously unknown applications and hardware devices, and adapting the appearance of the user interface to persistently stored and accumulated user preferences. Our framework explores proactive computing, multi‐user interface adaptation, and user interface migration. We employ mobile and autonomous agents embodied by real and virtual objects as an interface and interaction metaphor, where agent bodies are able to opportunistically migrate between multiple AR applications and computing platforms to best match the needs of the current application context. We present two pilot applications to illustrate design concepts. Copyright © 2007 John Wiley & Sons, Ltd.

7.
Toward spontaneous interaction with the Perceptive Workbench (Cited: 1; self: 0; others: 1)
Until now, we have interacted with computers mostly by using wire-based devices. Typically, the wires limit the distance of movement and inhibit freedom of orientation. In addition, most interactions are indirect. The user moves a device as an analog for the action created in the display space. We envision an untethered interface that accepts gestures directly and can accept any objects we choose as interactors. We discuss methods for producing more seamless interaction between the physical and virtual environments through the Perceptive Workbench. We applied the system to an augmented reality game and a terrain navigating system. The Perceptive Workbench can reconstruct 3D virtual representations of previously unseen real-world objects placed on its surface. In addition, the Perceptive Workbench identifies and tracks such objects as they are manipulated on the desk's surface and allows the user to interact with the augmented environment through 2D and 3D gestures

8.
Registering virtual objects with reference to their corresponding real-world objects plays a key role in augmented reality (AR) systems. Although there has been a lot of work on vision-based registration for indoor AR systems, it is very difficult to apply such registration methods to outdoor AR systems, owing to the inability to modify objects in the outdoor environment and the huge extent of the working area. 3D Geographic Information Systems (GIS) can provide an outdoor virtual geographic environment corresponding to where users are located, supplying a virtual counterpart for each object in the physical world. In this study, a 3D GIS-based registration mechanism is proposed for outdoor AR systems. Specifically, an easy-to-use interactive method for precise registration was developed to improve registration performance. To implement the registration mechanism, an outdoor AR system built upon 3D GIS was developed, named the Augmented Reality Geographical Information System (ARGIS). ARGIS can perform precise registration in outdoor environments without traditional vision-tracking methods, which enables users to manipulate the system freely. A prototype was developed to conduct experiments on the campus of Peking University, Beijing, China to test the proposed registration mechanism. The experiments show that the registration mechanism is feasible and efficient in outdoor environments. ARGIS is expected to enrich the applications of outdoor AR systems, including but not limited to underground facility mapping, emergency rescue and urban planning.

9.
Augmented reality is a technology that composites computer-generated virtual objects into the real world seen by the user. This paper introduces several key techniques involved in handling virtual objects in augmented reality, including virtual object modeling, camera calibration, skeletal animation, and virtual scene optimization, and explains how to apply these techniques in a practical system.

10.
A new vision-based framework and system for human–computer interaction in an augmented reality environment is presented in this article. The system allows users to interact with computer-generated virtual objects directly using their hands. With an efficient color segmentation algorithm, the system is adaptable to different lighting conditions and backgrounds, and is suitable for real-time applications. The dominant features on the palm are detected and tracked to estimate the camera pose. After the camera pose relative to the user's hand has been reconstructed, 3D virtual objects can be augmented naturally onto the palm for the user to inspect and manipulate. With an off-the-shelf web camera and computer, natural bare-hand interactions with 2D and 3D virtual objects can be achieved at low cost.
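As an illustration of the kind of per-pixel color rule such a segmentation stage might use, here is a commonly cited rule-of-thumb RGB skin heuristic; it is not the article's actual adaptive algorithm, only a sketch of the idea:

```python
def is_skin_rgb(r, g, b):
    """Rule-of-thumb RGB skin classifier (uniform-daylight variant).

    Shown only to illustrate color-based hand segmentation; a real
    system would adapt thresholds to lighting and background."""
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)
```

Applying this per pixel yields a binary hand mask, which is then cleaned up morphologically before feature detection.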

11.
Natural user interfaces (NUIs) provide human-computer interaction (HCI) with natural and intuitive operation interfaces, such as human gestures and voice. We have developed a real-time NUI engine architecture that uses a web camera as a means of implementing NUI applications. The system captures video via the web camera and performs real-time image processing using graphics processing unit (GPU) programming. This paper describes the architecture of the engine and its real-virtual environment interaction methods, such as foreground segmentation and hand gesture recognition. These methods are implemented with GPU programming in order to achieve real-time image processing for HCI. To verify the efficacy of the proposed NUI engine, we used it, together with the DirectX SDK, to develop and implement several mixed reality games and touch-free operation applications. Our results confirm that the methods implemented by the engine operate in real time and that the interactive operations are intuitive.

12.
Applying virtual reality technology to rehabilitation medicine can effectively overcome the limitations of traditional rehabilitation training methods and enable safe, comfortable, and active training. This paper designs and implements a virtual reality hand rehabilitation training system consisting of three parts: interaction devices, human-computer interaction software, and the virtual environment. The interaction device is the 5DT Data Glove 14 Ultra data glove produced by 5DT; the interaction software, developed with Visual Studio 2012 and written on MFC, implements user management, data acquisition, gesture signal classification, and real-time gesture recognition testing. The virtual scenes are built on Flash games, and through communication between MFC and the Flash games, users can control the games with gesture signals. The experimental results in this paper show that the virtual reality hand rehabilitation training system can guide users through effective hand rehabilitation training, and that the Flash rehabilitation games effectively improve users' enthusiasm and initiative during training.

13.
Automated virtual camera control has been widely used in animation and interactive virtual environments. We have developed a prototype free-view video system based on multiple sparse cameras that allows users to control the position and orientation of a virtual camera, enabling the observation of a real scene in three dimensions (3D) from any desired viewpoint. Automatic camera control can be activated to follow objects selected by the user. Our method combines a simple geometric model of the scene composed of planes (the virtual environment) with visual information from the cameras and pre-computed tracking information of moving targets to generate novel perspective-corrected 3D views of the virtual camera and moving objects. To achieve real-time rendering performance, view-dependent texture-mapped billboards are used to render the moving objects at their correct locations, and foreground masks are used to remove the moving objects from the projected video streams. The current prototype runs on a PC with a common graphics card and can generate virtual 2D views from three cameras of resolution 768×576 with several moving objects at about 11 fps.

14.
In this paper, we propose an approach to tangible augmented reality (AR)-based design evaluation of information appliances, which not only exploits tangible objects without hardwired connections to provide better visual immersion and more tangible interaction, but also adopts a simple, low-cost AR environment setup to improve user experience and performance. To enhance visual immersion, we develop a solution for resolving hand occlusion, in which skin color information is exploited together with the tangible objects to detect the hand regions properly. To improve tangible interaction through the sense of touch, we introduce product- and fixture-type objects, which give the user the feeling of holding the product in his or her hands and touching buttons with his or her index fingertip in the AR setup. To improve user experience and performance with respect to hardware configuration, we adopt a simple and cost-effective AR setup that properly meets guidelines concerning viewing size and distance, working posture, viewpoint matching, and camera movement. Experimental results show that this AR setup improves user experience and performance in the design evaluation of handheld information appliances. We also found that the tangible interaction, combined with the hand occlusion solver, is very useful for improving tangible interaction and immersive visualization of virtual products, while letting users experience the shapes and functions of the products comfortably.

15.
The availability of powerful consumer-level smart devices and off-the-shelf software frameworks has tremendously popularized augmented reality (AR) applications. However, since the built-in cameras typically have rather limited field of view, it is usually preferable to position AR tools built upon these devices at a distance when large objects need to be tracked for augmentation. This arrangement makes it difficult or even impossible to physically interact with the augmented object. One solution is to adopt third person perspective (TPP) with which the smart device shows in real time the object to be interacted with, the AR information and the user herself, all captured by a remote camera. Through mental transformation between the user-centric coordinate space and the coordinate system of the remote camera, the user can directly interact with objects in the real world. To evaluate user performance under this cognitively demanding situation, we developed such an experimental TPP AR system and conducted experiments which required subjects to make markings on a whiteboard according to virtual marks displayed by the AR system. The same markings were also made manually with a ruler. We measured the precision of the markings as well as the time to accomplish the task. Our results show that although the AR approach was on average around half a centimeter less precise than the manual measurement, it was approximately three times as fast as the manual counterpart. Additionally, we also found that subjects could quickly adapt to the mental transformation between the two coordinate systems.

16.
The goal of this research is to explore new interaction metaphors for augmented reality on mobile phones, i.e. applications where users look at the live image of the device’s video camera and 3D virtual objects enrich the scene that they see. Common interaction concepts for such applications are often limited to pure 2D pointing and clicking on the device’s touch screen. Such an interaction with virtual objects is not only restrictive but also difficult, for example, due to the small form factor. In this article, we investigate the potential of finger tracking for gesture-based interaction. We present two experiments evaluating canonical operations such as translation, rotation, and scaling of virtual objects with respect to performance (time and accuracy) and engagement (subjective user feedback). Our results indicate a high entertainment value, but low accuracy if objects are manipulated in midair, suggesting great possibilities for leisure applications but limited usage for serious tasks.

17.
Implementation of a Real-Time AR System Based on ORB Natural Features (Cited: 3; self: 2; others: 1)
To address the low efficiency of current natural-feature-based augmented reality, a new real-time registration method based on ORB natural features is proposed. ORB feature points are extracted from the video frame and the reference image and matched using Hamming distance; the RANSAC algorithm then filters the matches to obtain the best point correspondences and determine the camera pose. Three-dimensional virtual objects are superimposed onto the real scene to blend the virtual with the real. Experiments show that under different scales and viewing angles, moderate changes in ambient lighting, complex backgrounds, and partial occlusion of the reference image, the AR system performs well, with high tracking and registration accuracy and speed that essentially meets real-time requirements.
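The matching step described above (Hamming distance between binary ORB descriptors, before RANSAC pruning) can be sketched in a few lines; descriptors are shown as plain integers for illustration, not in the 256-bit packed form a real ORB pipeline would use:

```python
def hamming(d1, d2):
    # Hamming distance between two binary descriptors stored as ints.
    return bin(d1 ^ d2).count("1")

def match_descriptors(descs_a, descs_b, max_dist=64):
    """Brute-force nearest-neighbour matching under Hamming distance.

    Returns (index_a, index_b, distance) triples; a RANSAC step would
    then prune these to a geometrically consistent subset."""
    matches = []
    for i, da in enumerate(descs_a):
        j, d = min(((j, hamming(da, db)) for j, db in enumerate(descs_b)),
                   key=lambda t: t[1])
        if d <= max_dist:
            matches.append((i, j, d))
    return matches
```

The `max_dist` cutoff (here an assumed value) discards weak matches before the geometric verification stage.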

18.
This paper presents a geospatial collision detection technique consisting of two methods: Find Object Distance (FOD) and Find Reflection Angle (FRA). We show how the geospatial collision detection technique, using a computer vision system, detects a computer-generated virtual object and a real object manipulated by a human user, and how the virtual object can be reflected on a real floor after a collision with a real object is detected. In the geospatial collision detection technique, the FOD method detects the real and virtual objects, and the FRA method predicts the next moving directions of virtual objects. We demonstrate the two methods by implementing a floor-based Augmented Reality (AR) game, Ting Ting, which is played by bouncing fire-shaped virtual objects projected on a floor using bamboo-shaped real objects. The results reveal that the FOD and FRA methods enable smooth interaction between a real object manipulated by a human user and a virtual object controlled by a computer. The proposed technique is expected to be used in various AR applications as a low-cost interactive collision detection engine, such as in educational materials, interactive content including games, and entertainment equipment. Keywords: augmented reality, collision detection, computer vision, game, human computer interaction, image processing, interfaces.
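The FRA method predicts the post-collision direction of a virtual object. A generic way to express such a bounce is specular reflection of the velocity about the contact normal, v' = v − 2(v·n)n; this is a standard formula offered for illustration, not necessarily the paper's exact computation:

```python
def reflect(v, n):
    """Reflect 2D velocity v about unit surface normal n:
    v' = v - 2 (v . n) n. Assumes n is already normalized."""
    d = v[0] * n[0] + v[1] * n[1]
    return (v[0] - 2 * d * n[0], v[1] - 2 * d * n[1])
```

For example, a projectile falling diagonally onto a horizontal floor (normal pointing up) keeps its horizontal speed and has its vertical speed inverted.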

19.
In this paper, we describe a user study evaluating the usability of an augmented reality (AR) multimodal interface (MMI). We have developed an AR MMI that combines free-hand gesture and speech input in a natural way using a multimodal fusion architecture. We describe the system architecture and present a study exploring the usability of the AR MMI compared with speech-only and 3D-hand-gesture-only interaction conditions. The interface was used in an AR application for selecting 3D virtual objects and changing their shape and color. For each interface condition, we measured task completion time, the number of user and system errors, and user satisfaction. We found that the MMI was more usable than the gesture-only interface condition, and users felt that the MMI was more satisfying to use than the speech-only interface condition; however, it was neither more effective nor more efficient than the speech-only interface. We discuss the implications of this research for designing AR MMIs and outline directions for future work. The findings could also be used to help develop MMIs for a wider range of AR applications, for example, in AR navigation tasks, mobile AR interfaces, or AR game applications.

20.
Humans use a combination of gesture and speech to interact with objects and usually do so more naturally without holding a device or pointer. We present a system that incorporates user body-pose estimation, gesture recognition and speech recognition for interaction in virtual reality environments. We describe a vision-based method for tracking the pose of a user in real time and introduce a technique that provides parameterized gesture recognition. More precisely, we train a support vector classifier to model the boundary of the space of possible gestures, and train Hidden Markov Models (HMM) on specific gestures. Given a sequence, we can find the start and end of various gestures using a support vector classifier, and find gesture likelihoods and parameters with a HMM. A multimodal recognition process is performed using rank-order fusion to merge speech and vision hypotheses. Finally we describe the use of our multimodal framework in a virtual world application that allows users to interact using gestures and speech.
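Rank-order fusion, mentioned above, combines the ranked hypothesis lists produced by the two recognizers. A toy sketch, under the simplifying assumption that both modalities rank the same hypothesis set (the paper's actual fusion may weight or normalize differently):

```python
def rank_order_fusion(speech_ranking, vision_ranking):
    """Merge two ranked hypothesis lists by summing each hypothesis's
    rank across the modalities; a lower combined rank wins."""
    score = {}
    for ranking in (speech_ranking, vision_ranking):
        for rank, hyp in enumerate(ranking):
            score[hyp] = score.get(hyp, 0) + rank
    return sorted(score, key=score.get)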


Copyright © Beijing Qinyun Technology Development Co., Ltd.    京ICP备09084417号-23
