Similar Documents
20 similar documents found (search time: 62 ms)
1.
Accurately understanding a user’s intention is often essential to the success of any interactive system. An information retrieval system, for example, should address the vocabulary problem (Furnas et al., 1987) to accommodate the different query terms users may choose. A system that supports natural user interaction (e.g., full-body games and immersive virtual reality) must recognize the gestures that users choose for an action. This article reports an experimental study on gesture choice for tasks in three application domains. We found that the chance of users producing the same gesture for a given task is below 0.355 on average, and that offering a set of gesture candidates can improve the agreement score. We discuss the characteristics of the tasks that exhibit the gesture disagreement problem and those that do not. Based on our findings, we propose design guidelines for free-hand gesture-based interfaces.
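The agreement score mentioned above is commonly computed, per task, as the sum of squared proportions of identical gesture proposals (the formulation popularized by Wobbrock et al.); a minimal sketch with made-up proposal data:

```python
from collections import Counter

def agreement_score(proposals):
    """Agreement score for one task: sum over groups of identical
    proposals of (group size / total proposals) squared."""
    total = len(proposals)
    return sum((n / total) ** 2 for n in Counter(proposals).values())

# Hypothetical example: 10 users propose gestures for a "zoom in" task.
props = ["pinch"] * 6 + ["spread"] * 3 + ["tap"]
score = agreement_score(props)  # 0.36 + 0.09 + 0.01 = 0.46
```

A score of 1.0 means every user proposed the same gesture; values well below 0.5, as reported in the article, indicate substantial disagreement.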

2.
Artificial Intelligence, 2007, 171(8-9): 568-585
Head pose and gesture offer several conversational grounding cues and are used extensively in face-to-face interaction among people. To accurately recognize visual feedback, humans often use contextual knowledge from previous and current events to anticipate when feedback is most likely to occur. In this paper we describe how contextual information can be used to predict visual feedback and improve recognition of head gestures in human–computer interfaces. Lexical, prosodic, timing, and gesture features can be used to predict a user's visual feedback during conversational dialog with a robotic or virtual agent. In non-conversational interfaces, context features based on user–interface system events can improve detection of head gestures for dialog box confirmation or document browsing. Our user study with prototype gesture-based components indicates quantitative and qualitative benefits of gesture-based confirmation over conventional alternatives. Using a discriminative approach to contextual prediction and multi-modal integration, head gesture detection performance was improved with context features even when the topic of the test set was significantly different from that of the training set.

3.
In human–human communication, people adapt to new gestures and new interlocutors using intelligence and contextual information. To achieve natural gesture-based interaction between humans and robots, a system should likewise be adaptable to new users, gestures, and robot behaviors. This paper presents an adaptive visual gesture recognition method for human–robot interaction using a knowledge-based software platform. The system is capable of recognizing users, static gestures composed of face and hand poses, and dynamic gestures of a face in motion. It learns new users and poses using a multi-cluster approach, and combines computer vision and knowledge-based approaches in order to adapt to new users, gestures, and robot behaviors. In the proposed method, a frame-based knowledge model is defined for person-centric gesture interpretation and human–robot interaction. It is implemented using the frame-based Software Platform for Agent and Knowledge Management (SPAK). The effectiveness of this method has been demonstrated by an experimental human–robot interaction system using the humanoid robot ‘Robovie’.

4.
Humans use a combination of gesture and speech to interact with objects and usually do so more naturally without holding a device or pointer. We present a system that incorporates user body-pose estimation, gesture recognition, and speech recognition for interaction in virtual reality environments. We describe a vision-based method for tracking the pose of a user in real time and introduce a technique that provides parameterized gesture recognition. More precisely, we train a support vector classifier to model the boundary of the space of possible gestures, and train Hidden Markov Models (HMMs) on specific gestures. Given a sequence, we can find the start and end of various gestures using the support vector classifier, and find gesture likelihoods and parameters with an HMM. A multimodal recognition process is performed using rank-order fusion to merge speech and vision hypotheses. Finally, we describe the use of our multimodal framework in a virtual world application that allows users to interact using gestures and speech.
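As a rough illustration of the HMM side of such a pipeline (not the authors' implementation), the likelihood of a discrete observation sequence under an HMM can be computed with the scaled forward algorithm:

```python
import math

def hmm_forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm.
    pi: initial state probabilities; A: transition matrix;
    B[s][o]: emission probability of symbol o in state s."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    loglik = 0.0
    for t in range(1, len(obs) + 1):
        c = sum(alpha)               # scaling factor for this step
        loglik += math.log(c)
        alpha = [a / c for a in alpha]
        if t < len(obs):             # propagate one step and emit
            alpha = [B[s][obs[t]] * sum(alpha[r] * A[r][s] for r in range(n))
                     for s in range(n)]
    return loglik
```

In a recognizer, one such model would be trained per gesture, and the model with the highest log-likelihood for an observed segment wins.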

5.
Recently, studies on gesture-based interfaces have made an effort to improve the intuitiveness of gesture commands by asking users to define a gesture for a command. However, there are few methods to organize and notate user-defined gestures in a systematic approach. To resolve this, we propose a three-dimensional (3D) Hand Gesture Taxonomy and Notation Method. We first derived elements of a hand gesture by analyzing related studies and subsequently developed the 3D Hand Gesture Taxonomy based on the elements. Moreover, we devised a Notation Method based on a combination of the elements and also matched a code to each element for easy notation. Finally, we have verified the usefulness of the Notation Method by training participants to notate hand gestures and by asking another set of participants to recreate the notated gestures. In short, this research proposes a novel and systematic approach to notate hand gesture commands.

6.
Traditionally, gesture-based interaction in virtual environments is composed of either static, posture-based gesture primitives or temporally analyzed dynamic primitives. However, it would be ideal to incorporate both static and dynamic gestures to fully utilize the potential of gesture-based interaction. To that end, we propose a probabilistic framework that incorporates both static and dynamic gesture primitives. We call these primitives Gesture Words (GWords). Using a probabilistic graphical model (PGM), we integrate these heterogeneous GWords and a high-level language model in a coherent fashion. Composite gestures are represented as stochastic paths through the PGM. A gesture is analyzed by finding the path that maximizes the likelihood on the PGM with respect to the video sequence. To facilitate online computation, we propose a greedy algorithm for performing inference on the PGM. The parameters of the PGM can be learned via three different methods: supervised, unsupervised, and hybrid. We have implemented the PGM model for a gesture set of ten GWords with six composite gestures. The experimental results show that the PGM can accurately recognize composite gestures.

7.
A gesture-based interaction system for smart homes is part of a complex cyber-physical environment, for which researchers and developers need to address major challenges in providing personalized gesture interactions. However, current research efforts have not tackled the problem of personalized gesture recognition, which often involves user identification. To address this problem, we propose a new event-driven, service-oriented framework called gesture services for cyber-physical environments (GS-CPE), which extends the architecture of our previous work, gesture profile for web services (GPWS). To provide user identification, GS-CPE introduces a two-phase cascading gesture password recognition algorithm that combines a hidden Markov model with the Golden Section Search, achieving an accuracy rate of 96.2% with a small training dataset. To support personalized gesture interaction, an enhanced version of the Dynamic Time Warping algorithm with multiple gestural input sources and dynamic template adaptation is implemented. Our experimental results demonstrate that the algorithm achieves an average accuracy rate of 98.5% in practical scenarios. Comparison results reveal that GS-CPE has a faster response time and a higher accuracy rate than other gesture interaction systems designed for smart-home environments.
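Dynamic Time Warping, the core matching step mentioned above, can be sketched in its classic single-source form (the paper's enhanced multi-source variant with template adaptation is not reproduced here):

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences:
    minimal cumulative |a[i] - b[j]| cost over all monotone alignments."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible predecessors.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

A gesture sample is typically matched against each stored template and assigned to the template with the smallest DTW distance.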

8.
Motion-Sensor-Driven Intuitive 3D Gesture Interaction
To make gesture interaction less constrained by venue and lighting, we propose a gesture recognition method that uses an accelerometer as the input device. Each gesture requires only a single demonstration from the user; the generation of training data is automated by adding noise and similar perturbations. After preprocessing and feature extraction, the training data are used to train machine learning models (hidden Markov models and support vector machines). On a test set containing 70 gestures, the average recognition rate exceeds 90%. We also developed two gesture-based human–computer interaction prototypes, slide-show gesture control and gesture dialing; the results show that the proposed method significantly improves the user experience in human–computer interaction.
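A minimal sketch of the noise-based training-data generation idea described above: expand a single demonstrated accelerometer trace into many training samples by perturbing it with Gaussian noise (the copy count and noise level here are illustrative assumptions, not the paper's values):

```python
import random

def augment(demo, n_copies=20, sigma=0.05):
    """Expand one demonstrated accelerometer trace (a list of samples)
    into a training set by adding zero-mean Gaussian noise per sample."""
    out = []
    for _ in range(n_copies):
        out.append([x + random.gauss(0.0, sigma) for x in demo])
    return out

# Hypothetical one-axis demonstration trace.
demo = [0.0, 0.2, 0.9, 0.4, 0.0]
train_set = augment(demo, n_copies=10)
```

The augmented set would then pass through the same preprocessing and feature extraction before HMM/SVM training.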

9.
This paper reports on the utility of gestures and speech to manipulate graphic objects. In the experiment described herein, three different populations of subjects were asked to communicate with a computer using either speech alone, gestures alone, or both. The task was the manipulation of a three-dimensional cube on the screen. They were asked to assume that the computer could see their hands, hear their voices, and understand their gestures and speech as well as a human could. A gesture classification scheme was developed to analyse the gestures of the subjects. A primary objective of the classification scheme was to determine whether common features would be found among the gestures of different users and classes of users. The collected data show a surprising degree of commonality among subjects in the use of gestures as well as speech. In addition to the uniformity of the observed manipulations, subjects expressed a preference for a combined gesture/speech interface. Furthermore, all subjects easily completed the simulated object manipulation tasks. The results of this research, and of future experiments of this type, can be applied to develop a gesture-based or gesture/speech-based system which enables computer users to manipulate graphic objects using easily learned and intuitive gestures to perform spatial tasks. Such tasks might include editing a three-dimensional rendering, controlling the operation of vehicles or operating virtual tools in three dimensions, or assembling an object from components. Knowledge about how people intuitively use gestures to communicate with computers provides the basis for future development of gesture-based input devices.

10.
Hand gestures have great potential to act as a computer interface in the entertainment environment. However, there are two major problems when implementing a hand gesture-based interface for multiple users: the complexity problem and the personalization problem. In order to solve these problems and implement a multi-user data-glove interface successfully, we propose an adaptive mixture-of-experts model for data-glove-based hand gesture recognition that addresses both. The proposed model consists of a mixture of experts used to recognize the gestures of an individual user, and a teacher network trained with gesture data from multiple users. The mixture-of-experts model is trained with an expectation-maximization (EM) algorithm and an on-line learning rule, and its parameters are adjusted based on feedback from the real-time recognition of the teacher network. The model is applied to a musical performance game with a data glove (5DT Inc.) as a practical example. Comparison experiments using several representative classifiers showed both the outstanding performance and the adaptability of the proposed method. A usability assessment completed by users while playing the musical performance game confirmed the usefulness of the data glove interface system with the proposed method.
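The mixture-of-experts combination can be sketched as a softmax gate mixing per-expert class scores (a generic textbook formulation; the paper's EM training and on-line adaptation against the teacher network are omitted):

```python
import math

def moe_predict(x, experts, gate_weights):
    """Mixture of experts: a linear softmax gate over the input x
    weights each expert's class-score vector, then the weighted
    scores are summed per class."""
    # Linear gate activations, one per expert.
    gates = [sum(w * xi for w, xi in zip(ws, x)) for ws in gate_weights]
    mx = max(gates)                       # stabilized softmax
    e = [math.exp(g - mx) for g in gates]
    z = sum(e)
    g = [v / z for v in e]
    outs = [expert(x) for expert in experts]
    n_cls = len(outs[0])
    return [sum(g[k] * outs[k][c] for k in range(len(experts)))
            for c in range(n_cls)]
```

Here each expert would be one user-specific gesture recognizer, and the gate learns which expert to trust for the current input.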

11.
Interaction gestures in virtual reality fall into several distinct types, and a hierarchical modeling approach avoids the inefficiency that results from using a single model for all of them. Recognition proceeds from coarse to fine: a sliding-window technique extracts statistical features of a gesture in real time to achieve a coarse classification of the gesture type, after which each class of gestures is analyzed with a dedicated method. The interaction environment and contextual information assist the classification and improve recognition efficiency. The method is validated in a virtual home system.

12.
We present an intuitive, implicit, gesture-based identification system suited for applications such as user login to home multimedia services, with less strict security requirements. The term “implicit gesture” in this work refers to a natural physical hand manipulation of the control device performed by the user, who picks it up from its neutral motionless position or shakes it. For comparison with other related systems, explicit and well-defined identification gestures were also used. Gestures were acquired by an accelerometer-equipped device in the form of the Nintendo WiiMote remote controller. A dynamic time warping method is used at the core of our gesture-based identification system. To significantly increase computational efficiency and temporal stability, the “super-gesture” concept was introduced, in which the acceleration features of multiple gestures are combined into a single super-gesture template per user. A user evaluation spanning 10 days and including 10 participants was conducted. The results show that our algorithm ensures nearly 100% recognition accuracy when using explicit identification signature gestures, and between 88% and 77% recognition accuracy when the system needs to distinguish between 5 and 10 users using the implicit “pick-up” gesture. The performance of the proposed system is comparable to the results of other related works when using explicit identification gestures, while showing that implicit gesture-based identification is also possible and viable.
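One simple reading of the "super-gesture" idea, combining several of a user's gesture traces into a single template, is to resample each trace to a common length and concatenate the results; the exact combination used in the paper is not spelled out, so this is an assumption:

```python
def resample(seq, length):
    """Linearly resample a 1-D trace to a fixed number of samples."""
    if length == 1:
        return [seq[0]]
    out = []
    step = (len(seq) - 1) / (length - 1)
    for i in range(length):
        pos = i * step
        lo = int(pos)
        hi = min(lo + 1, len(seq) - 1)
        frac = pos - lo
        out.append(seq[lo] * (1 - frac) + seq[hi] * frac)
    return out

def super_gesture(gestures, length=32):
    """Build one per-user template by resampling each gesture trace to
    a common length and concatenating them (illustrative assumption)."""
    tmpl = []
    for g in gestures:
        tmpl.extend(resample(g, length))
    return tmpl
```

At identification time, a candidate user's combined traces would be matched against each stored super-gesture template, e.g. with DTW.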

13.
We present a novel approach for recreating life-like experiences through easy and natural gesture-based interaction. By focusing on the locations and transforming the role of the user, we are able to significantly improve the understanding of an ancient cultural practice, behaviour or event over traditional approaches. Technology-based virtual environments that display object reconstructions, old landscapes, cultural artefacts, and scientific phenomena are coming into vogue. In traditional approaches the user is a visitor navigating through these virtual environments, observing and picking objects. However, cultural practices and certain behaviours from nature are not normally made explicit, and their dynamics still need to be understood. Thus, our research idea is to bring such practices to life by allowing the user to enact them: the user may re-live a step-by-step process to understand a practice, behaviour or event. Our solution enables the user to enact these practices using gesture-based interaction with sensor-based technologies such as the versatile Kinect, which allows easier and more natural ways to interact in multidimensional spaces such as museum exhibits. We use heuristic approaches and semantic models to interpret human gestures that are captured from the user’s skeletal representation. We present and evaluate three applications, and for each we integrate these interaction metaphors with gaming elements, thereby achieving a gesture set to enact a cultural practice, behaviour or event. User evaluation experiments revealed that our approach achieved easy and natural interaction with an overall enhanced learning experience.

14.

Continuous authentication modalities collect and utilize users’ sensitive data to authenticate them continuously. Such data contain information about user activities, behaviors, and other demographic information, which raises privacy concerns. In this paper, we propose two privacy-preserving protocols that enable continuous authentication while preventing the disclosure of user-sensitive information to an authentication server. We utilize homomorphic cryptographic primitives that protect the privacy of biometric features, together with an oblivious transfer protocol that enables privacy-preserving information retrieval. We performed a biometric evaluation of the proposed protocols on two datasets, a swipe gesture dataset and a keystroke dynamics dataset, which shows that the protocols perform very well. The execution time of the protocols is measured for continuous authentication using swipe gestures only, keystroke dynamics only, and a hybrid of both modalities; the measurements show that the protocols are efficient even at high security levels.


15.
Upcoming mobile devices will have flexible displays, allowing us to explore alternate forms of user authentication. On flexible displays, users can interact with the device by deforming the surface of the display through bending. In this paper, we present Bend Passwords, a new type of user authentication that uses bend gestures as its input modality. We ran three user studies to evaluate the usability and security of Bend Passwords and compared it to PINs on a mobile phone. Our first two studies evaluated the creation and memorability of user-chosen and system-assigned passwords. The third study looked at the security problem of shoulder-surfing passwords on mobile devices. Our results show that bend passwords are a promising authentication mechanism for flexible display devices. We provide eight design recommendations for implementing Bend Passwords on flexible display devices.

16.
Despite the existence of advanced functions in smartphones, most blind people are still using old-fashioned phones with familiar layouts and tactile buttons. Smartphones support accessibility features including vibration, speech and sound feedback, and screen readers, but these features only provide feedback on user commands or input; it remains a challenge for blind people to discover functions on the screen and to enter commands. Although voice commands are supported in smartphones, they are difficult for a system to recognize in noisy environments. At the same time, smartphones are integrated with sophisticated motion sensors, and motion gestures with device tilt have been gaining attention for eyes-free input. We believe that these motion gesture interactions offer more efficient access to smartphone functions for blind people. However, most blind people are not smartphone users and are aware of neither the affordances available in smartphones nor the potential for interaction through motion gestures. To investigate the most usable gestures for blind people, we conducted a user-defined study with 13 blind participants. Using the gesture set and design heuristics from the user study, we implemented motion gesture-based interfaces with speech and vibration feedback for browsing phone books and making a call. We then conducted a second study to investigate the usability of the motion gesture interface and user experiences with the system. The findings indicated that motion gesture interfaces are more efficient than traditional button interfaces. Based on the study results, we provide implications for designing smartphone interfaces.

17.
Most gesture elicitation studies have focused on hand gestures, and few have considered the involvement of other body parts. Moreover, most of the relevant studies used the frequency of the proposed gesture as the main index, and the participants were not familiar with the design space. In this study, we developed a gesture set that includes hand and non-hand gestures by combining the indices of gesture frequency, subjective ratings, and physiological risk ratings. We first collected candidate gestures in Experiment 1 through a user-defined method, requiring participants to perform gestures of their choice for the 15 most commonly used commands, without any body part limitations. In Experiment 2, a new group of participants evaluated the representative gestures obtained in Experiment 1. We finally obtained a gesture set that included gestures made with the hands and other body parts. Three user characteristics were exhibited in this set: a preference for one-handed movements, a preference for gestures with social meaning, and a preference for dynamic gestures over static gestures.

18.
19.
At present, mobile devices such as tablet PCs and smartphones have widely penetrated our daily lives, so an authentication method that prevents shoulder surfing is needed. We are investigating a new user authentication method for mobile devices that uses surface electromyogram (s-EMG) signals rather than screen touches. The s-EMG signals, detected over the skin surface, are generated by the electrical activity of muscle fibers during contraction, so muscle movements can be differentiated by analyzing the s-EMG. Taking advantage of these characteristics, we proposed a method that uses a list of gestures as a password in a previous study. In this paper, we introduce support vector machines (SVMs) to improve the gesture identification step. A series of experiments was carried out to evaluate the performance of the SVM-based method as a gesture classifier, and we also discuss its security.
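A minimal sketch of SVM-based gesture classification from s-EMG feature vectors, using scikit-learn with made-up two-channel features (the feature values, channel count, and gesture labels are illustrative assumptions, not the authors' setup):

```python
from sklearn import svm

# Hypothetical features, e.g. mean absolute s-EMG value per channel,
# for two gesture classes.
X = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]]
y = ["fist", "fist", "open", "open"]

# Train an RBF-kernel SVM and classify a new feature vector.
clf = svm.SVC(kernel="rbf", gamma="scale")
clf.fit(X, y)
pred = clf.predict([[0.15, 0.85]])[0]
```

In the password scheme described above, each recognized gesture in the sequence would be checked against the enrolled gesture list.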

20.

Alphanumeric passwords remain a ubiquitous means of user authentication, yet they are plagued by a fundamental problem: Secure passwords are difficult to create and remember. This paper suggests that image- or gesture-based passwords might strike a better balance between security and usability. It examines two such systems that are currently in widespread commercial use and examines alternative approaches that may offer insights for future improvements. Finally, it considers the possibility that touch-screen gesture passwords may become a viable biometric measure, which may allow them to provide multi-factor gesture-based authentication.
