Similar Documents
20 similar documents found.
1.

This paper proposes an unobtrusive, calibration-free framework for an eye-gaze-tracking-based interactive directional control interface for the desktop, using a simple webcam under unconstrained settings. The proposed eye gaze tracking uses a hybrid approach that combines two techniques, one supervised and one unsupervised: an unsupervised image-gradients method computes the iris centers within the eye regions extracted by a supervised regression-based algorithm. Experiments detecting eye regions and iris centers with the hybrid approach on challenging face image datasets yielded promising results, and the same approach worked well in real time with a simple web camera. Further, a PC-based interactive directional control interface driven by iris position has been designed that, unlike infrared-illumination-based eye trackers, works without any prior calibration.

The proposed work may be useful to people with full-body motor disabilities, who need interactive and unobtrusive gaze-controlled applications to live independently.
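As a rough illustration of the unsupervised image-gradients step the abstract mentions, here is a minimal Python sketch of gradient-based iris-center localization; the gradient threshold and the brute-force search over candidate centers are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def iris_center_by_gradients(eye_gray):
    """Locate the iris center as the point whose displacement vectors best align
    with the image gradients (the objective behind gradient-based localization).
    `eye_gray` is a 2-D array containing a cropped eye region."""
    gy, gx = np.gradient(eye_gray.astype(float))
    mag = np.hypot(gx, gy)
    thresh = mag.mean() + 0.5 * mag.std()          # keep only strong gradients (assumed rule)
    ys, xs = np.nonzero(mag > thresh)
    gxn, gyn = gx[ys, xs] / mag[ys, xs], gy[ys, xs] / mag[ys, xs]

    h, w = eye_gray.shape
    best_score, best_c = -1.0, (h // 2, w // 2)
    for cy in range(h):                            # brute-force search; fine for small crops
        for cx in range(w):
            dy, dx = ys - cy, xs - cx
            norm = np.hypot(dx, dy)
            valid = norm > 0
            # squared dot product of displacement and gradient directions
            dots = (dx[valid] * gxn[valid] + dy[valid] * gyn[valid]) / norm[valid]
            score = np.mean(np.maximum(dots, 0.0) ** 2)
            if score > best_score:
                best_score, best_c = score, (cy, cx)
    return best_c  # (row, col) of the estimated iris center
```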


2.
Nowadays home automation, with its increased availability and reliability and its ever-decreasing costs, is gaining momentum and starting to become a viable solution for enabling people with disabilities to interact autonomously with their homes and to communicate better with other people. However, especially for people with severe mobility impairments, there is still a lack of tools and interfaces for effective control of, and interaction with, home automation systems, and general-purpose solutions are seldom applicable due to the complexity, asynchronicity, time-dependent behavior, and safety concerns typical of the home environment. This paper focuses on user-environment interfaces based on eye tracking technology, which is often the only viable interaction modality for such users. We propose an eye-based interface tackling the specific requirements of smart environments, already outlined in a public Recommendation issued by the COGAIN European Network of Excellence. The proposed interface has been implemented as a software prototype based on the ETU universal driver, thus being potentially able to run on a variety of eye trackers, and it is compatible with a wide set of smart home technologies handled by the Domotic OSGi Gateway. A first interface evaluation, with user testing sessions, has been carried out; results show that the interface is quite effective and usable without discomfort by people with almost regular eye movement control.

3.
Eye typing provides a means of communication that is especially useful for people with disabilities. However, most related research addresses technical issues in eye typing systems and largely ignores design issues. This paper reports experiments studying the impact of auditory and visual feedback on user performance and experience. Results show that feedback affects typing speed, accuracy, gaze behavior, and subjective experience. The feedback should also be matched to the dwell time: short dwell times require simplified feedback to support the typing rhythm, whereas long dwell times allow extra information on the eye typing process. Both short and long dwell times benefit from combined visual and auditory feedback. Six guidelines for designing feedback for gaze-based text entry are provided.
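To make the dwell-time/feedback matching concrete, here is a hypothetical Python sketch (not from the paper) in which a key gives only a terse selection cue for short dwell times but staged highlight-and-progress feedback for long ones; `DWELL_LONG` and the print-based cues are illustrative stand-ins for real audio/visual feedback.

```python
import time

DWELL_LONG = 0.9   # seconds; dwell times at or above this get staged feedback (assumed value)

class DwellButton:
    def __init__(self, key, dwell_time):
        self.key = key
        self.dwell_time = dwell_time
        self.enter_t = None

    def on_gaze(self, gazed, now=None):
        """Call every frame with gazed=True while this key is fixated.
        Returns the key label once the dwell completes, else None."""
        now = now if now is not None else time.monotonic()
        if not gazed:
            self.enter_t = None               # gaze left the key: reset the dwell
            return None
        if self.enter_t is None:
            self.enter_t = now
            if self.dwell_time >= DWELL_LONG:
                print(f"[{self.key}] highlight")   # extra cue only for long dwells
        elapsed = now - self.enter_t
        if self.dwell_time >= DWELL_LONG and elapsed < self.dwell_time:
            print(f"[{self.key}] progress {elapsed / self.dwell_time:.0%}")
        if elapsed >= self.dwell_time:
            self.enter_t = None
            print(f"[{self.key}] click")           # terse audio+visual 'selected' cue
            return self.key
        return None
```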

4.
Attentive user interfaces (AUIs) capitalize on the rich information that can be obtained from users’ gaze behavior in order to infer relevant aspects of their cognitive state. Eye gaze is an excellent clue not only to interest and intention but also to preference and confidence in comprehension. AUIs are built with the aim of adapting the interface to the user’s current information need and thus reducing the workload of interaction. Given those characteristics, AUIs are believed to hold particular benefits for users with severe disabilities, for whom operating a physical device (such as a mouse) may be very strenuous or infeasible. This paper presents three studies that attempt to gauge uncertainty and intention on the part of the user from gaze data, and compares the success of each approach. The paper discusses how applying the approaches adopted in each study to user interfaces can support users with severe disabilities.

5.
Gaze-control enables people to control a computer by using eye-gaze to select items on screen. Gaze-control is a necessity for people who have lost all motor control of their body and only have control over eye movements. In addition, gaze-control can be the quickest and least tiring option for a far broader group of people with varying disabilities. This paper reports findings from gaze-control user trials involving users from both groups: people who are totally paralyzed, as well as people with a wide range of complex disabilities. The trials conducted involved four different centres supporting people with disabilities in three different European countries. Several gaze-control systems were trialled by a large number of users with varying needs and abilities. The perceived benefits of gaze-control are described, and recommendations for successful assessment and implementation of gaze-control are provided.

6.
This paper describes a behavioural model used to simulate realistic eye-gaze behaviour and body animations for avatars representing participants in a shared immersive virtual environment (IVE). The model was used in a study designed to explore the impact of avatar realism on the perceived quality of communication within a negotiation scenario. Our eye-gaze model was based on data and studies of eye-gaze behaviour during face-to-face communication. The technical features of the model are reported here; the motivation behind the study, the experimental procedures, and a full analysis of the results are given in [17].

7.
Reduced uncertainty through human communication in complex environments
This paper describes and analyzes the central role of human–human communication in a dynamic, high-risk environment. The empirical example is a UN peace-enforcing and peace-keeping operation in which uncertainty about the situation in the environment and about the organization’s own capability were intertwined, requiring extensive control activities and, hence, special attention to communication between humans. Theoretically, the focus lies on what efficient communication means, how to understand and use social relations, and how to use technology to make socio-technical systems cooperative systems as well. We conclude that “control” is largely based on the ability to communicate and that efficient human–human communication is grounded in relations between individuals, which preferably should be based on physical meetings. Uncertainty, and how humans cope with it through interpersonal communication, is exemplified and discussed. In theoretical terms, relating the study to systems science and its application in organizational life and cognitive engineering, the case illustrates that an organization is not only an economy but also an adaptive social structure. Neither cognition nor control is an end state: the organization’s raison d’être in this kind of operation is cooperation rather than confrontation, and its use of force is strictly regulated by Rules of Engagement (ROE). In the organization, strong emotions may govern, interpersonal trust can be established, and rule-sets for further cooperation can be developed. Without considering the power of such aspects, economic rationality and detached cognitive thinking may end up producing perfect but less relevant support technologies, where people act in roles rather than as whole persons.

8.
In this work we elaborate on a novel image-based system for creating video-realistic eye animations for arbitrary spoken output. Such animations are useful for giving a face to multimedia applications such as virtual operators in dialog systems. Our eye animation system consists of two parts: an eye control unit and a rendering engine that synthesizes eye animations by combining 3D and image-based models. The eye control unit is based on eye movement physiology and on statistical analysis of recorded human subjects. As analyzed in previous publications, eye movements differ between listening and talking. We focus on the latter and are the first to design a model that fully automatically couples eye blinks and movements with phonetic and prosodic information extracted from spoken language. We extended the known simple gaze model by refining mutual gaze to better match human eye movements, and further improved the eye movement models by considering head tilts, torsion, and eyelid movements. Mainly due to our integrated blink-and-gaze model and to the control of eye movements by spoken language, subjective tests indicate that participants are not able to distinguish between real eye motions and our animations, which had not been achieved before.

9.
Robotic smart house to assist people with movement disabilities
This paper introduces a new robotic smart house, Intelligent Sweet Home, developed at KAIST in Korea, which is based on several robotic agents and aims at testing advanced concepts for independent living of the elderly and people with disabilities. The work focuses on technical solutions for human-friendly assistance in motion/mobility and advanced human-machine interfaces that provide simple control of all assistive robotic systems and home-installed appliances. The smart house concept includes an intelligent bed, intelligent wheelchair, and robotic hoist for effortless transfer of the user between bed and wheelchair. The design solutions comply with most of the users’ requirements and suggestions collected by a special questionnaire survey of people with disabilities. The smart house responds to the user's commands as well as to the recognized intentions of the user. Various interfaces, based on hand gestures, voice, body movement, and posture, have been studied and tested. The paper describes the overall system structure and explains the design and functionality of some main system components.

10.
Safety, legibility and efficiency are essential for autonomous mobile robots that interact with humans. A key factor in this respect is bi-directional communication of navigation intent, which we focus on in this article with a particular view on industrial logistic applications. In the robot-to-human direction, we study how a robot can communicate its navigation intent using Spatial Augmented Reality (SAR), so that humans can intuitively understand the robot’s intention and feel safe in its vicinity. We conducted experiments with an autonomous forklift that projects various patterns onto the shared floor space to convey its navigation intentions, analyzed trajectories and eye gaze patterns of humans interacting with it, and carried out stimulated recall interviews (SRI) to identify desirable features for the projection of robot intentions. In the human-to-robot direction, we argue that robots in environments co-habited with humans need human-aware task and motion planning to support safety and efficiency, ideally responding to people’s motion intentions as soon as they can be inferred from human cues. Eye gaze can convey information about intentions beyond what can be inferred from a person’s trajectory and head pose. We therefore propose eye-tracking glasses as safety equipment in industrial environments shared by humans and robots, investigate the possibility of inferring human-to-robot intention solely from eye gaze data, and evaluate how the observed eye gaze patterns of the participants relate to their navigation decisions. Our analysis shows that people primarily gazed at the side of the robot on which they ultimately decided to pass. We discuss the implications of these results and relate them to a control approach that uses human gaze for early obstacle avoidance.
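As a toy illustration of the reported finding that people gaze at the side they intend to pass on, here is a hypothetical sketch of inferring passing direction from lateral gaze offsets; the robot-frame coordinates, recency weighting, and margin are assumptions for illustration, not the authors' control approach.

```python
from typing import List, Tuple

def predict_passing_side(gaze_points: List[Tuple[float, float]],
                         margin: float = 0.1) -> str:
    """gaze_points: (lateral_offset_m, timestamp_s) gaze samples on or near the
    robot, with negative offsets on the robot's left. Returns 'left', 'right',
    or 'undecided'. Later samples weigh more, since intent sharpens on approach."""
    if not gaze_points:
        return "undecided"
    t0 = min(t for _, t in gaze_points)
    weighted = sum(x * (t - t0 + 1.0) for x, t in gaze_points)
    total = sum(t - t0 + 1.0 for _, t in gaze_points)
    mean_offset = weighted / total
    if mean_offset < -margin:
        return "left"
    if mean_offset > margin:
        return "right"
    return "undecided"

# Example: gaze drifting toward the robot's right edge over one second.
print(predict_passing_side([(-0.05, 0.0), (0.2, 0.5), (0.35, 1.0)]))  # 'right'
```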

11.
For people with upper limb disabilities, visual art is an important activity that allows for the expression of individuality and independence. Such artists show remarkable endurance, patience and determination in adapting their remaining capabilities to create visual art, and digital technologies offer significant advantages in assisting them; paralinguistic voice interaction in particular has proven promising. Despite these benefits, technological support for people with upper limb disabilities to create visual art is scarce. This paper reports case studies of several artists with upper limb disabilities, illustrating the struggles they face to be creative as well as the advantages digital technologies can bring. An investigation into people’s ability to use the volume of their voice to control cursor movement and create drawings on screen is also reported. With motivation, training and practice, using volume to control drawing tasks shows great promise. Paralinguistic voice is believed to have wider implications beyond assisting artists with upper limb disabilities: an alternative interaction mode for disabled people performing tasks other than creating visual art, an alternative interaction mode for hands-busy environments, and a voice-training system for people with speech impairments.
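A minimal sketch of the volume-to-cursor idea, assuming RMS frame energy as the loudness measure; the noise floor, maximum level, and speed mapping are illustrative values, and audio capture (e.g. via a library such as `sounddevice`) is left out.

```python
import math

NOISE_FLOOR = 0.02   # RMS below this is treated as silence (assumed value)
MAX_RMS = 0.5        # RMS that maps to full speed (assumed value)
MAX_SPEED = 300.0    # pixels per second at full volume (assumed value)

def rms(frame):
    """Root-mean-square energy of one audio frame (iterable of floats in [-1, 1])."""
    frame = list(frame)
    return math.sqrt(sum(s * s for s in frame) / len(frame)) if frame else 0.0

def cursor_speed(frame):
    """Map frame loudness to cursor speed: silence stops the cursor, louder is faster."""
    level = rms(frame)
    if level <= NOISE_FLOOR:
        return 0.0
    gain = min((level - NOISE_FLOOR) / (MAX_RMS - NOISE_FLOOR), 1.0)
    return gain * MAX_SPEED

# Example: a quiet frame leaves the cursor still, a loud one moves it fast.
print(cursor_speed([0.01] * 256))   # 0.0 (below noise floor)
print(cursor_speed([0.3] * 256))    # ~175 px/s
```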

12.
Our work addresses one of the core issues in Human Computer Interaction (HCI) systems that use eye gaze as an input: the sensor, transmission and other delays present in any eye-tracker-based system, which reduce its performance. The delay can be compensated for by accurate prediction of eye movement trajectories. This paper introduces a mathematical model of the human eye that uses anatomical properties of the Human Visual System to predict eye movement trajectories. The model is transformed into Kalman filter form to provide continuous prediction of the eye position signal during all eye movement types, using the brainstem control properties employed during transitions between fast (saccade) and slow (fixation, pursuit) eye movements. Results presented in this paper indicate that the proposed eye model in Kalman filter form improves the accuracy of eye movement prediction and is capable of real-time performance. In addition to HCI systems with direct eye gaze input, the proposed eye model can be applied immediately to bit-rate and computational reduction in real-time gaze-contingent systems.
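The abstract's Kalman-filter formulation suggests the following minimal sketch: a generic constant-velocity Kalman filter that predicts gaze position one sample ahead to offset tracker delay. This is a textbook filter under assumed noise parameters and sampling rate, not the paper's anatomically derived eye model.

```python
import numpy as np

dt = 1 / 250.0                       # tracker sampling interval (assumed 250 Hz)
F = np.array([[1, dt, 0, 0],         # state transition: position += velocity * dt
              [0, 1,  0, 0],         # state is [x, vx, y, vy]
              [0, 0,  1, dt],
              [0, 0,  0, 1]])
H = np.array([[1, 0, 0, 0],          # we only observe (x, y)
              [0, 0, 1, 0]])
Q = np.eye(4) * 1e-2                 # process noise (assumed)
R = np.eye(2) * 1.0                  # measurement noise (assumed)

x = np.zeros(4)                      # state estimate
P = np.eye(4) * 10.0                 # state covariance

def kalman_step(z):
    """Fuse one gaze measurement z = (x, y); return the one-step-ahead prediction."""
    global x, P
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = z - H @ x                            # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return (F @ x)[[0, 2]]                   # predicted (x, y) at the next sample
```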

13.
This paper introduces the concept of enabling gaze-based interaction for users with high-level motor disabilities to control an avatar in a first-person-perspective online community. An example community, Second Life, is introduced that could offer disabled users the same virtual freedom as any other user, allowing them to be able-bodied (should they wish) within the virtual world. A survey of the control demands of Second Life and a subsequent preliminary experiment show that gaze control has inherent problems, particularly for locomotion and camera movement. These problems make effective gaze control of Second Life impractical and show that disabled users who interact using gaze will have difficulty controlling Second Life and similar environments. Such users could thus become disabled in the virtual world as well, compromising their ‘disability privacy’: the right to control an avatar as effectively as an able-bodied user and so appear virtually able-bodied. Gaze-aware on-screen assistive tools could overcome these difficulties, but games manufacturers must design inclusively, so that disabled users may have the right to disability privacy in their Second (virtual) Lives.

14.
In this paper we present a novel mechanism for obtaining enhanced gaze estimation for subjects looking at a scene or an image. The system uses prior knowledge about the scene (e.g. an image on a computer screen) to define a probability map of where the subject is likely gazing, in order to find the most probable location. It corrects fixations that are erroneously estimated by the gaze estimation device by employing a saliency framework to adjust the resulting gaze point vector. The system is tested in three scenarios: using commercial eye tracking data, enhancing a low-accuracy webcam-based eye tracker, and using a head pose tracker. The correlation between subjects in the commercial eye tracking data improves by an average of 13.91%, the correlation for the low-accuracy eye gaze tracker improves by 59.85%, and for the head pose tracker we obtain an improvement of 10.23%. These results show the potential of the system as a way to enhance and self-calibrate different visual gaze estimation systems.
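A minimal sketch of the saliency-correction idea, assuming a precomputed saliency map with values in [0, 1]: the raw gaze estimate is pulled toward the strongest saliency peak in a local window around it. The window radius and blending rule are assumptions for illustration, not the paper's framework.

```python
import numpy as np

def correct_gaze(gaze_xy, saliency, radius=40, blend=0.7):
    """gaze_xy: raw (col, row) gaze estimate; saliency: 2-D array in [0, 1].
    Returns the adjusted gaze point as a float (col, row) pair."""
    h, w = saliency.shape
    cx, cy = int(gaze_xy[0]), int(gaze_xy[1])
    # clip the search window to the image bounds
    x0, x1 = max(cx - radius, 0), min(cx + radius + 1, w)
    y0, y1 = max(cy - radius, 0), min(cy + radius + 1, h)
    window = saliency[y0:y1, x0:x1]
    py, px = np.unravel_index(np.argmax(window), window.shape)
    peak = np.array([x0 + px, y0 + py], dtype=float)
    # pull the estimate toward the saliency peak, more strongly if the peak is strong
    alpha = blend * float(window[py, px])
    return (1 - alpha) * np.asarray(gaze_xy, dtype=float) + alpha * peak
```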

15.
This article describes an online eye-tracking method based on the electrooculogram (EOG) to estimate gaze position. The objective is to use the EOG, a biomedical signal, as the input of a human-machine interface for both disabled and healthy people. Features of the horizontal and vertical EOG signals were extracted to estimate the gaze position, and compensation for time-shift and nonlinearity was applied to improve the estimation. The estimated locus of the moving eye was compared with the locus of the target; the deviation was about 3 cm. The results can be applied in coarse tracking applications such as online communication and EOG-based pointers.
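A minimal sketch of the EOG-to-gaze mapping, assuming a per-axis polynomial calibration to absorb the nonlinearity the article compensates for; the amplitudes, target positions, and polynomial degree below are toy values, not the authors' data.

```python
import numpy as np

def fit_eog_axis(eog_amplitudes, target_positions, degree=2):
    """Least-squares polynomial from EOG amplitude (uV) to screen position (cm)."""
    return np.polyfit(eog_amplitudes, target_positions, degree)

# Calibration: the user fixates known targets while EOG is recorded (toy numbers).
h_eog = np.array([-120, -60, 0, 55, 118])     # horizontal channel, uV
h_pos = np.array([-20, -10, 0, 10, 20])       # target x position, cm
coeffs_h = fit_eog_axis(h_eog, h_pos)

def estimate_x(eog_uv):
    """Online horizontal gaze estimate from one EOG sample."""
    return np.polyval(coeffs_h, eog_uv)

print(round(float(estimate_x(30.0)), 2))      # x position for a 30 uV deflection
```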

16.
Brain–computer interfaces (BCI) have the potential to provide a new channel of communication and control for people with severe motor disabilities. Although many empirical studies exist, few have specifically evaluated the impact of contributing factors on user performance and perception in BCI applications, especially for users with motor disabilities. This article reports the effects of luminosity contrast and stimulus duration on user performance and usage preference in a P300-based BCI application, the P300 Speller. Ten participants with neuromuscular disabilities (amyotrophic lateral sclerosis and cerebral palsy) and 10 able-bodied participants were asked to spell six 10-character phrases in the P300 Speller. Overall accuracy was 76.5% for the able-bodied participants and 26.8% for the participants with motor disabilities. Luminosity contrast and stimulus duration had significant effects on user performance, and participants preferred high luminosity contrast with middle or short stimulus duration; however, these effects on performance and preference varied between participants with and without motor disabilities. The results also indicate that although most participants with motor disabilities can establish BCI control, BCI illiteracy does exist. These findings should provide insights for future research on BCI systems, especially the real-world applicability of BCI applications as a nonmuscular communication and control channel for people with severe motor disabilities.

17.
The increasing use of animated characters and avatars in computer games and 3D online worlds demands increasingly complex behaviour together with increasingly simple, easy-to-use control systems. This paper presents a system for user-controlled actions that aims at simplicity and ease of use while exploiting modern animation techniques to produce rich and complex behaviour. We use inverse-kinematics-based motion adaptation to make pre-existing pieces of motion apply to new targets. The expressiveness of the character is enhanced by adding autonomous behaviour, in this case eye gaze behaviour, which is generated autonomously but still influenced by the actions the user requests the character to perform. The actions themselves are simple for a designer with no programming experience to create, simple for an end user to customise, and very simple to invoke.
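As a rough illustration of the inverse-kinematics step that motion adaptation relies on, here is a minimal sketch of the classic planar two-link IK solution; the link lengths and the elbow-down branch are assumptions for illustration, not the paper's solver.

```python
import math

def two_link_ik(x, y, l1=0.3, l2=0.25):
    """Return (shoulder, elbow) angles in radians placing a two-link arm's end
    effector at (x, y), or None if the target is outside the arm's reach."""
    d2 = x * x + y * y
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        return None                       # unreachable target
    elbow = math.acos(cos_elbow)          # elbow-down solution branch
    k1 = l1 + l2 * math.cos(elbow)
    k2 = l2 * math.sin(elbow)
    shoulder = math.atan2(y, x) - math.atan2(k2, k1)
    return shoulder, elbow

# Retarget a canned reach motion: same clip, new goal 40 cm ahead and 10 cm up.
print(two_link_ik(0.40, 0.10))
```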

18.
Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized mainly by impaired social communication, stereotyped behaviour, and restricted interests. It carries a high rate of disability and seriously affects children's healthy development. Subjective clinical diagnosis of ASD is time-consuming and highly subjective, so a fast, economical, and effective objective screening method is urgently needed. Research has found that children with ASD exhibit atypical visual perception of emotion, suggesting that eye-tracking technology could assist in ASD diagnosis. This paper proposes a model that combines the atypical emotional visual perception patterns of ASD in natural scenes with machine learning to screen for ASD automatically. The model extracts features from the eye movement trajectories recorded while emotion is perceived in natural scenes and models them with a machine learning classifier, enabling automatic identification of children with ASD from their eye movement trajectories. Experimental results show an accuracy of 79.71%, suggesting the method could serve as an auxiliary tool for early screening of children with ASD.
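A minimal sketch of the kind of pipeline the abstract describes, with assumed trajectory statistics as features and a random forest standing in for the paper's unspecified machine learning model; the data below are synthetic, purely to make the sketch runnable.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def trajectory_features(traj):
    """traj: (N, 2) array of gaze samples (x, y). Returns a small feature vector:
    mean position, positional spread, and mean/max inter-sample speed."""
    traj = np.asarray(traj, dtype=float)
    steps = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    return np.array([traj[:, 0].mean(), traj[:, 1].mean(),
                     traj[:, 0].std(), traj[:, 1].std(),
                     steps.mean(), steps.max()])

# Toy data: 40 synthetic trajectories, label 1 = ASD, 0 = typically developing.
rng = np.random.default_rng(0)
X = np.array([trajectory_features(rng.normal(scale=1 + lbl, size=(100, 2)))
              for lbl in (0, 1) * 20])
y = np.array([0, 1] * 20)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))   # training accuracy on the toy data
```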

19.
This paper proposes a new gaze-detection method based on the 3-D eye position and the gaze vector of the human eyeball. Seven developments beyond previous work are presented. First, a three-camera system is proposed: one wide-view camera and two narrow-view cameras, where the narrow-view cameras use auto-zooming, focusing, panning, and tilting (based on the detected 3-D eye feature position), allowing natural head and eye movement by users. Second, previous gaze-detection research using one or multiple illuminators did not consider the specular reflection (SR) problems the illuminators cause for users who wear glasses; to solve this, a method based on dual illuminators is proposed. Third, the proposed method requires no user-dependent calibration, so all gaze-detection procedures operate automatically without human intervention. Fourth, intrinsic characteristics of the human eye, such as the disparity between the pupillary and visual axes, are considered in order to obtain accurate gaze positions. Fifth, the coordinates of the left and right narrow-view cameras, the wide-view camera, and the monitor are unified, which simplifies the complex 3-D conversion calculations needed to compute the 3-D feature position and the gaze position on the monitor. Sixth, to improve eye-detection performance with the wide-view camera, an adaptive-selection method is used, involving an IR-LED on/off scheme, an AdaBoost classifier, and a principal component analysis method based on the number of SR elements. Finally, the proposed method uses an eigenvector matrix (instead of simply averaging six gaze vectors) to obtain a more accurate final gaze vector that compensates for noise. Experimental results show that the root mean square error of gaze detection was about 0.627 cm on a 19-in monitor, and the processing speed for obtaining the gaze position on the monitor was 32 ms (on a Pentium IV 1.8-GHz PC), making it possible to detect the user's gaze position at real-time speed.
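To illustrate the eigenvector idea in the final development, here is a minimal sketch that fuses several unit gaze vectors by taking the dominant eigenvector of their scatter matrix rather than their plain average, which down-weights an outlying vector; this is a generic principal-direction computation, not the paper's exact formulation.

```python
import numpy as np

def fuse_gaze_vectors(vectors):
    """vectors: (N, 3) array of unit gaze vectors. Returns one fused unit vector."""
    V = np.asarray(vectors, dtype=float)
    M = V.T @ V                               # 3x3 scatter matrix of the vectors
    eigvals, eigvecs = np.linalg.eigh(M)      # symmetric matrix -> eigh
    v = eigvecs[:, np.argmax(eigvals)]        # dominant direction
    if v @ V.mean(axis=0) < 0:                # resolve the eigenvector sign ambiguity
        v = -v
    return v

# Five consistent vectors and one noisy outlier.
six = np.array([[0.02, 0.01, 0.999]] * 5 + [[0.3, -0.2, 0.93]])
print(fuse_gaze_vectors(six))
```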

20.
Gaze interaction is a promising input modality for people who are unable to control their fingers and arms. This paper suggests a number of new metrics that can be applied to the analysis of gaze typing interfaces and to the evaluation of user performance. The metrics are derived from a close examination of eight subjects typing text on a dwell-time-activated onscreen keyboard during a seven-day experiment. One of the metrics, termed “Attended keys per character”, measures the number of keys attended for each typed character; it turned out to be particularly well correlated with the actual number of errors committed (r = 0.915). In addition to introducing metrics specific to gaze typing, the paper discusses how the metrics could make remote progress monitoring possible and provides general advice on introducing gaze typing to novice users.
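A minimal sketch of how “Attended keys per character” could be computed, under the assumption that consecutive fixations on the same key count as one attendance; the paper's exact bookkeeping may differ.

```python
def attended_keys_per_character(fixated_keys, typed_text):
    """fixated_keys: sequence of key labels the gaze landed on, in order.
    An error-free typist attends roughly 1 key per typed character; visual
    search and corrections drive the ratio up."""
    attended = []
    for k in fixated_keys:
        if not attended or attended[-1] != k:   # collapse consecutive repeats
            attended.append(k)
    return len(attended) / max(len(typed_text), 1)

# The typist glances at 'h', detours over 'j', returns to 'h', then selects 'i':
print(attended_keys_per_character(list("hhjhi"), "hi"))  # 2.0
```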
