Similar Documents
1.
迟健男  张闯  胡涛  颜艳桃  刘洋 《控制与决策》2009,24(9):1345-1350

Based on a single-camera active infrared illumination system, a gaze detection method is proposed. In the eye-feature detection stage, the face is located by a projection method; candidate pupil regions are then determined from prior knowledge of facial symmetry and the layout of facial features; finally, the eye features are precisely segmented. In the gaze-modeling stage, a nonlinear polynomial is first used, with the head held still, to build a mapping from planar gaze parameters to the on-screen gaze point; a generalized regression neural network (GRNN) then compensates for the gaze deviation caused by different head positions, extending the nonlinear mapping function to arbitrary head positions. Experimental results, and an application in an interactive graphical interface system, verify the effectiveness of the method.
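As an illustration of the mapping stage described above, here is a minimal Python sketch of fitting a nonlinear polynomial from planar gaze parameters to on-screen gaze points by least squares. The second-order polynomial form, the nine calibration targets, and all names are assumptions for illustration; the paper does not specify them, and the GRNN head-compensation stage is omitted.

```python
import numpy as np

def poly_features(v):
    """Second-order polynomial terms of a planar gaze parameter (x, y)."""
    x, y = v
    return np.array([1.0, x, y, x * y, x * x, y * y])

def fit_gaze_mapping(gaze_params, screen_points):
    """Least-squares fit from planar gaze parameters to screen coordinates.

    gaze_params:   (N, 2) planar gaze parameters from calibration frames
    screen_points: (N, 2) known on-screen target positions (pixels)
    """
    A = np.array([poly_features(v) for v in gaze_params])   # (N, 6) design matrix
    W, *_ = np.linalg.lstsq(A, screen_points, rcond=None)   # (6, 2) coefficients
    return W

def predict_gaze_point(W, v):
    return poly_features(v) @ W

# Usage with synthetic calibration data (nine targets assumed):
rng = np.random.default_rng(0)
params = rng.uniform(-1.0, 1.0, size=(9, 2))     # fake planar gaze parameters
targets = rng.uniform(0.0, 1920.0, size=(9, 2))  # fake screen targets
W = fit_gaze_mapping(params, targets)
print(predict_gaze_point(W, params[0]))
```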


2.
The eyes carry a great deal of important information, and eye localization is of great practical value in face detection and face recognition. However, traditional geometry-based eye localization methods suffer, to varying degrees, from inaccurate localization or heavy computational cost when the gaze direction varies over multiple angles. This paper proposes an eye localization method for multi-angle gaze directions: after the face region is coarsely located, principal component analysis (PCA) is applied to the upper half of the face to compute the eye positions. Experiments demonstrate the effectiveness of the algorithm.

3.
A Survey of Feature-Based Gaze Tracking Methods
This paper surveys feature-based gaze tracking methods. It first reviews the development, related work, and current state of gaze tracking research, then divides feature-based methods into two broad classes, 2D and 3D gaze tracking, and analyzes both in depth in terms of hardware configuration, main error sources, the influence of head movement, and respective strengths and weaknesses. Representative feature-based methods from the past five years are compared, and several key problems in 2D and 3D gaze tracking systems are discussed. Applications of gaze tracking in human-computer interaction, medicine, the military, intelligent transportation, and other fields are then introduced. Finally, development trends and research hotspots of feature-based gaze tracking are summarized.

4.
Objective: Gaze tracking is an assistive technology for human-computer interaction. To address the high false-detection rate and long running time of traditional iris-localization methods, this paper proposes a gaze tracking method based on the geometric features of the human eye, improving tracking accuracy in a 2D environment. Method: A face detection algorithm first locates the face, facial landmark detection locates the eye-corner points, and the eye positions are computed from those corners. Because directly applying an iris-center localization algorithm is slow, an iris template is first built from iris images and used to detect the iris region; a fine iris-center localization algorithm then locates the iris center. Finally, the eye corners, iris center, and related points are extracted, and the angle and distance information they contain is combined into an eye-movement feature vector. A neural network model classifies these vectors to build the mapping to gaze points and realize gaze tracking. Image preprocessing enhances the images, after which the relative iris center and the required feature points are extracted, yielding relatively stable geometric features that represent eye movement. Results: Under ordinary laboratory lighting with a fixed head pose, the recognition rate reaches 98.9% at best and 95.74% on average. When the head pose varies within a restricted region, the recognition rate remains high, averaging above 90%; experimental analysis shows the method is robust within this permitted range of head movement. Conclusion: Combining template matching with fine iris localization quickly locates the iris center, and a neural network maps the gaze to its on-screen region; experiments confirm the method's high accuracy.
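The coarse iris localization via template matching described above can be sketched with OpenCV's normalized cross-correlation. This is a minimal sketch under assumed inputs: the file names and the template are placeholders, and the fine center localization and neural-network mapping stages are omitted.

```python
import cv2

# Hypothetical inputs: a cropped grayscale eye region and a pre-built iris template.
eye = cv2.imread("eye_region.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("iris_template.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation between the template and the eye image.
scores = cv2.matchTemplate(eye, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(scores)

h, w = template.shape
cx, cy = max_loc[0] + w // 2, max_loc[1] + h // 2   # coarse iris center
print(f"coarse iris center: ({cx}, {cy}), match score {max_val:.2f}")
```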

5.
This paper proposes a gaze estimation method (GEMHSSO) based on the pupil-corneal reflection (PCCR) technique. To address the main problems of existing PCCR methods, namely restricted head movement and per-user calibration, a head-position compensation method for a single-camera, single-light-source setup is proposed that analytically compensates for the effect of head position changes on the pupil-corneal reflection vector; a model of individual differences is also established, reducing user calibration to a single point. On this basis a new gaze estimation method is formed that lowers the minimum hardware requirement for accurate gaze estimation to a single (uncalibrated) camera and a single light source, avoids complex system calibration, permits natural head movement, and simplifies user calibration to a single point. Every stage of the method meets real-time requirements, providing an effective solution for gaze tracking systems oriented toward human-computer interaction.

6.
赵昕晨  杨楠 《计算机应用》2020,40(11):3295-3299
Real-time gaze tracking is a key technology for intelligent gaze-operated systems. Compared with techniques based on dedicated eye trackers, webcam-based techniques are cheaper and more broadly applicable. To address the low accuracy of existing camera-based algorithms that consider only eye-image features, this paper proposes an optimization that introduces head-pose analysis into gaze tracking. First, head-pose features are built from facial landmark detection results, providing head-pose context for the calibration data; next, a new similarity algorithm computes the similarity between head-pose contexts; finally, during tracking, the calibration data are filtered by head-pose similarity, and only calibration samples whose head pose is similar to that of the current input frame are used for prediction. Extensive experiments on data from groups with different characteristics show that, compared with WebGazer, the proposed algorithm reduces the mean error by 58-63 px. The algorithm effectively improves the accuracy and stability of tracking results and broadens the application of webcams in gaze tracking.
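The filtering step, selecting calibration samples whose head-pose context resembles the current frame, might look like the sketch below. The paper's own similarity algorithm is not reproduced here; cosine similarity, the threshold value, and the sample layout are stand-in assumptions.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def filter_calibration(samples, current_pose, threshold=0.95):
    """Keep calibration samples whose head-pose context matches the current frame.

    samples: list of (head_pose_vector, eye_features, screen_point) tuples
    """
    return [s for s in samples
            if cosine_similarity(s[0], current_pose) >= threshold]

# Usage with assumed shapes: predict only from pose-compatible calibration data.
rng = np.random.default_rng(0)
samples = [(rng.normal(size=6), rng.normal(size=8), rng.uniform(0, 1, 2))
           for _ in range(100)]
kept = filter_calibration(samples, current_pose=samples[0][0])
print(len(kept), "of", len(samples), "calibration samples retained")
```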

7.
This work studies the analysis and judgment of a driver's gaze direction for safe driving. After the face region is coarsely obtained, edges are detected with the Canny operator, the pupils are located by a Hough transform, and the driver's gaze direction is then analyzed and judged. From a large body of data and curve fitting, the relationship between pupil position and deflection angle is derived, yielding a threshold for distraction; when the threshold is exceeded, the detection system raises an alarm. Experiments demonstrate that the method monitors driver distraction effectively and in real time.
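A minimal OpenCV sketch of this pupil-localization chain (Canny edges, then a Hough circle transform) follows; the thresholds, radii, and file name are illustrative assumptions, not the paper's settings.

```python
import cv2
import numpy as np

eye = cv2.imread("driver_eye.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
blurred = cv2.GaussianBlur(eye, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)                       # edge map of the eye region

# HoughCircles applies its own Canny pass internally (param1 is the high
# threshold), so it takes the blurred image rather than the edge map above.
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
                           param1=150, param2=20, minRadius=5, maxRadius=40)
if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)
    print(f"pupil candidate at ({x}, {y}), radius {r}")
```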

9.
丁一芸 《信息与电脑》2023,(24):120-122
To track a user's eye movement in real time, an embedded human-machine monitoring system based on face and gaze tracking is designed. The system uses an ARM7-series chip as its core hardware and implements its monitoring function with a face detection algorithm and a gaze tracking method. Experimental results show that the coordinates tracked by the system agree with the actual coordinates, achieving accurate tracking.

10.
To extract high-precision sub-pixel eye feature parameters, a multi-channel-image extraction method that exploits the bright-pupil effect is proposed. The method first obtains the pupil region by filtering a difference image, then detects the edge of the pupil region and searches for the corneal reflection near the eye region based on gray levels. The centroid of that region locates the corneal-reflection center, and the pupil edge is filtered to remove the influence of the corneal reflection on the pupil contour, after which ellipse fitting locates the pupil center. Finally, multiple parameters, including eye features and face position, are extracted, and a pipeline for multi-feature extraction is established, providing the parameters for subsequent gaze estimation. Experimental results, and the final gaze-estimation results of the gaze tracking system, confirm the method's effectiveness.
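A sketch of the difference-image filtering and ellipse-fitting steps, assuming OpenCV, placeholder file names for the two IR illumination frames, and illustrative filter settings:

```python
import cv2

bright = cv2.imread("bright_pupil.png", cv2.IMREAD_GRAYSCALE)  # on-axis IR frame
dark = cv2.imread("dark_pupil.png", cv2.IMREAD_GRAYSCALE)      # off-axis IR frame

# The pupil stands out in the difference of the two illumination conditions.
diff = cv2.medianBlur(cv2.subtract(bright, dark), 5)
_, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
pupil = max(contours, key=cv2.contourArea)     # largest blob taken as the pupil
if len(pupil) >= 5:                            # fitEllipse needs at least 5 points
    (cx, cy), axes, angle = cv2.fitEllipse(pupil)
    print(f"sub-pixel pupil center: ({cx:.2f}, {cy:.2f})")
```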

11.
Head gaze, or the orientation of the head, is a very important attentional cue in face-to-face conversation. Some subtleties of the gaze can be lost in common teleconferencing systems, because a single perspective warps spatial characteristics. A recent random hole display is a potentially interesting display for group conversation, as it allows multiple stereo viewers in arbitrary locations, without the restriction that conventional autostereoscopic displays place on viewing positions. We represented a remote person as an avatar on a random hole display and evaluated this system by measuring the ability of multiple observers, at different horizontal and vertical viewing angles, to accurately and simultaneously judge which targets the avatar was gazing at. We compared three perspective conditions: a conventional 2D view, a monoscopic perspective-correct view, and a stereoscopic perspective-correct view. In the latter two conditions, the random hole display shows three and six views simultaneously. Although the random hole display does not provide a high-quality view, because it has to distribute display pixels among multiple viewers, the different views are easily distinguished. Results suggest the combined presence of perspective-correct and stereoscopic cues significantly improved the effectiveness with which observers were able to assess the avatar's head gaze direction. This motivates the need for stereo in future multiview displays.

12.
When estimating human gaze directions from captured eye appearances, most existing methods assume a fixed head pose because head motion changes eye appearance greatly and makes the estimation inaccurate. To handle this difficult problem, in this paper, we propose a novel method that performs accurate gaze estimation without restricting the user's head motion. The key idea is to decompose the original free-head motion problem into subproblems, including an initial fixed head pose problem and subsequent compensations to correct the initial estimation biases. For the initial estimation, automatic image rectification and joint alignment with gaze estimation are introduced. Then compensations are done by either learning-based regression or geometric-based calculation. The merit of using such a compensation strategy is that the training requirement to allow head motion is not significantly increased; only capturing a 5-s video clip is required. Experiments are conducted, and the results show that our method achieves an average accuracy of around 3° by using only a single camera.

13.

Eye gaze is considered to be a particularly important non-verbal communication cue, and gaze research is becoming a hot topic in human–robot interaction (HRI). However, research on social eye gaze for HRI focuses mainly on human-like robots; methods are still lacking for functional robots, which are constrained in appearance, to show gaze-like behavior. In this work, we investigate how gaze behavior can be implemented in functional robots to help humans read their intent. We explore design implications based on LED lights, since LEDs can be easily installed in most robots without introducing features that are too human-like (which would raise users' expectations of the robots). We first developed a design interface that allows designers to freely test different parameter settings for an LED-based gaze display on a Roomba robot, and summarized design principles for simulating gaze well with LEDs. The suggested design was then evaluated with a large group of participants with regard to their perception and interpretation of the robot's behaviors. On the basis of the findings, we offer a set of design implications that can benefit HRI and HCI researchers.

14.
15.
Eye and gaze tracking for interactive graphic display
This paper describes a computer vision system based on active IR illumination for real-time gaze tracking for interactive graphic display. Unlike most existing gaze tracking techniques, which often require a static head to work well and a cumbersome per-user calibration process, our gaze tracker can perform robust and accurate gaze estimation without calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using generalized regression neural networks (GRNNs). With GRNNs, the mapping does not have to be an analytical function, and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function can generalize to individuals not used in the training. To further improve the gaze estimation accuracy, we employ a hierarchical classification scheme that deals with the classes that tend to be misclassified, which leads to an improvement in classification error. The angular gaze accuracy is about horizontally and vertically. The effectiveness of our gaze tracker is demonstrated by experiments that involve gaze-contingent interactive graphic display.
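A GRNN is essentially a Gaussian-kernel-weighted average over training samples, so the mapping stage named above can be sketched compactly. The feature dimensionality, smoothing factor, and data below are assumptions for illustration, not details from the paper.

```python
import numpy as np

class GRNN:
    """Generalized regression neural network: a Gaussian-kernel-weighted
    average of training targets (Nadaraya-Watson regression)."""

    def __init__(self, sigma=0.5):
        self.sigma = sigma

    def fit(self, X, Y):
        self.X, self.Y = np.asarray(X, float), np.asarray(Y, float)
        return self

    def predict(self, x):
        d2 = np.sum((self.X - x) ** 2, axis=1)          # squared distances
        w = np.exp(-d2 / (2.0 * self.sigma ** 2))       # pattern-layer activations
        return w @ self.Y / (w.sum() + 1e-12)           # weighted target average

# Usage: map assumed 6-D pupil parameter vectors to 2-D screen coordinates.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 6))               # fake pupil parameter vectors
Y = rng.uniform(0, 1080, size=(50, 2))     # fake screen coordinates (pixels)
print(GRNN().fit(X, Y).predict(X[0]))
```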

16.
This article describes real-time gaze control using position-based visual servoing. The main control objective is to make the gaze point track the target so that the target's image feature is located at each image center. The overall system consists of two parts: the vision process and the control system. The vision system extracts a predefined color feature from images; an adaptive look-up table method is proposed to obtain the 3-D position of the feature within the video frame rate under varying illumination. An uncalibrated camera raises the problem that the reconstructed 3-D positions are not correct. To solve this calibration problem in the position-based approach, we constructed an end-point closed-loop system using an active head-eye system. In the proposed control system, the reconstructed position error is used with a Jacobian matrix of the kinematic relation. The system stability is locally guaranteed, as in image-based visual servoing, and the gaze position was shown to converge to the feature position. The proposed approach was successfully applied to tracking a moving target in simulations and real experiments, and the processing speed meets real-time requirements. This work was presented in part at the Sixth International Symposium on Artificial Life and Robotics, Tokyo, January 15–17, 2001.
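The control law, mapping a reconstructed position error through (the pseudo-inverse of) a kinematic Jacobian to joint updates, can be sketched for an assumed 2-DOF pan-tilt head; the gaze-direction model, gain, and iteration count below are illustrative assumptions rather than the article's actual head-eye kinematics.

```python
import numpy as np

def gaze_dir(q):
    """Unit gaze direction of an assumed 2-DOF pan-tilt head, q = (pan, tilt)."""
    pan, tilt = q
    return np.array([np.cos(tilt) * np.sin(pan),
                     np.sin(tilt),
                     np.cos(tilt) * np.cos(pan)])

def jacobian(q):
    """Partial derivatives of the gaze direction w.r.t. the joint angles."""
    pan, tilt = q
    return np.array([
        [ np.cos(tilt) * np.cos(pan), -np.sin(tilt) * np.sin(pan)],
        [ 0.0,                         np.cos(tilt)              ],
        [-np.cos(tilt) * np.sin(pan), -np.sin(tilt) * np.cos(pan)],
    ])

target = np.array([0.5, 0.3, 1.0])
target /= np.linalg.norm(target)           # desired gaze direction (unit vector)

q = np.zeros(2)
for _ in range(100):
    e = target - gaze_dir(q)                         # reconstructed direction error
    q += 0.5 * (np.linalg.pinv(jacobian(q)) @ e)     # Jacobian-based joint update
print("joints:", q, "gaze:", gaze_dir(q))
```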

17.
We introduce a system to compute both head orientation and gaze direction from a single image. The system uses a camera with fixed parameters and requires no user calibration. Our approach to head orientation is based on a geometrical model of the human face, derived from morphological and physiological data. Eye gaze detection is based on a geometrical model of the human eye. Two new algorithms are introduced that require either two or three feature points to be extracted from each image. Our algorithms are robust and run in real time on a typical PC, which makes the system useful for a wide variety of needs, from driver attention monitoring to human-machine interaction.

18.
This paper investigates the role of gaze and gesture when subjects collaboratively solve physics problems with a computer. The results indicate that female pairs use gaze significantly more than male or mixed-gender pairs. There is evidence that gaze occurs during the planning stages of the problem-solving activities and, during this phase, occurs more frequently in the speaker than the hearer; mutual gazing occurs at this time too. The main finding is that differences in non-verbal communication strategies across gender groupings affect not only the strategies that move the collaborative process forward but, more importantly, also those that shape the understanding of the problem space. These results suggest that the quality of the video link will play an important role in collaborative problem solving for distance learners.

19.

This paper proposes an unobtrusive and calibration-free framework for eye-gaze-based interactive directional control in a desktop environment, using a simple webcam under unconstrained settings. The proposed gaze tracking uses a hybrid approach that combines a supervised and an unsupervised technique: an unsupervised image-gradients method computes the iris centers within eye regions extracted by a supervised regression-based detector. Experiments with this hybrid approach on challenging face image datasets produced promising results for detecting eye regions and iris centers, and the same approach works well in real time with a simple web camera. Further, a PC-based interactive directional control interface driven by iris position has been designed that, unlike infrared-illumination-based eye trackers, requires no prior calibration.

The proposed work may be useful to people with full-body motor disabilities, who need interactive and unobtrusive gaze-controlled applications to live independently.
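As a sketch of the unsupervised image-gradients step (in the spirit of means-of-gradients iris localization), the snippet below scores candidate centers by how well the displacement vectors to strong-gradient pixels align with those gradients. The grid stride, gradient threshold, and input crop are assumptions; the supervised eye-region detector is taken as given.

```python
import cv2
import numpy as np

def iris_center_by_gradients(eye):
    """Pick the point whose displacements best align with the image gradients."""
    gx = cv2.Sobel(eye, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(eye, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    keep = mag > mag.mean()                     # keep only strong gradients
    ys, xs = np.nonzero(keep)
    gxn, gyn = gx[keep] / mag[keep], gy[keep] / mag[keep]

    h, w = eye.shape
    best, center = -1.0, (0, 0)
    for cy in range(0, h, 2):                   # coarse candidate grid for speed
        for cx in range(0, w, 2):
            dx, dy = xs - cx, ys - cy
            norm = np.hypot(dx, dy) + 1e-9
            dots = (dx / norm) * gxn + (dy / norm) * gyn
            score = np.mean(np.maximum(dots, 0.0) ** 2)
            if score > best:
                best, center = score, (cx, cy)
    return center

eye = cv2.imread("eye_crop.png", cv2.IMREAD_GRAYSCALE)  # hypothetical eye crop
print("iris center:", iris_center_by_gradients(eye))
```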


20.
Eye contact and gaze awareness play a significant role in conveying emotions and intentions during face-to-face conversation. Humans can perceive each other's gaze quite naturally and accurately. However, gaze awareness and perception are ambiguous during video teleconferencing on computer-based devices (such as laptops, tablets, and smartphones). The reasons for this ambiguity are (i) the camera position relative to the screen and (ii) the 2D rendition of the 3D human face: a 2D screen is unable to deliver accurate gaze during video teleconferencing. To solve this problem, researchers have proposed different hardware setups with complex software algorithms. The most recent solutions for accurate gaze perception employ 3D interfaces, such as 3D screens and 3D face masks. However, the video teleconferencing devices in common use today are smart devices with 2D screens, so there is a need to improve gaze awareness and perception on these devices. In this work, we revisit the question of how to improve a remote user's gaze awareness among his or her collaborators. Our hypothesis is that accurate gaze perception can be achieved by the 3D embodiment of a remote user's head gestures during video teleconferencing. We have prototyped an embodied telepresence system (ETS) for the 3D embodiment of a remote user's head. Our ETS is based on a 3-DOF neck robot with a mounted smart device (a tablet PC). This electromechanical platform combined with a smart device is a novel setup for studying gaze awareness and perception on 2D-screen smart devices during video teleconferencing. Two important gaze-related issues are considered: (i) the 'Mona Lisa gaze effect', where the gaze appears directed at the observer regardless of his or her position in the room, and (ii) 'gaze awareness/faithfulness', the ability of an observer to perceive an accurate spatial relationship between the observing person and the object. Our results confirm that the 3D embodiment of a remote user's head not only mitigates the Mona Lisa gaze effect but also supports three levels of gaze faithfulness, accurately projecting the human gaze into the distant space.
