Found 20 similar documents; search took 15 ms.
1.
Li Zhang Ming Jiang Dewan Farid M.A. Hossain 《Expert systems with applications》2013,40(13):5160-5168
Automatic perception of human affective behaviour from facial expressions and recognition of intentions and social goals from dialogue contexts would greatly enhance natural human robot interaction. This research concentrates on intelligent neural network based facial emotion recognition and Latent Semantic Analysis based topic detection for a humanoid robot. The work has first of all incorporated Facial Action Coding System describing physical cues and anatomical knowledge of facial behaviour for the detection of neutral and six basic emotions from real-time posed facial expressions. Feedforward neural networks (NN) are used to respectively implement both upper and lower facial Action Units (AU) analysers to recognise six upper and 11 lower facial actions including Inner and Outer Brow Raiser, Lid Tightener, Lip Corner Puller, Upper Lip Raiser, Nose Wrinkler, Mouth Stretch etc. An artificial neural network based facial emotion recogniser is subsequently used to accept the derived 17 Action Units as inputs to decode neutral and six basic emotions from facial expressions. Moreover, in order to advise the robot to make appropriate responses based on the detected affective facial behaviours, Latent Semantic Analysis is used to focus on underlying semantic structures of the data and go beyond linguistic restrictions to identify topics embedded in the users’ conversations. The overall development is integrated with a modern humanoid robot platform under its Linux C++ SDKs. The work presented here shows great potential in developing personalised intelligent agents/robots with emotion and social intelligence.
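The AU-to-emotion decoding stage described in this abstract can be sketched as a small feedforward network. This is a minimal illustration with made-up weights, hidden size, and label ordering; only the 17-input/7-output dimensions (17 Action Units, neutral plus six basic emotions) come from the abstract, and the sketch is not the authors' trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Dimensions from the abstract: 17 Action Units in, 7 classes out
# (neutral + six basic emotions). Hidden size and weights are illustrative.
N_AUS, N_HIDDEN, N_EMOTIONS = 17, 12, 7
EMOTIONS = ["neutral", "anger", "disgust", "fear",
            "happiness", "sadness", "surprise"]

W1 = rng.normal(scale=0.3, size=(N_HIDDEN, N_AUS))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(scale=0.3, size=(N_EMOTIONS, N_HIDDEN))
b2 = np.zeros(N_EMOTIONS)

def recognise(au_activations):
    """Map a vector of 17 AU activations (0..1) to an emotion label."""
    h = np.tanh(W1 @ au_activations + b1)   # hidden layer
    p = softmax(W2 @ h + b2)                # class probabilities
    return EMOTIONS[int(np.argmax(p))], p

label, probs = recognise(rng.uniform(0, 1, N_AUS))
print(label)
```

In practice the AU activations feeding this stage would come from the upper- and lower-face AU analysers, and the weights from supervised training on labelled expressions.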
2.
OBJECTIVE: The present study compared the effectiveness of a virtual reality (VR) simulator for training phlebotomy with that of a more traditional approach using simulated limbs. BACKGROUND: Phlebotomy, or drawing blood, is one of the most common medical procedures; yet, there are no universal standards for training and assessing performance. The absence of any standards can lead to injuries and inaccurate test results if the procedure is improperly performed. METHOD: Twenty 3rd-year medical students were trained under one of the two methods and had their performance assessed with a 28-item checklist. RESULTS: The results showed that performance improvements were limited to those who trained with the simulated limbs, and a detailed comparison of the two systems revealed several functional and physical differences that may explain these findings. CONCLUSION: Participants trained with simulated limbs performed better than those trained with a VR simulator; however, the metrics recorded by the VR system may address some aspects of performance that could eventually prove beneficial. APPLICATION: The present study highlights the potential for medical simulators to improve patient safety by enabling trainees to practice procedures on devices instead of patients. Applications of this research include training, performance assessment, and design of simulator systems.
3.
Virtual Reality - Recent studies have indicated that facial electromyogram (fEMG)-based facial-expression recognition (FER) systems are promising alternatives to the conventional camera-based FER...
4.
5.
Motion recognition is a topic in computer science and language technology whose goal is to interpret human gestures through mathematical algorithms. Hand gestures are a means of nonverbal communication that can express more than other body parts. Hand gesture recognition is therefore of great significance in designing an efficient human-computer interaction framework that uses gestures as a natural interface. More broadly, the identification and recognition of posture, gait, proxemics, and human behaviours is also a subject of motion recognition, aiming to understand human nonverbal communication and thus build a richer bridge between machines and humans than primitive text user interfaces or even graphical user interfaces, which still limit the majority of input to electronic devices. In this paper, a study of various motion recognition methodologies is given, with specific emphasis on hand motions. A survey of hand posture and gesture recognition is presented, with a detailed comparative analysis of the hidden Markov model approach against other classifier techniques. Challenges and future research directions are also discussed.
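The hidden Markov model approach that the survey compares against other classifiers typically scores a quantised gesture trajectory under one HMM per gesture and picks the best-scoring model. A minimal sketch with the scaled forward algorithm follows; the two-state models, the three motion symbols, and all parameter values are invented for illustration:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (pi: initial probs, A: transition matrix, B: emission probs),
    computed with the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then weight by emission
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s                      # rescale to avoid underflow
    return loglik

# Two toy 2-state gesture models over 3 quantised motion symbols.
pi = np.array([0.6, 0.4])
A  = np.array([[0.7, 0.3], [0.2, 0.8]])
B_wave = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1]])  # "wave" favours 0/1
B_push = np.array([[0.1, 0.1, 0.8], [0.1, 0.2, 0.7]])  # "push" favours 2
models = {"wave": B_wave, "push": B_push}

obs = [0, 1, 1, 0, 1]  # a quantised hand trajectory
best = max(models, key=lambda g: forward_loglik(obs, pi, A, models[g]))
print(best)
```

Classification is then a maximum-likelihood decision over the per-gesture models; real systems would learn the parameters with Baum-Welch from recorded trajectories.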
6.
程鹏翔 《计算机测量与控制》2020,28(12):238-242
Existing ammunition-inspection virtual training systems suffer from low detection-signal reception rates and low virtual training success rates. A new ammunition-inspection virtual training system is designed based on 3D virtual reality technology. The hardware consists of a master control operator, a detection bus interface, and a resource tester. The software builds a virtual space using 3D virtual reality technology, partitions the detection indicators on top of an open software architecture, and revises the user-interface files, using the ammunition-inspection indicator data to keep the revisions on the right course. A self-check formula performs a system self-check on the detected information, and the trigger mechanism of the next fault is inferred from the form of the detected problem, thereby preventing its recurrence and completing the software workflow. Experimental results show that the 3D-virtual-reality-based ammunition-inspection virtual training system effectively improves the signal reception rate and the virtual training success rate, and has strong applicability.
7.
Earthquake emergencies require a variety of behavioral responses in order to ensure the safety of occupants, which is different from simply exiting a building in fire emergencies. This makes it more complex to train building occupants in order to acquire skills that align to best practices for immediate earthquake response and post-earthquake evacuation. In recent years, Immersive Virtual Reality (IVR) and Serious Games (SGs) have become popular as training tools for earthquake emergencies. IVR SGs have been introduced to train individuals for specific building layouts or settings with fixed training objectives. However, the lack of flexibility in existing IVR SGs makes it challenging to have widespread uptake as trainees require different training objectives, pedagogical strategies, context, and content. As a result, the effectiveness of IVR SGs training is jeopardized if the customization ability is limited. To overcome this limitation, this paper presents a customization framework for IVR SGs suited to earthquake emergency training, using the concept of adaptive game-based learning. Trainees can receive training in context by customizing virtual environments, storylines, and teaching methods. A case study was undertaken to validate the proposed framework. Results showed the potential to carry out the customization process with ease, to generate a customized training experience, and to deliver the customized training for optimum learning.
8.
Te-Yung Fang Pa-Chun Wang Chih-Hsien Liu Mu-Chun Su Shih-Ching Yeh 《Computer methods and programs in biomedicine》2014
Introduction
Virtual reality simulation training may improve knowledge of anatomy and surgical skills. We evaluated a 3-dimensional, haptic, virtual reality temporal bone simulator for dissection training.Methods
The subjects were 7 otolaryngology residents (3 training sessions each) and 7 medical students (1 training session each). The virtual reality temporal bone simulation station included a computer with software that was linked to a force-feedback hand stylus, and the system recorded performance and collisions with vital anatomic structures. Subjects performed virtual reality dissections and completed questionnaires after the training sessions.Results
Residents and students had favorable responses to most questions of the technology acceptance model (TAM) questionnaire. The average TAM scores were above neutral for residents and medical students in all domains, and the average TAM score for residents was significantly higher for the usefulness domain and lower for the playful domain than students. The average satisfaction questionnaire for residents showed that residents had greater overall satisfaction with cadaver temporal bone dissection training than training with the virtual reality simulator or plastic temporal bone. For medical students, the average comprehension score was significantly increased from before to after training for all anatomic structures. Medical students had significantly more collisions with the dura than residents. The residents had similar mean performance scores after the first and third training sessions for all dissection procedures.Discussion
The virtual reality temporal bone simulator provided satisfactory training for otolaryngology residents and medical students.
9.
Hazourli Ahmed Rachid Djeghri Amine Salam Hanan Othmani Alice 《Multimedia Tools and Applications》2021,80(9):13639-13662
Multimedia Tools and Applications - In this paper, an approach for Facial Expressions Recognition (FER) based on a multi-facial patches (MFP) aggregation network is proposed. Deep features are...
10.
Fast-gesture recognition and classification using Kinect: an application for a virtual reality drumkit
Alejandro Rosa-Pujazón Isabel Barbancho Lorenzo J. Tardón Ana M. Barbancho 《Multimedia Tools and Applications》2016,75(14):8137-8164
In this paper, we present a system for the detection of fast gestural motion by using a linear predictor of hand movements. We also use the proposed detection scheme for the implementation of a virtual drumkit simulator. A database of drum-hitting motions is gathered and two different sets of features are proposed to discriminate different drum-hitting gestures. The two feature sets are related to observations of different nature: the trajectory of the hand and the pose of the arm. These two sets are used to train classifier models using a variety of machine learning techniques in order to analyse which features and machine learning techniques are more suitable for our classification task. Finally, the system has been validated by means of the Kinect application implemented and the participation of 12 different subjects for the experimental performance evaluation. Results showed a successful discrimination rate higher than 95 % for six different gestures per hand and good user experience.
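A linear predictor of hand movements, as used above to catch fast drum-hitting motion before the tracker lags behind, can be sketched as a constant-velocity extrapolation of the tracked hand position. The threshold value and the sample data below are made up; the abstract does not give the authors' actual predictor or thresholds:

```python
import numpy as np

def predict_next(positions, dt=1.0):
    """Constant-velocity linear prediction of the next hand position
    from the last two tracked samples (positions: list of (x, y, z))."""
    p_prev, p_curr = np.asarray(positions[-2]), np.asarray(positions[-1])
    velocity = (p_curr - p_prev) / dt
    return p_curr + velocity * dt

def is_fast_hit(positions, speed_threshold=1.5, dt=1.0):
    """Flag a drum-hit-like fast motion when the current hand speed
    exceeds a threshold (units and value are illustrative)."""
    v = (np.asarray(positions[-1]) - np.asarray(positions[-2])) / dt
    return float(np.linalg.norm(v)) > speed_threshold

# A short synthetic track where the hand drops quickly in y.
track = [(0.0, 1.0, 0.0), (0.0, 0.9, 0.0), (0.0, 0.5, 0.0)]
print(predict_next(track), is_fast_hit(track, speed_threshold=0.3))
```

Predicting one sample ahead lets the system trigger the drum sound at the extrapolated impact time rather than after the sensor confirms it, which matters at Kinect's limited frame rate.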
11.
12.
Automated detection and recognition of faces have been implemented in a broad range of media environments. Following that development, what concerns us in this article is the analysis of emotions from facial expressions using computer-based systems, in relation to which we critically investigate the use of theories of basic emotions. We explore in depth the company Affectiva’s attempts to translate, represent and schematize human emotions, as they raise a variety of problems and issues of uncertainty. We analyse the uncertainties concerning the processing of the human face ‘as image’ due to issues concerning temporality and static images as well as polyphony and modulations of the spectrum of expressions. One of our key arguments concerns the temporal character of human emotions, and we address how algorithmically regulated protocols of discretization may be said to prompt specific patterns of emotional responses and expressions based on an ideal of eliminating uncertainty. Through discussions via art pieces by Lauren McCarthy and Kyle McDonald, we question what happens when the protocols of computer systems start to perform aspects of emotional labour for us, making judgments by predicting adequate emotional responses based on the use of the strict metrics criticized in the article.
13.
Evaluation of a low-cost 3D sound system for immersive virtual reality training systems
Doerr KU Rademacher H Huesgen S Kubbat W 《IEEE transactions on visualization and computer graphics》2007,13(2):204-212
Since head mounted displays (HMD), datagloves, tracking systems, and powerful computer graphics resources are nowadays in an affordable price range, the usage of PC-based "virtual training systems" becomes very attractive. However, due to the limited field of view of HMD devices, additional modalities have to be provided to benefit from 3D environments. A 3D sound simulation can improve the capabilities of VR systems dramatically. Unfortunately, realistic 3D sound simulations are expensive and demand a tremendous amount of computational power to calculate reverberation, occlusion, and obstruction effects. To use 3D sound in a PC-based training system as a way to direct and guide trainees to observe specific events in 3D space, a cheaper alternative has to be provided, so that a broader range of applications can take advantage of this modality. To address this issue, we focus in this paper on the evaluation of a low-cost 3D sound simulation that is capable of providing traceable 3D sound events. We describe our experimental system setup using conventional stereo headsets in combination with a tracked HMD device and present our results with regard to precision, speed, and used signal types for localizing simulated sound events in a virtual training environment.
14.
Bürger D. Pastel S. Chen C.-H. Petri K. Schmitz M. Wischerath L. Witte K. 《Virtual Reality》2023,27(3):1751-1764
Virtual Reality - Previous studies showed similar spatial orientation ability (SO) between real world (RW) and virtual reality (VR). As the SO deteriorates with age, it is crucial to investigate...
15.
Ricci Fabiana Sofia Boldini Alain Beheshti Mahya Rizzo John-Ross Porfiri Maurizio 《Virtual Reality》2023,27(2):797-814
Virtual Reality - Blindness and low vision are an urgent, steadily increasing public health concern. One of the most dramatic consequences of the debilitating conditions that cause visual...
16.
New algorithm for 3D facial model reconstruction and its application in virtual reality
Rong-Hua Liang Zhi-Geng Pan Chun Chen 《计算机科学技术学报》2004,19(4):0-0
3D human face model reconstruction is essential to the generation of facial animations, which are widely used in the field of virtual reality (VR). The main issues of image-based 3D facial model reconstruction using vision technologies are twofold: one is to select and match the corresponding facial features from two images with minimal interaction, and the other is to generate a realistic-looking human face model. In this paper, a new algorithm for realistic-looking face reconstruction based on stereo vision is presented. Firstly, a pattern is printed and attached to a planar surface for camera calibration; corner generation and corner matching between the two images are performed by integrating a modified pyramid Lucas-Kanade (PLK) algorithm with a local adjustment algorithm, and the 3D coordinates of the corners are then obtained by 3D reconstruction. An individual face model is generated by deformation of a general 3D model and interpolation of the features. Finally, a realistic-looking human face model is obtained.
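The Lucas-Kanade matching step at the heart of the corner-tracking stage can be illustrated on a single patch: under a brightness-constancy assumption, the displacement between two frames is the least-squares solution of a 2x2 system built from image gradients. The sketch below estimates one global translation on a synthetic image; the paper's pipeline would run this per corner and over an image pyramid to handle large motions:

```python
import numpy as np

def lucas_kanade_flow(I0, I1):
    """Estimate a single (dx, dy) translation between two frames by
    solving the Lucas-Kanade least-squares system over the whole patch:
    sum over pixels of (Ix*dx + Iy*dy + It)^2 is minimised."""
    Iy, Ix = np.gradient(I0.astype(float))   # np.gradient: axis 0 (y) first
    It = I1.astype(float) - I0.astype(float)
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)             # (dx, dy)

# Synthetic test: a smooth Gaussian blob shifted by one pixel in x.
y, x = np.mgrid[0:64, 0:64]
I0 = np.exp(-((x - 30) ** 2 + (y - 32) ** 2) / 50.0)
I1 = np.exp(-((x - 31) ** 2 + (y - 32) ** 2) / 50.0)
dx, dy = lucas_kanade_flow(I0, I1)
print(round(float(dx), 2), round(float(dy), 2))
```

The recovered displacement is close to (1, 0); the pyramid variant repeats this solve from coarse to fine resolution so the linearisation stays valid for motions larger than a pixel.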
17.
Björn Schuller Zixing Zhang Felix Weninger Felix Burkhardt 《International Journal of Speech Technology》2012,15(3):313-323
Recognizing speakers in emotional conditions remains a challenging issue, since speaker states such as emotion affect the acoustic parameters used in typical speaker recognition systems. Thus, it is believed that knowledge of the current speaker emotion can improve speaker recognition in real life conditions. Conversely, speech emotion recognition still has to overcome several barriers before it can be employed in realistic situations, as is already the case with speech and speaker recognition. One of these barriers is the lack of suitable training data, both in quantity and quality—especially data that allow recognizers to generalize across application scenarios (‘cross-corpus’ setting). In previous work, we have shown that in principle, the usage of synthesized emotional speech for model training can be beneficial for recognition of human emotions from speech. In this study, we aim at consolidating these first results in a large-scale cross-corpus evaluation on eight of the most frequently used human emotional speech corpora, namely ABC, AVIC, DES, EMO-DB, eNTERFACE, SAL, SUSAS and VAM, covering natural, induced and acted emotion as well as a variety of application scenarios and acoustic conditions. Synthesized speech is evaluated standalone as well as in joint training with human speech. Our results show that the usage of synthesized emotional speech in acoustic model training can significantly improve recognition of arousal from human speech in the challenging cross-corpus setting.
18.
Virtual Reality - Speech recognition technology is a promising hands-free interfacing modality for virtual reality (VR) applications. However, it has several drawbacks, such as limited usability in...
19.
20.
This paper presents two novel directional patterns, a Maximum Response-based Directional Texture Pattern (MRDTP) and a Maximum Response-based Directional Number Pattern (MRDNP), for recognizing the facial emotions in constrained as well as unconstrained situations. The intensity information obtained from the maximum of the edge responses, after applying eight Kirsch masks, is used for the calculation of facial features in MRDTP. In MRDNP, instead of intensity information, the direction number of the maximum response is used. After dividing MRDNP and MRDTP code images into grids, feature vectors are created from the concatenated histograms obtained from the grids. This paper also proposes an effective Generalized Supervised Dimension Reduction System (GSDRS) and uses Extreme Learning Machine with Radial Basis Function (ELM-RBF) classifier for rapid and efficient classification of emotions. Both the proposed patterns are more effective than the existing ones in removing random noise and providing good structural information using prominent edges which help to achieve high classification accuracy when tested with seven datasets.
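The maximum-response coding described in this abstract can be sketched directly: apply the eight Kirsch compass masks, and at each pixel keep the direction number of the strongest response (the MRDNP idea; MRDTP would keep the response intensity instead). Everything below is a simplified reconstruction from the abstract; the test image, grid handling, and the 3x3 correlation helper are my own, and the full method's grid split with concatenated histograms is reduced here to one global histogram:

```python
import numpy as np

# The eight 3x3 Kirsch compass masks, generated by rotating the
# perimeter of the base mask (three 5s facing one direction, -3 elsewhere).
PERIM = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
BASE = [5, 5, 5, -3, -3, -3, -3, -3]

def kirsch_masks():
    masks = []
    for r in range(8):
        m = np.zeros((3, 3))
        for idx, (i, j) in enumerate(PERIM):
            m[i, j] = BASE[(idx - r) % 8]
        masks.append(m)
    return masks

def correlate3(img, k):
    """Tiny 3x3 correlation with edge padding, to keep the sketch
    dependency-free (scipy.ndimage.correlate would do the same)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + h, j:j + w]
    return out

def mrdnp_codes(img):
    """Per-pixel direction number (0..7) of the maximum Kirsch response."""
    responses = np.stack([correlate3(img, m) for m in kirsch_masks()])
    return np.argmax(responses, axis=0)

img = np.zeros((8, 8)); img[:, 4:] = 1.0        # a vertical step edge
codes = mrdnp_codes(img)
hist = np.bincount(codes.ravel(), minlength=8)  # histogram feature vector
print(hist)
```

Keeping only the argmax direction discards noisy sub-maximal responses, which is the property the abstract credits for the patterns' robustness to random noise.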