Similar Documents
 20 similar documents found (search time: 125 ms)
1.
This paper proposes an intelligent computer-assisted instruction (ICAI) system that can recognize and understand a learner's emotions and respond to them. Combining facial expression recognition with multi-Agent technology, the system introduces a new expression-feature extraction method based on segmentation information from face images, extracting features from the eyes, eyebrows, and mouth separately; applied in the expression recognition module, this method achieves good results. Multi-Agent technology is also introduced to implement the main functions of each module. Experimental results show that the system effectively alleviates the lack of emotional interaction in online teaching and executes efficiently.

2.
A humanized E-learning system centered on Agent technology is designed. On the basis of Agent technology and artificial psychology theory, an ISM multi-level structured Learning-Map is constructed, and a personalized Learning-Map is realized on top of it. On the humanization side, image-processing-based face detection and expression recognition are used to perceive the learner's emotions, an emotion cognition model is constructed, and an intelligent Agent assistant processes the recognized emotional states.

3.
Research on speech recognition for an emotional robot based on BDI Agent technology   (Total citations: 3; self-citations: 0; citations by others: 3)
This paper describes the application of BDI Agent technology to the implementation of an emotional robot. By building an Agent-based speech recognition network, the robot can correctly recognize and understand external speech input, laying the foundation for further emotional interaction. The discussion focuses on how to build the recognizer and the interpreter on top of BDI Agent technology.

4.
An intelligent E-learning system with emotional interaction   (Total citations: 2; self-citations: 0; citations by others: 2)
The widespread lack of emotion in online teaching is analyzed. Fuzzy emotion techniques are applied to online teaching, and an intelligent E-learning system centered on Agents is constructed, realizing personalized teaching and learner emotion recognition. An online learning evaluation subsystem is built on the basis of fuzzy teaching; intelligent Agents promptly capture the learner's emotional information and learning state during online study, and the system reacts emotionally in time according to the learner's state and the evaluation results.

5.
Audio-visual fusion for emotion recognition based on dynamic Bayesian networks   (Total citations: 1; self-citations: 0; citations by others: 1)
In audio-visual emotion recognition, how to fuse the audio and visual emotional information is the key problem. The traditional fusion method uses a state-synchronous multi-stream hidden Markov model (Syn_AVHMM), which ignores the asynchrony between audio and video emotional cues and thus degrades recognition. To describe the correlation and asynchrony between audio and visual emotional information more accurately, a multi-stream dynamic Bayesian network model (Asy_DBN) is proposed in which the audio and visual states may be asynchronous and the degree of asynchrony is controllable; emotion recognition experiments are carried out on the eNTERFACE'05 audio-visual emotion database. The results show that by adjusting the asynchrony constraint between the audio and visual state streams, Asy_DBN achieves the best recognition: its average recognition rate over six emotions is 9.88% higher than that of the HMM, providing a basis for practical application.

6.
Fuzzy emotion analysis and its application in E-learning systems   (Total citations: 1; self-citations: 0; citations by others: 1)
Based on fuzzy mathematics, this paper proposes a comprehensive evaluation of learners using the fuzzy evaluation-set method, a new extension and application of fuzzy pattern recognition that offers a new line of research for anthropomorphism and emotion recognition. An intelligent Agent assistant with emotional responses is also designed. The results of the proposed fuzzy emotion recognition are then applied to an E-learning system, making the teaching system more humanized and intelligent.

7.
Facial expression recognition spans affective computing, image processing, machine vision and pattern recognition, and biometric recognition, making it a highly challenging interdisciplinary topic. This paper presents a facial expression recognition technique that uses principal component analysis (PCA) to reduce the dimensionality of expression image data and a support vector machine (SVM) for classification.
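The PCA step described above can be sketched in miniature. The fragment below is a hypothetical pure-Python illustration, not the paper's implementation: it mean-centers toy "expression vectors" and finds the top principal component by power iteration. A real system would run this on pixel vectors with a linear-algebra library and pass the projected features to an SVM.

```python
# Sketch of PCA dimensionality reduction via power iteration (toy data).
def mean_center(data):
    n, d = len(data), len(data[0])
    mean = [sum(row[j] for row in data) / n for j in range(d)]
    return [[row[j] - mean[j] for j in range(d)] for row in data], mean

def top_component(data, iters=200):
    # Power iteration on the covariance matrix X^T X, applied implicitly.
    d = len(data[0])
    v = [1.0] * d
    for _ in range(iters):
        xv = [sum(r[j] * v[j] for j in range(d)) for r in data]       # X v
        w = [sum(data[i][j] * xv[i] for i in range(len(data)))        # X^T (X v)
             for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def project(data, v):
    # Reduce each sample to its coordinate along the top component.
    return [sum(r[j] * v[j] for j in range(len(v))) for r in data]

# Toy "expression vectors": variance is dominated by the first axis.
samples = [[2.0, 0.1], [4.0, -0.1], [-2.0, 0.0], [-4.0, 0.2]]
centered, _ = mean_center(samples)
pc1 = top_component(centered)
features = project(centered, pc1)
```

The projected `features` (one scalar per sample here, a handful of coordinates in practice) are what would be handed to the SVM classifier in the pipeline the abstract describes.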

8.
Research on a teaching-assistance system based on emotion modeling   (Total citations: 5; self-citations: 0; citations by others: 5)
A humanized teaching-assistance system centered on emotional interaction is designed, integrating affective computing, computer graphics and image techniques, and mobile Agent technology. On the basis of psychology and artificial psychology theory, an emotion cognition model is constructed; an emotion space, basic emotions, and basic learning mental states are defined, together with mappings from the basic emotions and learning mental states into the emotion space. Processing the recognized expressions through the emotion cognition model yields the student's learning mental state and a state evaluation value. The teaching-assistance system is implemented in software using image-processing-based face detection, expression recognition, and posture recognition.

9.
Design and implementation of an Agent-based multi-source log collection system   (Total citations: 2; self-citations: 0; citations by others: 2)
On the basis of an analysis of Agent technology and four log collection techniques, this paper proposes an Agent-based multi-source log collection system that gathers various types of system logs. The system's basic architecture and the structural model and workflow of the log collection Agent are analyzed in detail. Finally, regex-based recognition and extraction of log data and its formatting into XML are discussed.
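The regex extraction and XML formatting step mentioned above can be sketched as follows. The log format, field names, and pattern below are illustrative assumptions, not the paper's actual design: a syslog-like line is matched with named groups and normalized into an XML element.

```python
# Sketch: regex-based field extraction from a log line, then XML formatting.
import re
import xml.etree.ElementTree as ET

# Hypothetical pattern for a syslog-like record: timestamp, host, process, message.
LOG_PATTERN = re.compile(
    r"(?P<time>\w{3} +\d+ [\d:]+) (?P<host>\S+) (?P<proc>[\w\-/]+): (?P<msg>.*)"
)

def log_to_xml(line):
    m = LOG_PATTERN.match(line)
    if m is None:
        return None  # unrecognized format; a real collector would fall back
    entry = ET.Element("logEntry")
    for field, value in m.groupdict().items():
        ET.SubElement(entry, field).text = value
    return entry

line = "Mar  4 10:22:01 web01 sshd: Accepted password for admin"
elem = log_to_xml(line)
print(ET.tostring(elem, encoding="unicode"))
```

A per-source Agent would carry its own pattern(s) for the log types it collects, so adding a new source means adding a pattern rather than changing the collector.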

10.
Zhang Xun, Yin Dong. Computer Simulation (《计算机仿真》), 2006, 23(4): 144-146, 162
Advances in remote sensing image processing are significant for both civil and military applications, and multi-Agent systems, thanks to their intelligence and parallel computing capability, have been widely applied in many fields. To meet the need for adaptive, fast target recognition in remote sensing image processing, this paper applies a multi-Agent system to remote sensing target recognition and designs and implements a distributed target recognition system: several edge-detection-operator Agents detect edges, the resulting detection sequences are fed into a neural network Agent for classification, and the classification is fused with the output of a texture-statistics Agent to decide the target class. Experiments show that the system generalizes reasonably across image types and resolutions, has practical value, and is instructive for future research on and applications of multi-Agent systems in image processing.

11.
Computer simulated avatars and humanoid robots have an increasingly prominent place in today's world. Acceptance of these synthetic characters depends on their ability to properly and recognizably convey basic emotion states to a user population. This study presents an analysis of the interaction between emotional audio (human voice) and video (simple animation) cues. The emotional relevance of the channels is analyzed with respect to their effect on human perception and through the study of the extracted audio-visual features that contribute most prominently to human perception. As a result of the unequal level of expressivity across the two channels, the audio was shown to bias the perception of the evaluators. However, even in the presence of a strong audio bias, the video data were shown to affect human perception. The feature sets extracted from emotionally matched audio-visual displays contained both audio and video features while feature sets resulting from emotionally mismatched audio-visual displays contained only audio information. This result indicates that observers integrate natural audio cues and synthetic video cues only when the information expressed is in congruence. It is therefore important to properly design the presentation of audio-visual cues as incorrect design may cause observers to ignore the information conveyed in one of the channels.

12.
This paper proposes an emotion recognition system using a deep learning approach on emotional Big Data comprising speech and video. In the proposed system, a speech signal is first processed in the frequency domain to obtain a Mel-spectrogram, which can be treated as an image and fed to a convolutional neural network (CNN). For video signals, representative frames from a video segment are extracted and fed to a second CNN. The outputs of the two CNNs are fused using two consecutive extreme learning machines (ELMs), and the fused output is given to a support vector machine (SVM) for final classification of the emotions. The proposed system is evaluated on two audio-visual emotional databases, one of which is Big Data; experimental results confirm the effectiveness of the combination of CNNs and ELMs.
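The Mel-spectrogram front end mentioned above rests on the mel frequency scale. One common variant of the hertz-to-mel mapping (there are several in the literature; the paper does not specify which it uses) is mel = 2595 · log10(1 + f/700). The sketch below shows that conversion and the placement of filter-bank center frequencies equally spaced in mel:

```python
# Sketch: mel-scale conversion underlying a Mel-spectrogram front end.
import math

def hz_to_mel(f):
    # One common variant of the mapping; others differ slightly in constants.
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_centers(f_min, f_max, n_filters):
    """Center frequencies (Hz) of n_filters triangular filters,
    equally spaced on the mel scale between f_min and f_max."""
    lo, hi = hz_to_mel(f_min), hz_to_mel(f_max)
    step = (hi - lo) / (n_filters + 1)
    return [mel_to_hz(lo + step * (i + 1)) for i in range(n_filters)]

# e.g. 26 filters over 0-8 kHz, a typical speech front-end configuration
centers = mel_centers(0.0, 8000.0, 26)
```

Pooling the short-time power spectrum through these filters, frame by frame, yields the 2-D Mel-spectrogram that the abstract treats as an image input to the CNN.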

13.
In this paper, real-time multimedia applications are integrated into a Web-based robotic telecare system. Using the Java Media Framework (JMF), a video/audio conference system is developed to improve interaction among caregivers, and the privilege to operate the remote robot is switched among them through a centralized management server. The conference system is modeled on the profile of an H.323 multipoint control unit (MCU). Access control guarantees the confidentiality of telecare users, and different telecare services are provided to meet the requirements of various users. The whole telecare system is verified on a campus network.

14.
High-speed synchronous/asynchronous hybrid multiplexing using the Advanced Orbiting Systems recommendation   (Total citations: 4; self-citations: 0; citations by others: 4)
In space data transmission systems, the transport of audio and video data is increasingly important. Audio and video demand isochronous, real-time delivery, and video data is high-rate and voluminous, so the traditional practice of dedicating a separate channel to audio/video wastes substantial resources; multiplexing video, audio, and asynchronous data onto one channel improves channel utilization. For this task, following the Advanced Orbiting Systems (AOS) recommendation of the Consultative Committee for Space Data Systems (CCSDS), a two-level multiplexing scheme and a virtual channel scheduling algorithm are proposed to satisfy both the isochrony required by video and audio transmission and the flexibility required by asynchronous data transmission, and a demonstration system is designed.
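The virtual-channel scheduling idea above can be sketched with a simple priority rule. The channel names and the rule itself are illustrative, not the paper's exact algorithm: isochronous audio/video virtual channels are served first in each frame slot, and the asynchronous-data channel fills whatever slots remain idle.

```python
# Sketch: priority scheduling of AOS-style virtual channels per frame slot.
from collections import deque

def schedule(slots, queues, iso_order=("video", "audio"), async_vc="data"):
    """Return the virtual-channel id emitted in each of `slots` frame slots."""
    emitted = []
    for _ in range(slots):
        for vc in iso_order:            # isochronous channels get precedence
            if queues[vc]:
                emitted.append(vc)
                queues[vc].popleft()
                break
        else:                           # no isochronous frame pending
            if queues[async_vc]:
                emitted.append(async_vc)  # asynchronous data fills the slot
                queues[async_vc].popleft()
            else:
                emitted.append("idle")    # nothing pending: fill frame
    return emitted

queues = {"video": deque(["v1", "v2"]), "audio": deque(["a1"]),
          "data": deque(["d1", "d2", "d3"])}
emitted = schedule(6, queues)
print(emitted)
```

Because video and audio frames always preempt the slot, their delivery stays isochronous, while asynchronous data opportunistically consumes the leftover bandwidth instead of a dedicated channel.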

15.

This paper presents Scene2Wav, a novel deep convolutional model for generating music from emotionally annotated video. This matters because, paired with appropriate audio, the resulting music video can enhance its emotional effect on viewers; the challenge lies in transforming the video into the audio domain and generating music. The Scene2Wav encoder uses a convolutional sequence encoder to embed dynamic emotional visual features from low-level features in the colour space, namely Hue, Saturation and Value. The Scene2Wav decoder is a conditional SampleRNN that uses this emotional visual feature embedding as the condition for generating novel emotional music. The entire model is fine-tuned end-to-end to generate a music signal that evokes the intended emotional response in the listener. By taking the emotional and generative aspects into consideration, this work contributes to the field of Human-Computer Interaction and is a stepping stone towards an AI movie or drama director able to automatically generate appropriate music for trailers and movies. Experimental results show that the model generates music that users prefer over the baseline model's output and that evokes the intended emotions.


16.
A video remix is generally created by arranging selected video clips and combining them with other media streams such as audio clips and video transition effects. This paper proposes a system for semi-automatically creating video remixes of good expressive quality. Given multiple original video clips, audio clips, and transition effects as the input, the proposed system generates a video remix by five processes: I) video clip sequence generation, II) audio clip selection, III) audio boundary extraction, IV) video segment extraction, and V) transition effect selection, based on the spatial and temporal structural patterns automatically learned from professionally created video remix examples. Experiments using movie trailers of action genre as video remix examples not only demonstrate that video remixing by professionals can be imitated based on examples but also reveal that the video clip sequence generation and audio clip selection are the most important processes to improve the perceived expressive quality of video remixes.

17.
Using audio and video decoders together with an associated audio/video streaming scheme, the system encodes, decodes, and recombines audio/video stream data between the host and the subordinate device, and a method for synchronization correction during playback is given, achieving accurate playback synchronization between the two devices when a high-quality external audio device is used. The system has been implemented on the Android platform; the real-time audio/video demultiplexing and synchronized playback method accurately synchronizes video playback on the smart terminal with high-quality external audio playback, and a new audio/video file is successfully recombined on the host.

18.
Research on synchronized transmission of audio/video streams and screen streams   (Total citations: 1; self-citations: 0; citations by others: 1)
Addressing the difficulty of transmitting audio/video streams over a network in synchrony, this paper analyzes the characteristics of audio/video and computer-screen stream transmission in Internet-based online teaching, proposes a synchronization algorithm for MPEG-4 audio/video streams and screen streams, and implements a complete synchronized transmission scheme based on the Real-time Transport Protocol in a simulation system. Verification with the developed system shows that the approach effectively solves the problem of broadcasting the teacher's audio and video together with the screen of the teaching machine in synchrony, with good practical results.

19.
Audio-video synchronization in an MPEG-4 streaming media system   (Total citations: 8; self-citations: 0; citations by others: 8)
Audio-video synchronization is essential in streaming media systems. This paper first briefly introduces the relevant MPEG-4 concepts, then analyzes the factors affecting audio-video synchronization in streaming systems and proposes a corresponding solution; finally, the implementation of an MPEG-4-based video surveillance system is presented.
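A common correction that synchronization schemes like the one above rely on can be sketched as follows. The threshold value and names are illustrative, not the paper's design: the audio clock serves as the master, and each video frame is shown, delayed, or dropped depending on how far its presentation timestamp (PTS) drifts from the audio clock.

```python
# Sketch: audio-master audio/video synchronization decision per video frame.
SYNC_THRESHOLD = 0.040   # seconds; roughly one frame period at 25 fps

def sync_action(video_pts, audio_clock, threshold=SYNC_THRESHOLD):
    drift = video_pts - audio_clock
    if drift > threshold:
        return "wait"    # video ahead of audio: delay presentation
    if drift < -threshold:
        return "drop"    # video behind audio: skip this frame to catch up
    return "show"        # within tolerance: present immediately

# Three frames against an audio clock at t = 1.000 s
actions = [sync_action(pts, 1.000) for pts in (0.900, 0.990, 1.100)]
```

Audio is usually chosen as the master because listeners notice audio glitches far more readily than an occasional dropped or repeated video frame.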

20.
Video interaction is a difficult point in the design and application of Internet-based distance learning systems. This paper presents a practical model for video interaction in a Web environment, describes the working principle, components, and concrete design of the Web-IVS system built on this model, and reports its actual use in online teaching.
