
Funding: National Natural Science Foundation of China (61671365).
Received: 2019-01-22
Revised: 2019-04-03

Two-stream CNN for action recognition based on video segmentation
WANG Ping, PANG Wenhao. Two-stream CNN for action recognition based on video segmentation[J]. Journal of Computer Applications, 2019, 39(7): 2081-2086.
Authors: WANG Ping, PANG Wenhao
Affiliation: School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
Abstract: To address the low action recognition accuracy of the original spatial-temporal two-stream Convolutional Neural Network (CNN) model on long and complex videos, a two-stream CNN for action recognition based on video segmentation was proposed. First, a video was split into multiple non-overlapping segments of equal length. From each segment, one frame was randomly sampled to represent its static features, and stacked optical flow images were computed to represent its motion features. These two types of images were then fed into the spatial and temporal CNNs, respectively, for feature extraction, and the class prediction features of the spatial and temporal domains were obtained by fusing the segment features within each stream. Finally, the predictive features of the two streams were integrated to obtain the action recognition result for the video. In a series of experiments, data augmentation techniques and transfer learning schemes were discussed to alleviate the over-fitting caused by the shortage of training samples, and the effects of the number of segments, the pre-trained network, the segment feature fusion scheme, and the two-stream integration strategy on recognition performance were analyzed. The experimental results show that the proposed model reaches an action recognition accuracy of 91.80% on the UCF101 dataset, 3.8 percentage points higher than the original two-stream CNN model; its accuracy on the HMDB51 dataset is also improved over the original model, reaching 61.39%. These results indicate that the proposed model can better learn and represent human action features in long and complex videos.
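The pipeline described in the abstract (equal-length segmentation, random per-segment snippet sampling, within-stream fusion of segment scores, and two-stream integration) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and simple averaging / fixed weighted averaging stands in for the fusion and integration schemes that the paper actually compares.

```python
import random


def split_segments(num_frames, k):
    """Split frame indices 0..num_frames-1 into k equal-length,
    non-overlapping segments (any trailing remainder frames are dropped)."""
    seg_len = num_frames // k
    return [list(range(i * seg_len, (i + 1) * seg_len)) for i in range(k)]


def sample_snippets(num_frames, k, rng=random):
    """Randomly sample one frame index per segment (the sparse
    sampling step; the temporal stream would instead stack optical
    flow images computed around each sampled position)."""
    return [rng.choice(seg) for seg in split_segments(num_frames, k)]


def fuse_segment_scores(segment_scores):
    """Fuse per-segment class scores within one stream by averaging;
    the paper compares several such segment-fusion schemes."""
    k = len(segment_scores)
    num_classes = len(segment_scores[0])
    return [sum(s[c] for s in segment_scores) / k for c in range(num_classes)]


def integrate_streams(spatial_scores, temporal_scores, w_spatial=0.5):
    """Late fusion of the two streams' class scores with an assumed
    fixed weight; the stream weighting is also a tuned choice in the paper."""
    return [w_spatial * s + (1 - w_spatial) * t
            for s, t in zip(spatial_scores, temporal_scores)]
```

For example, a 300-frame video with k = 3 yields three 100-frame segments, one randomly chosen snippet per segment for each stream, and a final class-score vector from the weighted combination of the two streams' fused scores.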
Keywords: two-stream Convolutional Neural Network (CNN); action recognition; video segmentation; transfer learning; feature fusion
This article is indexed in databases including VIP (维普) and Wanfang Data (万方数据).
