Similar Documents
 Found 20 similar documents (search time: 15 ms)
1.
To address the difficulties of multimodal interaction and the poor retention of video semantic features that arise when most video question answering (VideoQA) models embed videos and questions into a shared space for answer reasoning, a video captioning mechanism is proposed to obtain a textual representation of video semantics, thereby avoiding cross-modal interaction. The proposed method passes video features through the captioning mechanism to produce a video description text, performs reading-comprehension-style interaction and analysis between the description-text features and the question features, and finally infers the answer. Test results on the MSVD-QA and MSRVTT-QA datasets show that the proposed model improves answer accuracy over existing models to varying degrees, indicating that the method handles the VideoQA task more effectively.
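As an illustration of the caption-then-answer idea, here is a minimal PyTorch sketch; the module name, dimensions, and the cross-attention fusion are my assumptions, not the paper's architecture.

```python
# Minimal sketch of a caption-then-read VideoQA pipeline: the video is first
# turned into caption text (not shown), and the QA step interacts only between
# caption-text features and question features. All names/sizes are assumptions.
import torch
import torch.nn as nn

class ReadingComprehensionQA(nn.Module):
    def __init__(self, dim=512, num_answers=1000):
        super().__init__()
        # Cross-attention: question tokens attend over caption tokens,
        # so all interaction stays in the text modality.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.classifier = nn.Linear(dim, num_answers)

    def forward(self, caption_tokens, question_tokens):
        # caption_tokens: (B, Lc, dim) features of the generated video caption
        # question_tokens: (B, Lq, dim) features of the question text
        fused, _ = self.cross_attn(question_tokens, caption_tokens, caption_tokens)
        return self.classifier(fused.mean(dim=1))   # (B, num_answers) logits

model = ReadingComprehensionQA()
caption = torch.randn(2, 20, 512)    # stand-in for caption-text features
question = torch.randn(2, 8, 512)    # stand-in for question features
print(model(caption, question).shape)  # torch.Size([2, 1000])
```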

2.
Video question answering (Video QA) involves a thorough understanding of video content and question language, as well as the grounding of textual semantics to the visual content of videos. Thus, to answer questions more accurately, not only should each semantic entity be associated with a certain visual instance in the video frames, but the action or event in the question should also be localized to a corresponding temporal slot. This is a more challenging task that requires reasoning over correlations between instances across temporal frames. In this paper, we propose an instance-sequence reasoning network for video question answering with instance grounding and temporal localization. In our model, both visual instances and textual representations are first embedded into graph nodes, which benefits intra- and inter-modality integration. We then propose graph causal convolution (GCC) on the graph-structured sequence with a large receptive field to capture more causal connections, which is vital for visual grounding and instance-sequence reasoning. Finally, we evaluate our model on the TVQA+ dataset, which contains ground truth for instance grounding and temporal localization, on three other Video QA datasets, and on three multimodal language processing datasets. Extensive experiments demonstrate the effectiveness and generalization of the proposed method. Specifically, our method outperforms the state-of-the-art methods on these benchmarks.
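To make the GCC idea concrete, here is an illustrative sketch of a "graph causal convolution" style layer: adjacency-based aggregation over graph nodes followed by a causal (left-padded, dilated) temporal convolution. This is my reading of the idea, not the paper's exact operator.

```python
# Sketch of a graph causal convolution: neighbor aggregation via a normalized
# adjacency, then a dilated Conv1d padded only on the left so each time step
# sees only past frames. Shapes and the trivial demo adjacency are assumptions.
import torch
import torch.nn as nn

class GraphCausalConv(nn.Module):
    def __init__(self, dim, kernel_size=3, dilation=2):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation   # left-only padding: causal
        self.conv = nn.Conv1d(dim, dim, kernel_size, dilation=dilation)

    def forward(self, x, adj):
        # x: (B, T, N, D) features of N graph nodes over T frames
        # adj: (N, N) normalized adjacency between instance/text nodes
        x = torch.einsum('btnd,nm->btmd', x, adj)        # aggregate neighbors
        B, T, N, D = x.shape
        h = x.permute(0, 2, 3, 1).reshape(B * N, D, T)   # (B*N, D, T)
        h = self.conv(nn.functional.pad(h, (self.pad, 0)))  # causal in time
        return h.reshape(B, N, D, T).permute(0, 3, 1, 2)

layer = GraphCausalConv(dim=64)
feats = torch.randn(2, 16, 10, 64)   # 10 nodes over 16 frames
adj = torch.eye(10)                  # trivial adjacency for the demo
print(layer(feats, adj).shape)       # torch.Size([2, 16, 10, 64])
```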

3.
Video question answering is a research hotspot in deep learning and is widely used in systems such as surveillance and advertising. Within an attention framework, a prior-MASK attention model is built: a Faster R-CNN model extracts video key frames and object labels from the video, three kinds of attention weighting are applied between these and the question text features, and a MASK screens out answers irrelevant to the question, which improves the interpretability of the model. Experimental results show that the model reaches 61% accuracy on the video question answering task and achieves faster prediction and better results than video question answering models such as VQA+ and SA+.
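A toy sketch of the MASK idea follows: attention scores between the question and candidate-answer features, with candidates judged irrelevant to the question masked out before the softmax. The function name, threshold, and relevance prior are assumptions.

```python
# Toy MASK-attention sketch: similarity scores between the pooled question
# vector and candidate-answer features; a relevance prior (e.g. from object
# labels) masks out irrelevant answers before normalization.
import torch

def masked_answer_attention(question_vec, answer_feats, relevance, thresh=0.2):
    # question_vec: (D,)   pooled question feature
    # answer_feats: (A, D) features of A candidate answers
    # relevance:    (A,)   prior question-answer relevance in [0, 1]
    scores = answer_feats @ question_vec                         # (A,)
    scores = scores.masked_fill(relevance < thresh, float('-inf'))  # MASK
    return torch.softmax(scores, dim=0)   # attention over remaining answers

q = torch.randn(128)
answers = torch.randn(5, 128)
prior = torch.tensor([0.9, 0.05, 0.6, 0.1, 0.8])  # hypothetical label prior
print(masked_answer_attention(q, answers, prior))  # masked entries get 0
```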

4.
Video summarization via exploring the global and local importance
Video summarization generates a short video containing the important or interesting content of a long video. It reduces the time required to analyze archived video by removing unnecessary video data. This work proposes a novel method to generate dynamic video summaries by fusing global importance and local importance based on multiple features and image quality. First, videos are split into several suitable video clips. Second, video frames are extracted from each clip, and the center parts of the frames are also extracted. Third, for each frame and its center part, global importance and local importance are calculated using a set of features and image quality. Finally, global and local importance are fused to select an optimal subset for generating the summary. Extensive experiments demonstrate that the proposed method generates high-quality video summaries.
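Here is a minimal numpy sketch of the fusion step, assuming per-frame scores already exist; the weighting, keep ratio, and the scores themselves are placeholders, not the paper's formulation.

```python
# Sketch of fusing a global and a local (center-crop) importance score per
# frame, modulating by image quality, and keeping the top-scoring frames.
import numpy as np

def summarize(global_scores, local_scores, quality, keep_ratio=0.15, alpha=0.6):
    # global_scores, local_scores, quality: (T,) per-frame values in [0, 1]
    fused = (alpha * global_scores + (1 - alpha) * local_scores) * quality
    k = max(1, int(len(fused) * keep_ratio))
    return np.sort(np.argsort(fused)[-k:])   # top-k frames, in temporal order

T = 200
rng = np.random.default_rng(0)
print(summarize(rng.random(T), rng.random(T), rng.random(T))[:10])
```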

5.
Deep-learning-based video super-resolution methods mainly focus on the spatio-temporal relations within and between video frames, but previous methods suffer from imprecise motion estimation and insufficient feature fusion when aligning and fusing frame features. To address these problems, a video super-resolution model based on an attention fusion network (AFN) was built using the back-projection principle combined with multiple attention mechanisms and fusion strategies. First, in the feature extraction stage, a back-projection structure obtains error feedback on motion information in order to handle the various motions between neighboring frames and the reference frame. Then, temporal, spatial, and channel attention fusion modules perform multi-dimensional feature mining and fusion. Finally, in the reconstruction stage, the resulting high-dimensional features are convolved to reconstruct high-resolution video frames. By learning different weights for intra- and inter-frame features, the model fully exploits the correlations between video frames, and an iterative network structure processes the extracted features progressively from coarse to fine. Experimental results on two public benchmark datasets show that AFN effectively handles videos containing multiple motions and occlusions and achieves large gains in quantitative metrics over several mainstream methods; for the 4x reconstruction task, the peak signal-to-noise ratio (PSNR) of frames produced by AFN is 13.2% higher than that of the frame-recurrent video super-resolution network (FRVSR) on the Vid4 dataset and 15.3% higher than that of the video super-resolution network with dynamic upsampling filters (VSR-DUF) on the SPMCS dataset.
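For a flavor of the attention fusion, here is a generic channel-plus-spatial attention block in PyTorch; this is a CBAM-like stand-in under my own assumptions, not the AFN modules themselves.

```python
# Generic channel + spatial attention sketch: channels are re-weighted from a
# pooled descriptor, then spatial positions are re-weighted by a 7x7 conv map.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):           # x: (B, C, H, W) aligned frame features
        x = x * self.channel(x)     # re-weight channels
        return x * self.spatial(x)  # re-weight spatial positions

att = ChannelSpatialAttention(64)
print(att(torch.randn(2, 64, 32, 32)).shape)  # torch.Size([2, 64, 32, 32])
```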

6.
Video text often contains highly useful semantic information that can contribute significantly to video retrieval and understanding. Video text can be classified into scene text and superimposed text. Most previous methods detect superimposed or scene text separately due to their different text alignments. Moreover, because characters of different languages have different edge and texture features, it is very difficult to detect multilingual text. In this paper, we first perform a detailed analysis of the motion patterns of video text and show that superimposed and scene text exhibit different motion patterns across consecutive frames, a cue that is insensitive to language and text alignment. Based on this analysis, we define the Motion Perception Field (MPF) to represent text motion patterns. Finally, we propose text detection algorithms using the MPF for both superimposed and scene text with multiple languages and multiple alignments. Experimental results on diverse videos demonstrate that our algorithms are robust and outperform previous methods for detecting both superimposed and scene text with multiple languages and multiple alignments.

7.
8.
Objective: In recent years, real-time portrait matting with neural networks has become a research hotspot in computer vision, but existing networks cannot yet meet real-time requirements on high-resolution video. This paper therefore proposes a background-aware real-time matting network for high-resolution video portraits. Method: A two-stage network consisting of a base network and a refinement network is presented. In the base network, an encoder module extracts multi-scale image features from each video frame, and a pyramid pooling module fuses these features as input to a recurrent decoder. In the recurrent decoder, residual gated recurrent units aggregate temporal information across consecutive frames to produce the alpha matte, the foreground residual, and a hidden feature map; the residual structure reduces the number of model parameters and improves the real-time performance of the network. To improve real-time matting on high-resolution images, the refinement network contains a high-resolution guidance module that uses high-resolution image information to guide the low-resolution result and produce a high-quality matte. Result: Experimental comparisons with recent related networks show that the proposed method outperforms existing approaches on the high-resolution Human2K dataset, improving the evaluation metrics (mean absolute difference, mean squared error, gradient, and connectivity) by 18.8%, 39.2%, 40.7%, and 20.9%, respectively. On an NVIDIA GTX 1080Ti GPU, it runs at 26 frame/s on 4K video and 43 frame/s on HD (high definition) video. Conclusion: The model handles real-time high-resolution portrait matting well and can better support applications such as film production, short-video social media, and video conferencing.
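To illustrate how a recurrent decoder can carry temporal information across frames, here is a compact ConvGRU cell; this is a generic sketch under my own assumptions, not the paper's residual gated recurrent unit.

```python
# Compact ConvGRU cell: convolutional update/reset gates and candidate state,
# rolled over a sequence of frame features to accumulate temporal context.
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.gates = nn.Conv2d(2 * ch, 2 * ch, 3, padding=1)  # update + reset
        self.cand = nn.Conv2d(2 * ch, ch, 3, padding=1)       # candidate state

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], 1))).chunk(2, dim=1)
        n = torch.tanh(self.cand(torch.cat([x, r * h], 1)))
        return (1 - z) * h + z * n                            # new hidden state

cell = ConvGRUCell(16)
h = torch.zeros(1, 16, 64, 64)
for _ in range(5):                        # roll over 5 video frames
    h = cell(torch.randn(1, 16, 64, 64), h)
print(h.shape)                            # torch.Size([1, 16, 64, 64])
```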

9.
Video thumbnails enable users to see quick snapshots of video collections. To display video thumbnails, the first frame, or a frame selected using simple low-level features, is typically set as the default thumbnail for each clip for the sake of computational efficiency and implementation simplicity. However, such methods often fail to represent the gist of the clip. To overcome this limitation, we present a new framework for both static and dynamic video thumbnail extraction. First, we formulate energy functions using features that incorporate mid-level information to obtain superior thumbnails. Since frames whose layouts are similar to other frames in the clip are considered relevant for thumbnail extraction, scene layouts are also included in the overall energy. For dynamic thumbnail generation, a time slot is determined by finding the duration with the minimum energy. Experimental results show that the proposed method achieves comparable performance on a variety of challenging videos, and a subjective evaluation demonstrates its effectiveness.
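A minimal sketch of the energy-minimizing selection follows: the static thumbnail is the frame with the lowest energy, and the dynamic thumbnail is the fixed-length window with the lowest mean energy. The per-frame energies here are placeholders.

```python
# Energy-minimizing thumbnail selection: argmin over frames (static) and a
# sliding-window mean via cumulative sums (dynamic time slot).
import numpy as np

def static_thumbnail(energy):
    return int(np.argmin(energy))              # best single frame

def dynamic_thumbnail(energy, win=30):
    c = np.concatenate([[0.0], np.cumsum(energy)])
    means = (c[win:] - c[:-win]) / win         # mean energy of every window
    start = int(np.argmin(means))
    return start, start + win                  # [start, end) time slot

energy = np.random.default_rng(1).random(300)  # stand-in per-frame energies
print(static_thumbnail(energy), dynamic_thumbnail(energy))
```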

10.
11.
李群  肖甫  张子屹  张锋  李延超 《软件学报》2022,33(9):3195-3209
Video summarization is an essential task in computer vision. Its goal is to summarize a video by selecting its most informative parts to generate a concise yet complete summary, typically a set of representative frames (video key frames) or a shorter video stitched from key segments in temporal order. Although considerable progress has been made in video summarization, existing methods lack temporal information and have incomplete feature representations, which easily harms the correctness and completeness of the summary. To address this, this paper proposes a spatio-temporal transformation network model consisting of three modules: an embedding layer, a feature transformation and fusion layer, and an output layer. The embedding layer embeds spatial and temporal features simultaneously, the feature transformation and fusion layer transforms and fuses multimodal features, and the output layer generates the summary through segment prediction and key-shot selection. Embedding spatial and temporal features separately compensates for existing models' weak temporal representation, while transforming and fusing multimodal features resolves the incompleteness of feature representations. Extensive experiments and analyses on two benchmark datasets verify the effectiveness of the model.
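The following sketch mirrors the three-module shape described above: per-frame spatial features plus a temporal embedding, a transformer encoder for fusion, and per-frame key-shot scores. Sizes and layer choices are assumptions, not the paper's configuration.

```python
# Sketch of embed -> transform/fuse -> score: spatial features are summed with
# a learned temporal embedding, fused by a transformer encoder, and scored
# per frame for key-shot selection.
import torch
import torch.nn as nn

class SummarizerSketch(nn.Module):
    def __init__(self, dim=256, max_len=1000):
        super().__init__()
        self.temporal = nn.Embedding(max_len, dim)           # temporal embedding
        enc = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.fuse = nn.TransformerEncoder(enc, num_layers=2)
        self.score = nn.Linear(dim, 1)                       # key-shot score

    def forward(self, spatial_feats):                        # (B, T, dim)
        t = torch.arange(spatial_feats.size(1), device=spatial_feats.device)
        x = spatial_feats + self.temporal(t)                 # spatial + temporal
        return self.score(self.fuse(x)).squeeze(-1)          # (B, T) scores

model = SummarizerSketch()
print(model(torch.randn(2, 120, 256)).shape)                 # torch.Size([2, 120])
```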

12.
刘璐,贾彩燕 《智能系统学报》2017,12(6):799-805
With the rise and rapid development of video-sharing websites, the number of videos on the Internet has grown explosively, and organizing and categorizing them has become fundamental to their effective use. Video clustering has grown increasingly popular because it only needs to consider the intrinsic cluster structure of the video data and requires no manual intervention. Existing video clustering methods include those based on the visual similarity of video key frames, those based on clustering video titles, and multimodal methods that fuse text and visual features. Title-based video clustering is widely used in industry for its simplicity and efficiency, but because titles are short texts with sparse semantics, its clustering quality is poor. This paper therefore proposes a video clustering method for social media platforms that fuses multiple video-related text sources to overcome the semantic sparsity caused by short titles. Experimental results with different text clustering algorithms demonstrate the effectiveness of the multi-source text fusion approach.
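A toy end-to-end sketch of multi-source text fusion follows; the example strings and the choice of TF-IDF plus k-means are assumptions, standing in for whatever text clustering algorithm is used.

```python
# Multi-source text fusion for video clustering: concatenate the title with
# other text sources (tags, description), vectorize, and cluster.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

videos = [
    {"title": "cat jumps", "tags": "pets funny cat", "desc": "a cat jumping off a sofa"},
    {"title": "NBA dunks", "tags": "sports basketball", "desc": "best dunks this week"},
    {"title": "kitten plays", "tags": "pets cat cute", "desc": "tiny kitten with a toy"},
    {"title": "soccer goals", "tags": "sports football", "desc": "top goals compilation"},
]
fused = [" ".join([v["title"], v["tags"], v["desc"]]) for v in videos]

X = TfidfVectorizer().fit_transform(fused)   # richer than titles alone
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # pet videos vs. sports videos
```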

13.
Query by video clip
Typical digital video search is based on queries involving a single shot. We generalize this problem by allowing queries that involve a video clip (say, a 10-s video segment). We propose two schemes. (i) Retrieval based on key frames follows the traditional approach of identifying shots, computing key frames from a video, and then extracting image features around the key frames. For each key frame in the query, a similarity value (using color, texture, and motion) is obtained with respect to the key frames in the database video. Consecutive key frames in the database video that are highly similar to the query key frames are then used to generate the set of retrieved video clips. (ii) In retrieval using sub-sampled frames, we uniformly sub-sample the query clip as well as the database video. Retrieval is based on matching color and texture features of the sub-sampled frames. Initial experiments on two video databases (a basketball video with approximately 16,000 frames and a CNN news video with approximately 20,000 frames) show promising results. Additional experiments using segments from one basketball video as queries and a different basketball video as the database show the effectiveness of the feature representation and matching schemes.
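Here is a minimal sketch of the sub-sampled-frame variant, assuming per-frame feature vectors stand in for the color/texture features: sub-sample both streams, slide the query over the database, and keep the best-matching window.

```python
# Sub-sampled frame matching: uniform sub-sampling, then a sliding window of
# mean cosine similarity between query and database frame features.
import numpy as np

def subsample(feats, step=10):
    return feats[::step]

def best_clip(query, database, step=10):
    q, db = subsample(query, step), subsample(database, step)
    best, best_pos = -np.inf, 0
    for s in range(len(db) - len(q) + 1):
        w = db[s:s + len(q)]
        sim = np.mean(np.sum(q * w, axis=1) /
                      (np.linalg.norm(q, axis=1) * np.linalg.norm(w, axis=1)))
        if sim > best:
            best, best_pos = sim, s
    return best_pos, best

rng = np.random.default_rng(2)
db = rng.random((2000, 64))    # database video frame features
query = db[500:800]            # a 300-frame query clip cut from the database
# best_pos is in sub-sampled index space: 50 here corresponds to frame 500
print(best_clip(query, db))
```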

14.
Video anomaly detection aims to identify events that do not conform to expected behavior. Many current methods detect anomalies via reconstruction error, but the strong capacity of deep neural networks may allow abnormal behavior to be reconstructed as well, contradicting the assumption that anomalies yield large reconstruction errors. Methods that instead predict future frames have achieved good results, but most of them do not account for the diversity of normal samples or cannot model the correlation between consecutive video frames. To solve this problem, a temporal multi-scale...
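As a tiny illustration of the future-frame-prediction idea this abstract builds on, the sketch below scores anomalies by prediction error (via PSNR): the worse the prediction of the next frame, the more anomalous the event. The predictor itself and the scoring convention are placeholders.

```python
# Prediction-error anomaly scoring: compute PSNR between a predicted and an
# actual next frame; low PSNR (here negated, so a high score) flags anomalies.
import numpy as np

def anomaly_score(predicted, actual, max_val=1.0):
    mse = np.mean((predicted - actual) ** 2)
    psnr = 10 * np.log10(max_val ** 2 / (mse + 1e-12))
    return -psnr   # higher score = more anomalous

rng = np.random.default_rng(6)
frame = rng.random((64, 64))
print(anomaly_score(frame + 0.01 * rng.standard_normal(frame.shape), frame))
print(anomaly_score(rng.random((64, 64)), frame))  # larger for unrelated frame
```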

15.
16.
Video text is closely related to video content, and video text information can facilitate content-based video analysis, indexing, and retrieval. Video sequences are usually compressed before storage and transmission, and a basic step for text-based applications is text detection and localization. In this paper, an overlaid text detection and localization method is proposed for H.264/AVC compressed videos using the integer discrete cosine transform (DCT) coefficients of intra frames. The main contributions of this paper are twofold: 1) coarse text block detection using thresholds adaptive to block size and quantization parameter; and 2) text line localization according to the characteristics of text in intra frames of the H.264/AVC compressed domain. Comparisons are made with a pixel-domain text detection method for H.264/AVC compressed video. Text detection results on five H.264/AVC video sequences of various qualities show the effectiveness of the proposed method.
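A toy numpy sketch of the coarse stage follows: mark blocks whose AC (non-DC) transform-coefficient energy exceeds a threshold, since overlaid text tends to produce strong high-frequency coefficients. The fixed threshold is a placeholder; the paper adapts it to block size and quantization parameter.

```python
# Coarse text-block detection in the transform domain: per 4x4 block, compute
# DCT coefficients and flag blocks with high AC energy.
import numpy as np
from scipy.fft import dctn

def text_block_mask(gray, block=4, thresh=500.0):
    H, W = gray.shape
    mask = np.zeros((H // block, W // block), dtype=bool)
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            coeffs = dctn(gray[i:i + block, j:j + block], norm='ortho')
            ac_energy = np.sum(coeffs ** 2) - coeffs[0, 0] ** 2  # drop DC term
            mask[i // block, j // block] = ac_energy > thresh
    return mask

frame = np.random.default_rng(3).integers(0, 255, (64, 64)).astype(float)
print(text_block_mask(frame).sum(), "candidate text blocks")
```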

17.
Objective: Deepfakes are an emerging technique that uses deep learning to tamper with images and videos; forgeries targeting face videos in particular pose a great threat to society and individuals. Detection methods that exploit temporal or multi-frame information are still at an early stage, and existing work often ignores how frames are extracted from the video and what this means for detection accuracy and efficiency. For face-swap forgery videos, this paper proposes an efficient detection framework that extracts per-frame features from multiple key frames and models the interactions between them. Method: A fixed number of key frames are extracted directly from the video stream, avoiding inter-frame decoding; a convolutional neural network maps each frame's face image into a shared feature space; multiple self-attention-based encoder layers with linear and nonlinear transformations let each frame's features aggregate information from the other frames for learning and updating, extracting the anomalies that tampered frames exhibit in feature space; and an additional indicator aggregates global information to make the final detection decision. Result: The framework achieves at least 96.79% detection accuracy on the three face-swap datasets of FaceForensics++ and 99.61% on the Celeb-DF dataset. Comparative timing experiments also confirm the efficiency gains of using key frames as samples and the efficiency of the proposed framework. Conclusion: By extracting key frames, the proposed framework reduces the computational cost and time of video-level detection; mapping each face image into a feature space with a convolutional neural network and letting frame features attend to one another through self-attention yields discriminative information, making detection more accurate and the overall process more efficient.
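The sketch below matches the described shape: per-frame CNN embeddings, self-attention encoder layers over the key-frame sequence, and an extra learnable indicator token (CLS-style) that aggregates global information for the real/fake decision. The ResNet backbone and sizes are assumptions, not the paper's choices.

```python
# Key-frame deepfake detector sketch: CNN per frame, transformer encoder over
# frames, decision read from a prepended indicator token.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class KeyframeDeepfakeDetector(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()                 # per-frame 512-d embedding
        self.backbone = backbone
        self.indicator = nn.Parameter(torch.zeros(1, 1, dim))
        enc = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=4)
        self.head = nn.Linear(dim, 2)               # real vs. fake

    def forward(self, frames):                      # (B, T, 3, H, W) key frames
        B, T = frames.shape[:2]
        f = self.backbone(frames.flatten(0, 1)).view(B, T, -1)
        x = torch.cat([self.indicator.expand(B, -1, -1), f], dim=1)
        return self.head(self.encoder(x)[:, 0])     # decide from the indicator

model = KeyframeDeepfakeDetector()
print(model(torch.randn(2, 8, 3, 112, 112)).shape)  # torch.Size([2, 2])
```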

18.
The amount of video content (VC) generated has grown enormously in recent years, and video summarization (VS) has been introduced to manage it. Prevailing VS techniques attempt to produce summaries but suffer from long execution time (ET) and condense video content in a domain-specific manner. To overcome these drawbacks, this paper proposes an efficient VS method for surveillance systems using normalized k-means together with a quick sort step. The proposed technique comprises eight stages: splitting the video into frames, pre-sampling, assigning ID numbers, feature extraction, feature selection (FS), clustering, frame extraction, and summary generation. First, the video frames are pre-sampled with the proposed Three Step Cross Searching (TSCS) algorithm, and each frame is assigned an ID number. Features are then extracted from the frames, and the necessary ones are selected using the Entropy-based Spider Monkey Algorithm (ESMA). Next, the features are grouped with the Normalized K-Means (N-Kmeans) algorithm to identify the best candidate frames, and key frames (KF) are selected from each cluster based on the minimum distance value. Finally, the summary is put in temporal order using quick sort. In the experimental evaluation, the proposed method is compared with prevailing methods and gives better results than existing approaches.
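A minimal sketch of the clustering stage follows, using scikit-learn as a stand-in: normalize frame features, run k-means, take the frame nearest each centroid as a key frame, and sort the picks by frame index (the role the quick sort plays here). Feature values and the cluster count are assumptions.

```python
# Normalized k-means key-frame selection with a final chronological sort.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

def keyframes(features, k=5):
    X = normalize(features)                      # "normalized" k-means input
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    picks = []
    for c in range(k):                           # nearest frame to each centroid
        d = np.linalg.norm(X - km.cluster_centers_[c], axis=1)
        d[km.labels_ != c] = np.inf
        picks.append(int(np.argmin(d)))
    return sorted(picks)                         # summary in temporal order

feats = np.random.default_rng(4).random((400, 64))  # one row per sampled frame
print(keyframes(feats))
```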

19.
An efficient video retrieval system is essential for searching relevant video content in a large set of video clips, which typically contains many heterogeneous clips to match against. In this paper, we introduce a content-based video matching system that finds the most relevant video segments in a video database for a given query clip. Finding relevant video clips is not a trivial task, because objects in a clip can constantly move over time. To perform this task efficiently, we propose a novel matching scheme called Spatio-Temporal Pyramid Matching (STPM). Considering the features of objects in 2D space and time, STPM recursively divides a video clip into a 3D spatio-temporal pyramidal space and compares features at different resolutions. To improve retrieval performance, we consider both static and dynamic object features, and we provide a sufficient condition under which the matching gains additional benefit from temporal information. The experimental results show that STPM performs better than other video matching methods.
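The sketch below conveys the pyramid-matching idea in simplified form: histogram features at several resolutions compared with histogram intersection, with finer levels weighted more as in standard spatial pyramid matching. For brevity it subdivides only the temporal axis; the full method divides space and time, and the level weights here are an assumption.

```python
# Simplified spatio-temporal pyramid matching: per-level histogram
# intersection over temporal chunks, weighted toward finer levels.
import numpy as np

def stpm_similarity(a, b, levels=3, bins=8):
    # a, b: (T, H, W) maps of quantized feature labels
    total = 0.0
    for l in range(levels):
        cells = 2 ** l
        w = 1.0 / 2 ** (levels - l)              # finer levels weigh more
        for ca, cb in zip(np.array_split(a, cells, axis=0),
                          np.array_split(b, cells, axis=0)):
            ha, _ = np.histogram(ca, bins=bins, range=(0, bins))
            hb, _ = np.histogram(cb, bins=bins, range=(0, bins))
            total += w * np.minimum(ha, hb).sum()  # histogram intersection
    return total

rng = np.random.default_rng(5)
x = rng.integers(0, 8, (32, 16, 16))
y = rng.integers(0, 8, (32, 16, 16))
print(stpm_similarity(x, x) >= stpm_similarity(x, y))  # True: self-match wins
```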

20.
Robust video action recognition is a challenging task due to its complexity, and effectively extracting robust spatio-temporal features is key to solving it. This paper proposes using bidirectional long short-term memory (Bi-LSTM) as the main framework to capture the bidirectional spatio-temporal features of video sequences. First, to strengthen the feature representation, multi-layer convolutional neural network features replace traditional hand-crafted features; the multi-layer convolutional features fuse low-level shape information with high-level semantic information and capture rich spatial information. The extracted convolutional features are then fed into the Bi-LSTM, which contains two LSTM layers running in opposite directions: the forward layer captures the video's evolution from beginning to end, and the backward layer models it in reverse. Finally, the representations from both directions are fused into a Softmax layer to obtain the classification result. Experimental results on the UCF101 and HMDB51 datasets show that the method achieves good performance on action recognition.
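A minimal sketch of the described pipeline follows: a bidirectional LSTM over per-frame CNN features, with the two directions fused into a classifier. The feature dimension and class count are placeholders for UCF101-style input.

```python
# Bi-LSTM action recognition sketch: forward and backward LSTM states are
# concatenated per frame, pooled over time, and classified.
import torch
import torch.nn as nn

class BiLSTMActionRecognizer(nn.Module):
    def __init__(self, feat_dim=512, hidden=256, num_classes=101):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)  # fwd + bwd states

    def forward(self, cnn_feats):                  # (B, T, feat_dim)
        out, _ = self.lstm(cnn_feats)              # (B, T, 2*hidden)
        return self.classifier(out.mean(dim=1))    # logits; softmax in the loss

model = BiLSTMActionRecognizer()
print(model(torch.randn(4, 16, 512)).shape)        # torch.Size([4, 101])
```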
