Similar Literature
20 similar documents retrieved (search time: 125 ms)
1.
李爱兴  于峰崎 《电视技术》2011,35(21):112-116
This paper presents an object- and semantics-based retrieval system for surveillance video compressed in the H.264 format, consisting of two main modules: video analysis and video retrieval. The video analysis module decodes the H.264 compressed-domain video and performs video object extraction, segmentation, feature extraction, and object matching, storing the resulting video objects and their feature information in an object feature database. On the retrieval side, users query the feature database directly by entering semantic information or providing an example image, so the video never has to be reprocessed. Experiments show that the proposed system supports fast and effective video retrieval.
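A minimal sketch of the retrieval-module idea: query a pre-built object feature database with a feature vector taken from an example image, so the video itself never has to be re-analyzed. The feature layout, database contents, and distance measure below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical object-feature database: one feature vector per extracted video object,
# produced offline by the analysis module (decoding, segmentation, feature extraction).
feature_db = {
    "obj_001": {"video": "cam01.h264", "frame": 1520, "feat": np.array([0.12, 0.80, 0.33, 0.05])},
    "obj_002": {"video": "cam01.h264", "frame": 2310, "feat": np.array([0.90, 0.10, 0.47, 0.61])},
    "obj_003": {"video": "cam02.h264", "frame": 412,  "feat": np.array([0.15, 0.78, 0.30, 0.09])},
}

def query_by_example(query_feat, db, top_k=2):
    """Return the top-k stored objects closest to the query feature (Euclidean distance)."""
    scored = [(np.linalg.norm(rec["feat"] - query_feat), oid, rec) for oid, rec in db.items()]
    scored.sort(key=lambda x: x[0])
    return scored[:top_k]

# Feature vector assumed to have been extracted from the user's example image.
example = np.array([0.14, 0.79, 0.31, 0.07])
for dist, oid, rec in query_by_example(example, feature_db):
    print(f"{oid}: video={rec['video']} frame={rec['frame']} distance={dist:.3f}")
```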

2.
张良  周长胜 《电子科技》2011,24(10):111-114
This paper analyzes the differences between video data and text data and the problems video data poses for analysis and retrieval. Starting from current research hotspots in video content analysis, it examines and compares techniques for video semantic libraries, low-level video features relevant to video analysis, video object segmentation and recognition, and video information description and coding, and then proposes a framework and workflow for video semantic analysis.

3.
Design of a Semantics-Based News Video Retrieval System   Cited by: 1 (self-citations: 1, external citations: 0)
This paper analyzes a hierarchical model of news video semantics, proposes an architecture for a semantics-based news video retrieval system, and describes in detail the functional modules of the news video semantic analysis and extraction subsystem. It also presents the construction of a semantic dictionary for the query subsystem and, finally, implements semantics-based news video retrieval.

4.
邢玲  马强  胡金军 《电子学报》2016,44(10):2357-2363
To address the semantic gap that arises at different levels of video content management, this paper proposes a UCL (Uniform Content Locater)-based video semantic description framework covering three levels of semantics: content semantics, control semantics, and physical attribute information. Video scenes are segmented according to the spatio-temporal similarity of the video content. For each scene, local texture complexity, background brightness, and scene complexity are combined to select the most suitable reference frames (I-frames) and non-reference frames (non-I-frames) for embedding different semantic information: control semantics and physical attribute information are embedded in I-frames, and content semantics in non-I-frames. A digital semantic watermarking technique then realizes semantic management of the video content, so that the semantic information and the carrier signal are transmitted and stored together. The watermarking method was verified with the JM reference model; results show that it is robust and does not noticeably degrade video quality.
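The frame-selection and routing step can be pictured roughly as below: candidate frames are scored with simple stand-ins for local texture complexity, background brightness, and scene complexity, and the different UCL semantic layers are routed to the best I-frame or non-I-frame. The scoring formulas, weights, and routing rule are assumptions for illustration; the actual watermark embedding in the JM reference model is not reproduced.

```python
import numpy as np

def frame_score(frame, w_texture=0.5, w_bright=0.3, w_scene=0.2):
    """Toy suitability score for embedding: higher texture complexity and mid-range
    brightness are assumed to hide a watermark better (weights are illustrative)."""
    texture = np.std(frame) / 255.0                              # crude texture-complexity proxy
    brightness = 1.0 - abs(frame.mean() - 128) / 128.0           # prefer mid-range brightness
    scene = np.abs(np.diff(frame.astype(float), axis=1)).mean() / 255.0  # edge-density proxy
    return w_texture * texture + w_bright * brightness + w_scene * scene

def route_semantics(frames, frame_types):
    """Assign semantic layers: control semantics and physical attributes to the best-scoring
    I-frame, content semantics to the best-scoring non-I-frame."""
    i_frames   = [i for i, t in enumerate(frame_types) if t == "I"]
    non_i      = [i for i, t in enumerate(frame_types) if t != "I"]
    best_i     = max(i_frames, key=lambda i: frame_score(frames[i]))
    best_non_i = max(non_i,    key=lambda i: frame_score(frames[i]))
    return {"control+physical": best_i, "content": best_non_i}

# Example: four random 64x64 luminance frames with an I-P-P-I pattern.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, size=(64, 64), dtype=np.uint8) for _ in range(4)]
print(route_semantics(frames, ["I", "P", "P", "I"]))
```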

5.
Video Retrieval Based on Global Motion Information   Cited by: 17 (self-citations: 0, external citations: 17)
俞天力  章毓晋 《电子学报》2001,29(Z1):1794-1798
Motion is an important cue for describing video content. This paper presents a video retrieval system based on global motion information. Through short-term global motion analysis of the video data, the system extracts a fairly accurate bilinear motion model describing the global camera motion; the model parameters then serve as motion features, and a retrieval scheme is constructed based on ordered matching of feature-point sequences and the sum of squared distances between global motion vectors. Experimental results show that in specific application domains, such as sports video, the scheme achieves a degree of semantic content retrieval and provides retrieval capabilities that other image features cannot offer.
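A rough sketch of the retrieval distance described above: each shot is represented by an ordered sequence of global-motion (camera-motion) parameter vectors, and two shots are compared by summing squared differences over the aligned sequence. The parameter layout and the simple frame-by-frame alignment are assumptions for illustration.

```python
import numpy as np

def global_motion_distance(seq_a, seq_b):
    """Sum of squared differences between two ordered sequences of global-motion
    parameter vectors; the shorter sequence defines the overlap that is compared."""
    n = min(len(seq_a), len(seq_b))
    a, b = np.asarray(seq_a[:n]), np.asarray(seq_b[:n])
    return float(np.sum((a - b) ** 2))

# Each row is one frame's (assumed) bilinear-model parameter vector.
query_shot = np.array([[0.1, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0]] * 5)
pan_shot   = query_shot + 0.05                                          # similar camera motion
zoom_shot  = np.tile([0.0, 0.0, 1.2, 0.0, 0.0, 1.2, 0.0, 0.0], (5, 1))  # different motion

print("distance to pan-like shot :", global_motion_distance(query_shot, pan_shot))
print("distance to zoom-like shot:", global_motion_distance(query_shot, zoom_shot))
```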

6.
张起贵  陈瑜 《电视技术》2014,38(5):164-168,177
Building on advances in embedded technology and mobile communication networks, this paper proposes a wireless intelligent video surveillance solution based on 3G networks and the HiSilicon Hi3515. It presents the overall system design, describes the system architecture and the working principles of each unit, and focuses on the hardware and software design of the intelligent surveillance front end. The system integrates intelligent video analysis and intelligent control, transmits only the useful information of a surveillance event, allows flexible configuration of the recipients of surveillance information, and selects the transmission mode according to the user's control commands. The PC monitoring side can retrieve the corresponding electronic map and surrounding information from the front-end surveillance video data and supports multi-channel monitoring.

7.
Current Status and Development Trends of Network Video Surveillance Systems   Cited by: 2 (self-citations: 0, external citations: 2)
This paper reviews the development history, current status, system architecture, and functional characteristics of video surveillance systems; proposes an architecture for mobile video surveillance over GSM networks and an IPv6-based surveillance system; and discusses trends in applying video surveillance to digital home networks.

8.
An Internet-Based Distributed Video Surveillance System for Water Conservancy   Cited by: 4 (self-citations: 0, external citations: 4)
李吉庆 《电视技术》2004,(11):84-86
Building on third-generation video surveillance systems, this paper proposes an Internet-based remote video surveillance system for water conservancy that integrates geographically distributed water-conservancy monitoring systems into a hierarchical whole over the network, suitable for multi-user, multi-region monitoring.

9.
This paper proposes a construction scheme for video surveillance systems based on EPON access technology and compares it with the conventional switch-based access scheme. The analysis shows that the EPON-based scheme offers clear advantages in investment cost, ease of capacity expansion, and optical cable resource usage, providing a useful reference for telecom operators choosing a video surveillance deployment scheme.

10.
This paper presents the information fusion design of an intelligent video surveillance system based on multi-sensor information fusion. To reduce the false alarm and missed detection rates of any single sensor, multiple sensors of the surveillance system, such as human-body infrared detectors and video cameras, are integrated over a network, the different descriptive information they collect undergoes effective feature extraction and transmission, and repeated simulation experiments are run with a fuzzy-neural-network multi-sensor fusion algorithm. The system achieves, for the monitored area, ...
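The fusion idea can be illustrated with a much-simplified stand-in: map each sensor's raw reading to a confidence with a ramp membership function and combine the confidences into one alarm decision. The cited system learns this mapping with a fuzzy neural network; the membership ranges, weights, and threshold below are invented for illustration.

```python
def fuzzy_membership(x, low, high):
    """Simple ramp membership function mapping a raw sensor reading to [0, 1]."""
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)

def fuse_alarm(pir_reading, motion_pixels, alarm_threshold=0.6):
    """Fuse evidence from two sensors into one alarm confidence (weights are illustrative;
    the cited system learns this mapping with a fuzzy neural network instead)."""
    pir_conf    = fuzzy_membership(pir_reading, low=0.2, high=0.8)   # human-infrared sensor
    motion_conf = fuzzy_membership(motion_pixels, low=50, high=500)  # moving-pixel count from video
    fused = 0.5 * pir_conf + 0.5 * motion_conf
    return fused, fused >= alarm_threshold

print(fuse_alarm(pir_reading=0.75, motion_pixels=320))  # both sensors agree
print(fuse_alarm(pir_reading=0.10, motion_pixels=30))   # single weak trigger
```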

11.
董远  张纪伟  赵楠  常晓夫  刘巍 《中国通信》2012,9(8):105-121
The rapid growth of multimedia content necessitates powerful technologies to filter, classify, index and retrieve video documents more efficiently. However, the essential bottleneck of image and video analysis is the semantic gap: low-level features extracted by computers often fail to coincide with the high-level concepts interpreted by humans. In this paper, we present a generic scheme for detecting video semantic concepts based on machine learning over multiple visual features. Various global and local low-level visual features are systematically investigated, and a kernel-based learning method equips the concept detection system to exploit the potential of these features. We then combine the different features and sub-systems through fusion at both the classifier level and the kernel level, which yields a more robust system. The proposed system is tested on the TRECVID dataset. The resulting Mean Average Precision (MAP) score is much better than the benchmark performance, which shows that our concept detection engine forms a generic model and performs well on both object-type and scene-type concepts.
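A small sketch of the two fusion strategies on toy data: kernel-level fusion averages the per-feature kernel matrices before training a single SVM, while classifier-level fusion averages the scores of per-feature SVMs. The synthetic features, kernel choices, and equal weights are assumptions; this is not the paper's TRECVID setup.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, chi2_kernel

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=60)                      # presence/absence of one concept
# Two feature views of the same keyframes (e.g. colour and texture), each with weak signal.
X_color   = rng.random((60, 16)) + 0.3 * y[:, None]
X_texture = rng.random((60, 32)) + 0.3 * y[:, None]
train, test = slice(0, 40), slice(40, 60)

# Kernel-level fusion: average the per-feature kernel matrices, then train one SVM.
K_color   = chi2_kernel(X_color)                     # chi-square kernel, common for histograms
K_texture = rbf_kernel(X_texture)
K_fused   = 0.5 * K_color + 0.5 * K_texture
svm_fused = SVC(kernel="precomputed").fit(K_fused[train][:, train], y[train])
scores_kernel = svm_fused.decision_function(K_fused[test][:, train])

# Classifier-level fusion: train one SVM per feature and average their scores.
svm_color   = SVC(kernel="precomputed").fit(K_color[train][:, train], y[train])
svm_texture = SVC(kernel="precomputed").fit(K_texture[train][:, train], y[train])
scores_clf = 0.5 * svm_color.decision_function(K_color[test][:, train]) \
           + 0.5 * svm_texture.decision_function(K_texture[test][:, train])

print("kernel-level fusion accuracy    :", ((scores_kernel > 0) == y[test]).mean())
print("classifier-level fusion accuracy:", ((scores_clf > 0) == y[test]).mean())
```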

12.
13.
On the social Web, the amount of video content either originated from wireless devices or previously received from media servers has increased enormously in recent years. The astounding growth of Web videos has stimulated researchers to propose new strategies to organize them into their respective categories. Because of the complex ontology and large variation in the content and quality of Web videos, it is difficult to obtain sufficient, precisely labeled training data, which hinders automatic video classification. In this paper, we propose a novel content- and context-based Web video classification framework that draws on external support through category discriminative terms (CDTs) and a semantic relatedness measure (SRM). A three-step framework is proposed. First, content-based video classification is performed, where high-level concept detectors are used in two ways: category classifiers induced from VIREO-374 detectors are trained to classify Web videos, and the high-confidence concept detectors for each video are then mapped to CDTs through an SRM-assisted semantic content fusion function to further boost the category classifiers, which provides a more robust measure for Web video classification. Second, context-based video classification is performed, where contextual information is likewise used in two ways: cosine similarity and then semantic similarity are measured between the text features of each video and the CDTs through a vector space model (VSM)-assisted and an SRM-assisted semantic context fusion function, respectively. Finally, the classification results from content and context are fused to compensate for each other's shortcomings, which enhances classification performance. Experiments on a large-scale video dataset validate the effectiveness of the proposed solution.
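A minimal sketch of the context branch: the text attached to a video is compared against hypothetical category discriminative terms with TF-IDF cosine similarity, and the result is fused with stand-in content scores. The CDT lists, content scores, and fusion weight are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical category discriminative terms (CDTs) and one Web video's textual context.
cdts = {
    "sports": "football goal match player league score team",
    "music":  "song concert band guitar album singer live",
    "news":   "report breaking anchor politics election interview",
}
video_text = "amazing last minute goal in the league match, full highlights of the game"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(cdts.values()) + [video_text])
cdt_vectors, video_vector = matrix[:-1], matrix[-1]

context_scores = cosine_similarity(video_vector, cdt_vectors)[0]
content_scores = [0.70, 0.10, 0.20]  # stand-in for concept-detector-based content scores

alpha = 0.5  # assumed fusion weight between content and context evidence
for cat, ctx, cnt in zip(cdts, context_scores, content_scores):
    print(f"{cat:7s} context={ctx:.3f} content={cnt:.3f} fused={alpha*cnt + (1-alpha)*ctx:.3f}")
```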

14.
The detection of near-duplicate video clips (NDVCs) is an area of active research and intense development. Most NDVC detection methods represent video clips with a unique set of low-level visual features, typically describing color or texture information. However, low-level visual features are sensitive to transformations of the video content. Given the observation that transformations tend to preserve the semantic information conveyed by the video content, we propose a novel approach for identifying NDVCs that makes use of both low-level visual features (that is, MPEG-7 visual features) and high-level semantic features (that is, 32 semantic concepts detected using trained classifiers). Experimental results obtained on the publicly available MUSCLE-VCD-2007 and TRECVID 2008 video sets show that bimodal fusion of visual and semantic features enables robust NDVC detection. In particular, the proposed method identifies NDVCs with a low missed detection rate (3% on average) and a low false alarm rate (2% on average). In addition, the combined use of visual and semantic features outperforms the separate use of either in terms of NDVC detection effectiveness. Further, we demonstrate that the effectiveness of the proposed method is on par with or better than that of three state-of-the-art NDVC detection methods based on temporal ordinal measurement, features computed using the Scale-Invariant Feature Transform (SIFT), or bag-of-visual-words (BoVW). We also show that the influence of semantic concept detection effectiveness on NDVC detection effectiveness is limited, as long as the mean average precision (MAP) of the semantic concept detectors used is higher than 0.3. Finally, we illustrate that the computational complexity of our NDVC detection method is competitive with that of the three aforementioned methods.
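A toy sketch of the bimodal fusion: one similarity is computed from a low-level visual feature vector (standing in for the MPEG-7 descriptors) and another from a vector of semantic concept scores (standing in for the 32 trained detectors), and the two are combined with an assumed weight. The feature dimensions, perturbation model, and weight are illustrative only.

```python
import numpy as np

def ndvc_similarity(visual_a, visual_b, concepts_a, concepts_b, w_visual=0.5):
    """Fused similarity between two clips from a visual feature vector and a vector of
    semantic concept scores; the weighting is an illustrative assumption."""
    visual_sim = 1.0 / (1.0 + np.linalg.norm(np.asarray(visual_a) - np.asarray(visual_b)))
    semantic_sim = float(np.dot(concepts_a, concepts_b) /
                         (np.linalg.norm(concepts_a) * np.linalg.norm(concepts_b)))
    return w_visual * visual_sim + (1 - w_visual) * semantic_sim

rng = np.random.default_rng(1)
original = {"visual": rng.random(12), "concepts": rng.random(32)}
# A transformed copy: semantic content is largely preserved, low-level features less so.
near_dup = {"visual": original["visual"] + rng.normal(0, 0.1, 12),
            "concepts": np.clip(original["concepts"] + rng.normal(0, 0.05, 32), 0, 1)}
unrelated = {"visual": rng.random(12), "concepts": rng.random(32)}

for name, clip in [("near-duplicate", near_dup), ("unrelated", unrelated)]:
    s = ndvc_similarity(original["visual"], clip["visual"],
                        original["concepts"], clip["concepts"])
    print(f"{name:14s} fused similarity = {s:.3f}")
```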

15.
This article presents a framework for automatic semantic annotation of video streams with an ontology that includes concepts expressed using linguistic terms and visual data.

16.
Night video enhancement techniques are widely used for identifying suspicious activities captured by night visual surveillance systems. However, artificial light sources present in the surroundings deteriorate the visual quality of video captured at night. This non-uniform illumination reduces the object identification and tracking capability of a real-time visual security system, so a uniform enhancement technique is insufficient for handling such uneven illumination. In this paper, we propose a novel night video enhancement scheme based on a hierarchical self-organizing network. The proposed scheme automatically groups and enhances the neighboring pixels of dark and light regions in each frame. In this scheme, two-level self-organizing neural networks are hierarchically arranged to group similar pixels present in the night video frame. We applied no-reference performance evaluation metrics to measure the objective quality of the video. The experimental results show that our approach considerably enhances the visual perception of video captured at night under varied illumination conditions.
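A much-simplified stand-in for the per-region enhancement idea: pixels of a frame are split into a dark group and a bright group (the paper uses a hierarchical two-level self-organizing network for this grouping) and only the dark group is brightened with a gamma curve. The grouping rule and gamma values are assumptions for illustration.

```python
import numpy as np

def enhance_night_frame(frame, gamma_dark=0.5, gamma_bright=1.0, split=None):
    """Brighten the dark pixel group of a frame with a gamma curve while leaving the
    already-lit group nearly untouched; thresholding and gammas are illustrative."""
    f = frame.astype(np.float64) / 255.0
    if split is None:
        split = f.mean()                       # crude stand-in for the learned grouping
    enhanced = np.where(f < split, f ** gamma_dark, f ** gamma_bright)
    return (enhanced * 255).astype(np.uint8)

# Synthetic night frame: dark background with one artificially lit region.
frame = np.full((120, 160), 30, dtype=np.uint8)
frame[40:80, 60:110] = 200                     # bright street-lamp area
out = enhance_night_frame(frame)
print("dark-area mean before/after  :", frame[:40].mean(), out[:40].mean())
print("bright-area mean before/after:", frame[40:80, 60:110].mean(), out[40:80, 60:110].mean())
```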

17.
Semantic video analysis is a key issue in digital video applications, including video retrieval, annotation, and management. Most existing work on semantic video analysis focuses on event detection for specific video genres, while genre classification is treated as a separate, independent issue. In this paper, we present a semantic framework that jointly performs weakly supervised video genre classification and event analysis using probabilistic models for MPEG video streams. Several computable semantic features that accurately reflect event attributes are derived. Based on an intensive analysis of the connection between video genres and the contextual relationships among events, as well as the statistical characteristics of the dominant event, an analysis algorithm based on a hidden Markov model (HMM) and a naive Bayesian classifier (NBC) is proposed for video genre classification. A Gaussian mixture model (GMM) is built to detect the contained events using the same semantic features, and an event adjustment strategy is proposed based on an analysis of the GMM structure and the predefinition of video events. Subsequently, a special event is recognized from the detected events by another HMM. Simulation experiments on video genre classification and event analysis with a large number of video datasets demonstrate the promising performance of the proposed framework for semantic video analysis.
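A small sketch of two of the building blocks on synthetic shot features: a naive Bayesian classifier for genre classification and a Gaussian mixture whose components stand in for event types. The feature definitions, genres, and the omission of the HMM stages are simplifications; this is not the paper's algorithm.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy "computable semantic features" per shot (e.g. motion intensity, shot length,
# face ratio) for two hypothetical genres.
news_shots   = rng.normal(loc=[0.2, 6.0, 0.6], scale=0.1, size=(50, 3))
sports_shots = rng.normal(loc=[0.8, 3.0, 0.1], scale=0.1, size=(50, 3))
X = np.vstack([news_shots, sports_shots])
y = np.array([0] * 50 + [1] * 50)              # 0 = news, 1 = sports

# Naive Bayesian classifier over the semantic features (the paper combines this
# with an HMM over the event sequence, which is omitted here).
nbc = GaussianNB().fit(X, y)
print("genre of an unseen shot:", nbc.predict([[0.75, 3.2, 0.15]]))

# Gaussian mixture over the same features as a simple event model:
# each mixture component stands in for one event type.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print("event component of that shot:", gmm.predict([[0.75, 3.2, 0.15]]))
```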

18.
A Petri-Net-Based Method for Event Extraction from Surveillance Video   Cited by: 1 (self-citations: 0, external citations: 1)
代科学  李国辉 《电视技术》2006,(1):83-85,89
By extending the definition of Petri nets, this paper proposes a method for describing the spatio-temporal and logical relationships of surveillance video events. A semantic-level query event is mapped onto a Petri net, and the Petri net reasoning process is combined with computer vision algorithms' interpretation of moving-object behavior in the scene, so that events concerning moving-object behavior can be extracted and the corresponding surveillance video segments located.
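A minimal Petri-net sketch of the event-mapping idea: a semantic query event becomes places and transitions, observations produced by the vision algorithms fire the transitions, and the event is considered detected when a designated place holds a token. The net, observation labels, and query below are invented for illustration and do not reflect the paper's extended Petri-net definition.

```python
class PetriNet:
    def __init__(self, places, transitions):
        self.marking = dict(places)                   # place -> token count
        self.transitions = transitions                # name -> (input places, output places)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if all(self.marking[p] > 0 for p in inputs):  # enabled only if all inputs hold tokens
            for p in inputs:
                self.marking[p] -= 1
            for p in outputs:
                self.marking[p] += 1
            return True
        return False

# Hypothetical query event: "a person enters the zone and then loiters".
net = PetriNet(
    places={"outside": 1, "in_zone": 0, "loitering": 0},
    transitions={
        "enter_zone": (["outside"], ["in_zone"]),
        "stay_long":  (["in_zone"], ["loitering"]),
    },
)

# Stream of observations assumed to come from the low-level vision algorithms.
for observation in ["enter_zone", "stay_long"]:
    net.fire(observation)

if net.marking["loitering"] > 0:
    print("query event detected: locate the corresponding surveillance clip")
```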

19.
A content-based image retrieval mechanism to support complex similarity queries is presented. The image content is defined by three kinds of features: quantifiable features describing the visual information, nonquantifiable features describing the semantic information, and keywords describing more abstract semantic information. In correspondence with these feature sets, we construct three types of indexes: visual indexes, semantic indexes, and keyword indexes. Index structures are elaborated to provide effective and efficient retrieval of images based on their contents. The underlying index structure used for all indexes is the HG-tree. In addition to the HG-tree, the signature file and hashing technique are also employed to index keywords and semantic features. The proposed indexing scheme combines and extends the HG-tree, the signature file, and the hashing scheme to support complex similarity queries. We also propose a new evaluation strategy to process the complex similarity queries. Experiments have been carried out on large image collections to demonstrate the effectiveness of the proposed retrieval mechanism.
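A tiny sketch of the keyword-index component: a superimposed-coding signature file, where each keyword sets a few hashed bits and an image's signature is the OR of its keywords' signatures. The signature width and hashing are illustrative; the HG-tree used for the visual and semantic indexes is not reproduced here.

```python
import hashlib

SIG_BITS = 64  # illustrative signature width

def word_signature(word):
    """Set a few bits derived from the word's hash (three bits per word, illustrative)."""
    digest = hashlib.md5(word.encode()).digest()
    sig = 0
    for b in digest[:3]:
        sig |= 1 << (b % SIG_BITS)
    return sig

def image_signature(keywords):
    """Superimpose (OR) the signatures of all keywords attached to one image."""
    sig = 0
    for w in keywords:
        sig |= word_signature(w)
    return sig

index = {
    "img_001": image_signature(["beach", "sunset", "people"]),
    "img_002": image_signature(["mountain", "snow"]),
}

query = word_signature("sunset")
# A signature file returns candidates whose signature covers the query bits;
# false positives are possible and would be removed by a verification step.
candidates = [img for img, sig in index.items() if sig & query == query]
print("candidate images for 'sunset':", candidates)
```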

20.
In view of the characteristics of CNPC's drilling well-history management systems and the problems currently facing the drilling domain, such as large data volumes, complex relationships, and distributed, heterogeneous, autonomous data sources, this paper proposes a new-generation method for integrating heterogeneous well-history data, "Ontology-based well-history heterogeneous data integration." From the perspective of semantic integration, the method eliminates data heterogeneity between sources, enabling efficient integration and sharing of well-history data across oilfields. The principles and implementation steps of the method are described in detail, a base model of the drilling-domain ontology is established, and an association mapping from the oilfields' relational well-history databases to the drilling-domain ontology is realized. Case analysis shows that the method is efficient and feasible for well-history data integration and produces good integration results.
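A toy sketch of the relational-to-ontology mapping step: rows of a hypothetical well-history table are rewritten as subject-predicate-object triples under an assumed drilling-domain vocabulary. The table layout, column names, and property names are invented for illustration.

```python
# Hypothetical relational well-history rows from one oilfield's database.
WELL_TABLE = [
    {"well_id": "W-1023", "field": "FieldA", "spud_date": "2015-03-14", "depth_m": 3120},
    {"well_id": "W-2041", "field": "FieldB", "spud_date": "2016-07-02", "depth_m": 5480},
]

# Assumed mapping from source columns to drilling-domain ontology properties.
COLUMN_TO_PROPERTY = {
    "field": "drill:locatedInField",
    "spud_date": "drill:spudDate",
    "depth_m": "drill:totalDepth",
}

def rows_to_triples(rows):
    """Rewrite relational rows as (subject, predicate, object) triples."""
    triples = []
    for row in rows:
        subject = f"drill:Well/{row['well_id']}"
        triples.append((subject, "rdf:type", "drill:Well"))
        for column, prop in COLUMN_TO_PROPERTY.items():
            triples.append((subject, prop, row[column]))
    return triples

for t in rows_to_triples(WELL_TABLE):
    print(t)
```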
