Similar Documents
 Found 20 similar documents; search time 671 ms
1.
More C, More Beautiful
《通信技术》2006,(12):70-71
Vitamin C, abundant in oranges, is a beauty element everyone loves. Likewise, Sony has dressed its all-new C-series VAIO notebooks in a set of colorful new outfits, radiating a youthful spirit of freedom throughout. [Editor's note]

2.
《今日电子》2012,(4):65-66
The ALT6181 multi-mode, multi-band power amplifier is specifically optimized to deliver excellent performance under LTE, WCDMA, and CDMA signal modulation, supporting bands 1, 5, 6, 18, 19, and 26.

3.
4.
《世界广播电视》2009,(10):63-63
NTT exhibited the HVD6100, the industry's first IRD to combine AVC/H.264 4:2:2 HDTV decoding with DVB-S2 modulation. On the decoding side, it supports the 4:2:2/4:2:0 formats of AVC/H.264 and MPEG-2 as well as multiple formats such as HDTV/SDTV, and includes functions such as HDTV/SDTV resolution conversion (down-conversion). On the modulation side, it supports high-performance satellite contribution schemes such as DVB-S2, DVB-DSNG, and DVB-S, and accommodates a wide range of multiplexing and modulation modes.

5.
Multi-target, multi-sensor tracking is a highly complex problem, and its key is data fusion. Single-sensor multi-target tracking mainly addresses data association; multi-sensor multi-target tracking sometimes also reduces to data association, but multi-sensor data association is far more complex, being a multi-dimensional problem. This paper mainly introduces network construction strategies for multi-target, multi-sensor tracking, and the data and track fusion schemes of hierarchical distributed sensor tracking.
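As a concrete illustration of the track fusion step mentioned above, the sketch below fuses two local track estimates of the same target by inverse-covariance (information) weighting. This is a generic textbook scheme, not the specific strategy of the cited paper; the state values and covariances are invented, and cross-covariances between the local trackers are ignored for simplicity.

```python
import numpy as np

def fuse_tracks(x1, P1, x2, P2):
    """Fuse two local track estimates (state x_i, covariance P_i)
    of the same target using inverse-covariance weighting."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)       # fused covariance (smaller than either)
    x = P @ (I1 @ x1 + I2 @ x2)      # fused state, weighted toward the
    return x, P                      # more accurate sensor

# Two sensors estimate the same 2-D position with different accuracies.
x1, P1 = np.array([10.0, 5.0]), np.diag([1.0, 1.0])
x2, P2 = np.array([10.4, 4.8]), np.diag([4.0, 4.0])
x, P = fuse_tracks(x1, P1, x2, P2)
```

The fused estimate lands closer to the more accurate sensor's report, and the fused covariance is tighter than either input, which is the basic payoff of track-to-track fusion.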

6.
This paper describes an important area of microcomputer applications: multibus and multi-microprocessor systems, covering their architecture, type definitions, main interconnection modules, information transfer, and bus software characteristics. It can serve as a reference for designers.

7.
王和明 《电子技术》1992,19(8):35-36
A timing controller drives actuators through a sequence of operations according to a predetermined program. It can be widely applied to certain processes in industrial automation, such as heating, cooling, feeding, and cool-down of reaction vessels in chemical plants, or plating, unloading, and rinsing in electroplating shops; it can also be used for automatic bell ringing in schools and workplaces and for timed switching of various household appliances. These functions can all be realized with different principles, circuits, and components. The timer described here uses a large-scale integrated EPROM for decoding and dynamic scan display, and CMOS RAM as the storage unit for the operation contents; the system is therefore simple in circuitry, easy to operate, and strongly immune to interference, and it is genuinely "multi-channel" and "multi-point". I. Functional features

8.
《电视技术》2007,31(11):45-45
It can digitally compress analog/digital video and audio signals simultaneously and interconnect with other equipment through DVB-standard ASI and SPI interfaces. The encoder adds a front-end circuit and a built-in time-base corrector, greatly relaxing the requirements on the signal source. It supports various standard video and audio signal interfaces, including analog component, S-Video, analog composite video, and mono or analog stereo audio.

9.
Ultra-wideband (UWB) signals allow radar designers to solve most of the important problems in radar target observation. Thanks to high time resolution and the frequency dependence of scattering centers over a wide bandwidth, UWB yields more information. Increasing the radar signal bandwidth provides more accurate range measurement, improves target recognition and tracking, and strengthens resistance to passive jamming and to narrowband electromagnetic interference, thereby improving radar performance. Meanwhile, in the communications field, multiple-input multiple-output (MIMO) antenna technology has made considerable progress, and these diversity systems have shown great potential for significantly improving communication system performance.

10.
Multi-scale, multi-feature bionic face recognition   (Cited by: 1; self-citations: 1, others: 0)
This paper applies a two-level Daubechies orthogonal wavelet decomposition to face images. PCA is first applied to the low-frequency subimage of the second-level wavelet transform, and a neighborhood classifier yields distance-based membership degrees; fuzzy analysis then extracts candidate samples. PCA is next applied to the low-frequency subimage of the first-level wavelet transform of the candidates, and a nearest-neighbor classifier produces the final recognition result. Experiments show that the wavelet preprocessing yields multi-scale, multi-feature representations, that the classification results are complementary to each other, and that classification performance is improved.
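The core pipeline above (wavelet low-pass, then PCA, then nearest-neighbor classification) can be sketched in a few lines. This is a loose, numpy-only illustration: a 2x2 block-averaging Haar low-pass stands in for the Daubechies decomposition, the "faces" are synthetic arrays, and the fuzzy candidate-selection stage is omitted.

```python
import numpy as np

def haar_lowpass(img):
    """Approximate low-frequency subimage: average each 2x2 block."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def pca_fit(X, k):
    """Return mean and top-k principal axes of row-vector data X."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def nearest_neighbor(train_feats, train_labels, feat):
    d = np.linalg.norm(train_feats - feat, axis=1)
    return train_labels[np.argmin(d)]

rng = np.random.default_rng(0)
faces = rng.normal(size=(6, 8, 8)) + np.arange(6)[:, None, None]  # toy "faces"
labels = np.array([0, 0, 1, 1, 2, 2])

# Second-level low-frequency subimages -> PCA features
low2 = np.array([haar_lowpass(haar_lowpass(f)) for f in faces])
X = low2.reshape(len(faces), -1)
mu, axes = pca_fit(X, k=3)
feats = (X - mu) @ axes.T

pred = nearest_neighbor(feats, labels, feats[4])
```

In the paper's actual scheme this classification runs twice, at the two wavelet scales, with fuzzy membership degrees bridging the stages.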

11.
The detection of near-duplicate video clips (NDVCs) is an area of current research interest and intense development. Most NDVC detection methods represent video clips with a unique set of low-level visual features, typically describing color or texture information. However, low-level visual features are sensitive to transformations of the video content. Given the observation that transformations tend to preserve the semantic information conveyed by the video content, we propose a novel approach for identifying NDVCs, making use of both low-level visual features (that is, MPEG-7 visual features) and high-level semantic features (that is, 32 semantic concepts detected using trained classifiers). Experimental results obtained for the publicly available MUSCLE-VCD-2007 and TRECVID 2008 video sets show that bimodal fusion of visual and semantic features facilitates robust NDVC detection. In particular, the proposed method is able to identify NDVCs with a low missed detection rate (3% on average) and a low false alarm rate (2% on average). In addition, the combined use of visual and semantic features outperforms the separate use of either of them in terms of NDVC detection effectiveness. Further, we demonstrate that the effectiveness of the proposed method is on par with or better than the effectiveness of three state-of-the-art NDVC detection methods making use of temporal ordinal measurement, features computed using the Scale-Invariant Feature Transform (SIFT), or bag-of-visual-words (BoVW). We also show that the influence of the effectiveness of semantic concept detection on the effectiveness of NDVC detection is limited, as long as the mean average precision (MAP) of the semantic concept detectors used is higher than 0.3. Finally, we illustrate that the computational complexity of our NDVC detection method is competitive with the computational complexity of the three aforementioned NDVC detection methods.

12.
Content-based image retrieval (CBIR) is a valuable computer vision technique which is increasingly being applied in the medical community for diagnosis support. However, traditional CBIR systems only deliver visual outputs, i.e., images having a similar appearance to the query, which is not directly interpretable by the physicians. Our objective is to provide a system for endomicroscopy video retrieval which delivers both visual and semantic outputs that are consistent with each other. In a previous study, we developed an adapted bag-of-visual-words method for endomicroscopy retrieval, called "Dense-Sift," that computes a visual signature for each video. In this paper, we present a novel approach to complement visual similarity learning with semantic knowledge extraction, in the field of in vivo endomicroscopy. We first leverage a semantic ground truth based on eight binary concepts, in order to transform these visual signatures into semantic signatures that reflect how much the presence of each semantic concept is expressed by the visual words describing the videos. Using cross-validation, we demonstrate that, in terms of semantic detection, our intuitive Fisher-based method transforming visual-word histograms into semantic estimations outperforms support vector machine (SVM) methods with statistical significance. In a second step, we propose to improve retrieval relevance by learning an adjusted similarity distance from a perceived similarity ground truth. As a result, our distance learning method statistically improves the correlation with the perceived similarity. We also demonstrate that, in terms of perceived similarity, the recall performance of the semantic signatures is close to that of the visual signatures and significantly better than those of several state-of-the-art CBIR methods. The semantic signatures are thus able to communicate high-level medical knowledge while being consistent with the low-level visual signatures and much shorter than them.
In the resulting retrieval system, we use visual signatures for perceived similarity learning and retrieval, and semantic signatures to output additional information, expressed in the endoscopist's own language, which provides a relevant semantic translation of the visual retrieval outputs.

13.
This paper presents a new learning algorithm for audiovisual fusion and demonstrates its application to video classification for a film database. The proposed system utilizes perceptual features for content characterization of movie clips. These features are extracted from different modalities and fused through a machine learning process. More specifically, to capture spatio-temporal information, adaptive video indexing is adopted to extract visual features, and a statistical model based on Laplacian mixtures is used to extract audio features. These features are combined at the late-fusion stage and input to a support vector machine (SVM) to learn semantic concepts from a given video database. Experimental results show that the proposed system, implementing the SVM-based fusion technique, achieves high classification accuracy when applied to a large database containing Hollywood movies.
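The late-fusion step described above amounts to concatenating per-modality feature vectors and handing the result to an SVM. The sketch below illustrates that wiring with scikit-learn; the visual and audio features are random stand-ins for the adaptive-video-index and Laplacian-mixture features of the paper, and the toy labels are invented.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 40
visual = rng.normal(size=(n, 8))   # stand-in per-clip visual features
audio = rng.normal(size=(n, 4))    # stand-in per-clip audio features
# Toy semantic concept correlated with both modalities.
labels = (visual[:, 0] + audio[:, 0] > 0).astype(int)

# Late fusion: concatenate modality features, then classify with an SVM.
fused = np.hstack([visual, audio])
clf = SVC(kernel="rbf").fit(fused, labels)
acc = clf.score(fused, labels)
```

An early-fusion variant would instead combine raw or low-level signals before feature extraction; the paper's choice of late fusion keeps each modality's feature extractor independent.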

14.
In this paper we describe a multi-strategy approach to improving semantic extraction from news video. Experiments show the value of careful parameter tuning, exploiting multiple feature sets and multilingual linguistic resources, applying text-retrieval approaches to image features, and establishing synergy between multiple concepts through undirected graphical models. We present a discriminative learning framework called Multi-concept Discriminative Random Field (MDRF) for building probabilistic models of video semantic concept detectors by incorporating related concepts as well as the low-level observations. The model exploits the power of discriminative graphical models to simultaneously capture the associations of concepts with observed data and the interactions between related concepts. Compared with previous methods, this model not only captures the co-occurrence between concepts but also incorporates the raw data observations into a unified framework. We also describe an approximate parameter estimation algorithm and present results obtained from the TRECVID 2006 data. No single approach, however, provides a consistently better result for all concept detection tasks, which suggests that extracting video semantics should exploit multiple resources and techniques rather than naively relying on a single approach.

15.
Video semantic detection is a research hotspot in the field of human-computer interaction. In sparse representation of video features, features from the same video category may fail to obtain similar codes. To address this, Locality-Sensitive Discriminant Sparse Representation (LSDSR) is developed so that video samples belonging to the same category are encoded as similar sparse codes, giving them better category discrimination. In LSDSR, a discriminative loss function based on the sparse coefficients is imposed on locality-sensitive sparse representation, which makes the dictionary optimized for sparse representation discriminative. Applied to video features, LSDSR strengthens semantic discrimination when optimizing the dictionary and builds a more discriminant sparse model. Moreover, to further improve the accuracy of video semantic detection after sparse representation, a weighted K-Nearest Neighbor (KNN) classification method, with a loss function integrating reconstruction error and discrimination, is adopted to detect video semantic concepts. The proposed methods are evaluated on related video databases in comparison with existing sparse representation methods. The experimental results show that the proposed methods significantly enhance the discriminative power of video features and consequently improve the accuracy of video semantic concept detection.
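The detection stage above (sparse codes fed to a distance-weighted KNN) can be sketched as follows. This is a crude stand-in, not LSDSR itself: the "sparse" codes come from thresholded least squares against a random dictionary rather than from the locality-sensitive discriminant optimization, and all data are synthetic.

```python
import numpy as np

def sparse_code(D, x, thresh=0.1):
    """Least-squares code of x against dictionary D; small entries zeroed
    (a crude proxy for a true sparse-coding solver)."""
    a, *_ = np.linalg.lstsq(D, x, rcond=None)
    a[np.abs(a) < thresh] = 0.0
    return a

def weighted_knn(codes, labels, query, k=3):
    """Distance-weighted KNN vote over sparse codes."""
    d = np.linalg.norm(codes - query, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)            # inverse-distance weights
    votes = {}
    for i, wi in zip(idx, w):
        votes[labels[i]] = votes.get(labels[i], 0.0) + wi
    return max(votes, key=votes.get)

rng = np.random.default_rng(2)
D = rng.normal(size=(16, 8))             # 16-dim features, 8 atoms
feats = rng.normal(size=(10, 16)) + np.repeat([0, 4], 5)[:, None]
labels = np.repeat([0, 1], 5)            # two toy semantic concepts
codes = np.array([sparse_code(D, f) for f in feats])
pred = weighted_knn(codes, labels, codes[7])
```

The paper's contribution is in how the dictionary and codes are learned so that same-category codes cluster; the KNN vote at the end is the straightforward part sketched here.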

16.
Exploring context information for visual recognition has recently received significant research attention. This paper proposes a novel and highly efficient approach, named semantic diffusion, to utilize semantic context for large-scale image and video annotation. Starting from the initial annotation of a large number of semantic concepts (categories), obtained by either machine learning or manual tagging, the proposed approach refines the results using a graph diffusion technique, which recovers the consistency and smoothness of the annotations over a semantic graph. Different from existing graph-based learning methods that model relations among data samples, the semantic graph captures context by treating the concepts as nodes and the concept affinities as the weights of edges. In particular, our approach is capable of simultaneously improving annotation accuracy and adapting the concept affinities to new test data. The adaptation provides a means to handle domain change between training and test data, which often occurs in practice. Extensive experiments are conducted to improve concept annotation results using Flickr images and TV program videos. Results show consistent and significant performance gain (10+% on both image and video data sets). Source code for the proposed algorithms is available online.
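The diffusion idea above can be sketched compactly: initial detector scores are repeatedly blended with the scores of related concepts over the semantic graph. This is a generic graph-diffusion sketch, not the paper's exact update rule; the concept names, affinity matrix, and scores are invented for illustration.

```python
import numpy as np

def semantic_diffusion(scores, W, alpha=0.3, iters=20):
    """Smooth detection scores over a concept graph: each concept's score
    is blended with a row-normalized average of its neighbors' scores."""
    P = W / W.sum(axis=1, keepdims=True)   # row-normalized affinities
    s = scores.copy()
    for _ in range(iters):
        s = (1 - alpha) * scores + alpha * (P @ s)
    return s

# Three concepts: "car" and "road" are strongly related, "cat" is not.
W = np.array([[0.0, 0.9, 0.1],
              [0.9, 0.0, 0.1],
              [0.1, 0.1, 0.0]]) + np.eye(3)
scores = np.array([0.9, 0.2, 0.1])         # raw detector outputs
refined = semantic_diffusion(scores, W)
```

With a confident "car" detection, the weak "road" score gets pulled upward through the strong car-road edge, which is exactly the consistency effect the paper exploits; the paper additionally adapts the edge weights themselves to the test data.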

17.
With the explosion of video data, content-based video retrieval has become an important research direction in today's multimedia applications. Most existing video retrieval techniques are based on low-level features, which differ greatly from high-level semantic concepts, severely limiting the practicality of content-based video retrieval systems. Because of this semantic gap between low-level features and high-level concepts, extracting the semantic concepts of human thinking from video content is becoming the most challenging research topic in video content retrieval. This paper introduces the background of semantic video retrieval and the latest research activity at home and abroad, analyzes the strengths and weaknesses of existing methods, and surveys the key techniques.

18.
Based on a hierarchical analysis of the information contained in surveillance video data, an ontology-based hierarchical description scheme for surveillance video is proposed. In this scheme, the data obtained from surveillance video capture devices are divided into video metadata and video data, which are described separately. According to the proposed surveillance-video analysis ontology, the video data are further divided into three levels: visual feature information, object-level semantic information, and high-level semantic information. Finally, drawing on domain knowledge and using the Extensible Markup Language (XML), the description method of each level is illustrated with examples.
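To make the three-level description concrete, the sketch below builds a small XML document with one element per level using Python's standard library. The tag names, attributes, and values are invented for illustration; the paper's actual ontology and schema are not reproduced here.

```python
import xml.etree.ElementTree as ET

# Hypothetical three-level description of one surveillance clip:
# visual features -> object semantics -> high-level semantics.
clip = ET.Element("SurveillanceClip", id="cam01-0001")

visual = ET.SubElement(clip, "VisualFeatures")          # level 1
ET.SubElement(visual, "DominantColor").text = "gray"

objects = ET.SubElement(clip, "ObjectSemantics")        # level 2
ET.SubElement(objects, "Object", type="person", action="walking")

high = ET.SubElement(clip, "HighLevelSemantics")        # level 3
ET.SubElement(high, "Event").text = "person enters restricted area"

xml_text = ET.tostring(clip, encoding="unicode")
```

Separating the levels this way lets a query engine match either low-level attributes (color) or high-level events without parsing the other layers.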

19.
Research on image retrieval based on visual perception   (Cited by: 2; self-citations: 0, others: 2)
张菁  沈兰荪 《电子学报》2008,36(3):494-499
A prominent problem in content-based image retrieval is the huge gap between low-level image features and high-level semantics. Since relevance feedback and region-of-interest detection are subjective and time-consuming ways of bridging this gap, the paper proposes visual perception information as a new feature that objectively reflects high-level image semantics; image retrieval based on visual perception can effectively narrow the semantic gap. Building on a summary of the research progress and implementation methods of visual perception, research directions for perception-based image retrieval are given in four areas: region-of-interest detection, image segmentation, relevance feedback, and personalized retrieval.

20.
The analysis of moving objects in videos, especially the recognition of human motions and gestures, is attracting increasing attention in the computer vision field. However, most existing video analysis methods do not take the video's semantic information into account. The topological information of the video image plays an important role in describing the association relationships of the image content, which helps improve the discriminability of the video feature expression. Based on these considerations, we propose a video semantic feature learning method that integrates image topological sparse coding with a dynamic time warping (DTW) algorithm to improve gesture recognition in videos. The method divides video feature learning into two phases: semi-supervised learning of video image features and supervised optimization of video sequence features. A distance-weighted dynamic time warping algorithm combined with a K-nearest-neighbor classifier is then used to recognize gestures. We conduct comparative experiments on a table tennis video dataset. The experimental results show that the proposed method yields more discriminative video feature expressions and can effectively improve the recognition rate of gestures in sports video.
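The sequence-matching step above can be sketched with classic (unweighted) DTW plus a 1-nearest-neighbor decision. This is a minimal illustration, not the paper's distance-weighted variant: each video is reduced to a 1-D sequence of synthetic frame features, and the gesture labels are invented.

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Per-frame feature sequences for two reference gestures.
train = [np.array([0, 1, 2, 1, 0]), np.array([0, 2, 4, 2, 0])]
train_labels = ["wave", "serve"]
query = np.array([0, 1, 2, 2, 1, 0])   # a slower performance of "wave"

dists = [dtw(query, t) for t in train]
pred = train_labels[int(np.argmin(dists))]
```

DTW's elastic alignment is what makes the query match "wave" despite its extra frame; the paper additionally weights the per-frame costs by distance and votes over K neighbors rather than one.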
