Similar Documents
 Found 20 similar documents (search time: 140 ms)
1.
For large sets of multi-view images acquired under different imaging conditions, this paper studies a method that uses local invariant features and their spatial-layout constraints to build a class-specific hypergraph model for non-cooperative target recognition. The method first represents each image as an attributed graph built from selected robust SIFT features, then proposes an affinity-propagation clustering principle for attributed graphs. Clustering with this principle under a given F-measure constraint, together with an entropy-minimization optimality condition, iteratively yields the optimal clustering of the attributed-graph samples of a given target; each cluster is then reduced to a class-specific hypergraph model whose nodes are the non-redundant attributed graphs. Experiments on a large number of image samples verify the scalability and recognition performance of the model.

2.
Clustering of symbolized time series is an active research topic, and a key issue is measuring the similarity of symbolized time series. Addressing the shortcomings of the traditional Euclidean-distance measure, this paper proposes an ELCS similarity measure built on the LCS (longest common subsequence) measure, which removes the LCS measure's dependence on the choice of a linear function. Experiments on two classes of data sets show that, compared with other commonly used measures, the proposed measure yields better clustering results.
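The abstract above does not give the exact ELCS formulation, so the following is only a minimal sketch of the plain LCS similarity it builds on; the normalization by the longer sequence length is an assumption, not the paper's definition.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence, by dynamic programming."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def lcs_similarity(a, b):
    """Normalized LCS similarity in [0, 1] for two symbol sequences."""
    if not a or not b:
        return 0.0
    return lcs_length(a, b) / max(len(a), len(b))
```

Unlike Euclidean distance, this measure needs no point-wise alignment, which is why it tolerates stretching and local noise in symbolized series.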

3.
Remote-sensing image data mining is a research area with broad application prospects. For data-mining applications such as image retrieval, image classification, image clustering, spatial association-rule mining, and image change detection, a similarity measure is the foundation and prerequisite. This paper adopts an image-space partitioning strategy, on which three kinds of low-level visual features (color, texture, and shape) are computed to describe each image; a grid partition of the multi-dimensional feature space reduces the dimensionality and establishes the image similarity measure. Experimental results show that the method exhibits a degree of geometric and illumination invariance.

4.
A new image-retrieval method based on salient interest points is proposed, with three main steps: salient interest-point detection, feature description based on the salient interest points, and similarity measurement. An adaptive filter is first applied to the image, after which the salient interest points are extracted. Guided by these points, a color-distribution entropy is designed that exploits both the local features of the salient interest points and their spatial distribution; the color-distribution entropy between two images then measures their similarity. The retrieval algorithm is robust to image rotation and translation and overcomes the traditional histogram's lack of spatial information. Experimental results show that the method is effective for image retrieval.
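The abstract does not specify how its color-distribution entropy incorporates spatial structure, so the sketch below shows only the basic ingredient: the Shannon entropy of the color labels observed at interest points. The function name and the use of discrete color labels are assumptions for illustration.

```python
import math
from collections import Counter

def color_distribution_entropy(colors):
    """Shannon entropy (bits) of the color labels sampled at interest points.

    `colors` is any iterable of hashable color labels (e.g. quantized
    color-bin indices at each detected interest point).
    """
    counts = Counter(colors)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

A uniform spread over many colors gives high entropy; a single dominant color gives entropy near zero, so the value summarizes how color mass is distributed across the salient points.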

5.
张桂杰  张健沛  杨静  辛宇 《电子学报》2015,43(7):1329-1335
Community structure is one of the most common and important topological properties of social networks. An overlapping-community detection algorithm based on link-similarity clustering is proposed. The algorithm first defines a similarity measure between links according to the degree distribution of adjacent links; then, taking the link-similarity matrix as input and the optimal partition of link communities as the objective, it builds a local link-similarity clustering algorithm that detects overlapping communities effectively; the link communities are further optimized to resolve possible over-overlapping and isolated communities. Finally, experiments on real and synthetic networks verify the efficiency of the algorithm.

6.
《信息技术》2019,(8):33-36
To extract tumor regions from ultrasound tumor images accurately, an adaptive spectral clustering method for ultrasound images is proposed, based on the split-and-merge idea combined with self-tuning spectral clustering. In the split stage, the SLIC algorithm segments the image into superpixels, and suitable texture features are selected according to the texture differences between tumor and background regions; in the merge stage, self-tuning spectral clustering automatically determines the number of clusters and clusters out the tumor region, and prior knowledge is used to extract the tumor region from the segmentation result. Compared with manually segmented tumor regions, the extracted regions reach an accuracy of 93.41%, which is fairly accurate.

7.
Density-Sensitive Spectral Clustering   Total citations: 13; self-citations: 2; other citations: 13
王玲  薄列峰  焦李成 《电子学报》2007,35(8):1577-1581
Spectral clustering is a recently developed clustering method with highly competitive performance, and its success depends largely on the choice of similarity measure. By analyzing this property together with the clustering characteristics of data, this paper proposes a data-dependent similarity measure: a density-sensitive similarity measure that effectively describes the actual cluster distribution of the data. Introducing it into spectral clustering yields a density-sensitive spectral clustering algorithm. Compared with the original spectral clustering algorithms, the new algorithm not only handles multi-scale clustering problems but is also relatively insensitive to parameter selection. An effectiveness analysis and experiments verify the effectiveness and feasibility of the proposed algorithm.
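The abstract does not state the measure's formula, so the following is a hedged sketch of one common density-sensitive construction: stretch each Euclidean edge by an exponential factor and take shortest-path distances, so that points connected through dense regions end up more similar than points separated by sparse gaps. The specific edge length `rho**d - 1` and the `1/(1+D)` conversion are assumptions, not necessarily the paper's exact definitions.

```python
import numpy as np

def density_sensitive_similarity(X, rho=2.0):
    """Density-sensitive similarity matrix for data X (n x d).

    Edge lengths l(a, b) = rho**d(a, b) - 1 penalize long jumps; shortest
    paths over the complete graph (Floyd-Warshall) then favor routes that
    pass through dense regions; similarity is 1 / (1 + path distance).
    """
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    L = rho ** d - 1.0                      # density-adjusted edge lengths
    D = L.copy()
    for k in range(len(X)):                 # Floyd-Warshall shortest paths
        D = np.minimum(D, D[:, k:k + 1] + D[k:k + 1, :])
    return 1.0 / (1.0 + D)
```

On a chain of points, hopping through intermediate neighbors is cheaper than one long jump, which is exactly the multi-scale behavior the abstract claims.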

8.
A New Method for Multi-Feature Data Association of Emitters   Total citations: 1; self-citations: 1; other citations: 0
万洪容  陈怀新 《电讯技术》2004,44(2):137-140
A new method for multi-feature data association of emitters is proposed. Similarity measures for data association are discussed, a new statistical similarity measure over multiple feature parameters is given, and hierarchical clustering is used for data association; the association results are then checked by statistical tests. Computer simulation results confirm the effectiveness of the method.

9.
To achieve accurate target recognition, starting from the characteristics of infrared images, a type-2 entropy-based fuzzy clustering algorithm with an L1-space metric is proposed for infrared image segmentation. The algorithm first measures, in L1 space, the distance between each sample point and the maximum and minimum values of each class, replacing the sample-to-cluster-center distance used in traditional clustering; it then obtains upper and lower fuzzy membership functions from the entropy-based fuzzy clustering algorithm and fuses them into a type-2 membership function, for which a weighted type-reduction algorithm is given. Segmentation of real infrared images shows that the algorithm segments infrared images accurately, adapts well, is robust, and achieves satisfactory results against complex backgrounds.

10.
A Blind Image Watermarking Algorithm in the Dual-Tree Complex Wavelet Domain Based on Critical Entropy   Total citations: 2; self-citations: 2; other citations: 0
A blind digital image watermarking algorithm based on critical entropy is designed. First, feature points are extracted from the image using the scale-invariant feature transform (SIFT); second, local invariant circular regions centered on the feature points are constructed and normalized; then, image regions whose entropy exceeds the image's average entropy are selected as critical-entropy regions; finally, the watermark is embedded into the critical-entropy regions by combining a quantization-modulation strategy with the dual-tree complex wavelet transform (DTCWT). Experimental analysis...

11.
Image segmentation using association rule features   Total citations: 4; self-citations: 0; other citations: 4
A new type of texture feature based on association rules is described. Association rules have been used in applications such as market basket analysis to capture relationships present among items in large data sets. It is shown that association rules can be adapted to capture frequently occurring local structures in images. The frequency of occurrence of these structures can be used to characterize texture. Methods for segmentation of textured images based on association rule features are described. Simulation results using images consisting of man made and natural textures show that association rule features perform well compared to other widely used texture features. Association rule features are used to detect cumulus cloud fields in GOES satellite images and are found to achieve higher accuracy than other statistical texture features for this problem.

12.
This paper presents a learning-based unified image retrieval framework to represent images in local visual and semantic concept-based feature spaces. In this framework, a visual concept vocabulary (codebook) is automatically constructed by utilizing self-organizing map (SOM) and statistical models are built for local semantic concepts using probabilistic multi-class support vector machine (SVM). Based on these constructions, the images are represented in correlation and spatial relationship-enhanced concept feature spaces by exploiting the topology preserving local neighborhood structure of the codebook, local concept correlation statistics, and spatial relationships in individual encoded images. Finally, the features are unified by a dynamically weighted linear combination of similarity matching scheme based on the relevance feedback information. The feature weights are calculated by considering both the precision and the rank order information of the top retrieved relevant images of each representation, which adapts itself to individual searches to produce effective results. The experimental results on a photographic database of natural scenes and a bio-medical database of different imaging modalities and body parts demonstrate the effectiveness of the proposed framework.

13.
Vector quantization with complexity costs   Total citations: 2; self-citations: 0; other citations: 2
Vector quantization is a data compression method by which a set of data points is encoded by a reduced set of reference vectors: the codebook. A vector quantization strategy is discussed that jointly optimizes distortion errors and the codebook complexity, thereby determining the size of the codebook. A maximum entropy estimation of the cost function yields an optimal number of reference vectors, their positions, and their assignment probabilities. The dependence of the codebook density on the data density for different complexity functions is investigated in the limit of asymptotic quantization levels. How different complexity measures influence the efficiency of vector quantizers is studied for the task of image compression. The wavelet coefficients of gray-level images are quantized, and the reconstruction error is measured. The approach establishes a unifying framework for different quantization methods like K-means clustering and its fuzzy version, entropy constrained vector quantization or topological feature maps, and competitive neural networks.
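The complexity-cost framework above generalizes plain k-means vector quantization; as a baseline for comparison, here is a minimal k-means codebook sketch (without the complexity term, which the paper adds on top). Function names and the fixed iteration count are illustrative choices.

```python
import numpy as np

def kmeans_codebook(X, k, iters=50, seed=0):
    """Plain k-means vector quantization.

    Returns the codebook (k reference vectors) and each point's code index.
    """
    rng = np.random.default_rng(seed)
    codebook = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each point to its nearest reference vector
        codes = np.argmin(((X[:, None] - codebook[None]) ** 2).sum(-1), axis=1)
        # move each reference vector to the mean of its assigned points
        for j in range(k):
            if np.any(codes == j):
                codebook[j] = X[codes == j].mean(axis=0)
    return codebook, codes
```

Encoding a point then means transmitting only its code index; the distortion is the distance to its reference vector, and the paper's complexity cost additionally charges for the number of bits the codebook itself requires.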

14.
In this paper, we propose a novel scheme for efficient content-based medical image retrieval, formalized according to the PAtterns for Next generation DAtabase systems (PANDA) framework for pattern representation and management. The proposed scheme involves block-based low-level feature extraction from images followed by the clustering of the feature space to form higher-level, semantically meaningful patterns. The clustering of the feature space is realized by an expectation–maximization algorithm that uses an iterative approach to automatically determine the number of clusters. Then, the 2-component property of PANDA is exploited: the similarity between two clusters is estimated as a function of the similarity of both their structures and the measure components. Experiments were performed on a large set of reference radiographic images, using different kinds of features to encode the low-level image content. Through this experimentation, it is shown that the proposed scheme can be efficiently and effectively applied for medical image retrieval from large databases, providing unsupervised semantic interpretation of the results, which can be further extended by knowledge representation methodologies.

15.
Traditional image retrieval methods, which use color, shape, and texture features, operate on local image databases. With far more images now available on the Internet, such a large collection contains many types of image information. In this paper, we introduce an intelligent Internet-based image retrieval method that grasps images from the Internet automatically using a web crawler and builds the feature vectors on the local host. The method involves three parts: the capture node, the manage node, and the calculate node. The calculate node has two functions: feature extraction and similarity measurement. According to the results of our experiments, the proposed method is simple to implement and achieves high processing speed and accuracy.

16.
For cloud and snow detection in panchromatic images, a feature-extraction method based on multiple texture features is proposed. First, cloud and snow regions are extracted with an adaptive Otsu threshold segmentation algorithm. Then, multiple texture characteristics of the cloud and snow regions are extracted via the fractal dimension, the gray-level co-occurrence matrix, and the wavelet transform. Finally, a support vector machine (SVM) classifier with a radial basis function kernel performs automatic cloud and snow detection. Experimental results on typical remote-sensing data verify the effectiveness of the proposed algorithm.
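The Otsu segmentation step mentioned above is standard and can be sketched directly (the abstract's "adaptive" variant is not specified, so this is the classical global form only):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the gray level that maximizes the between-class
    variance of the background/foreground split of an 8-bit image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))    # cumulative first moment
    mu_t = mu[-1]                         # global mean gray level
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return int(np.nanargmax(sigma_b))     # level with maximal between-class variance
```

Pixels above the returned level would be taken as candidate cloud/snow regions, to be separated afterwards by the texture features the abstract lists.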

17.
For cloud detection in remote-sensing satellite images, a cloud-detection scheme based on a minimum-support-vector-number classifier is proposed, which addresses traditional classifiers' need for many training samples and their tendency to fall into local optima. Applying this classifier to cloud detection in QuickBird high-resolution remote-sensing images yields a detection accuracy above 99%. Experiments show that, when determining the classifier's internal structural parameters, the support-vector-number-based method, compared with traditional cross-validation, not only accurately predicts the trend of the classifier's generalization performance and thereby establishes the optimal parameter combination, but is also simple to implement and greatly reduces computational complexity. Compared with a traditional BP neural network, the method requires fewer training samples and classifies better.

18.
A novel hierarchical method for finding tracer clouds from weather satellite images is proposed. From the sequence of cloud images, different features such as mean, standard deviation, busyness, and entropy are extracted. Based on these features, clouds are segmented using the k-means clustering algorithm and considering the coldest cloud segment, potential regions for tracer clouds are identified. These regions are represented by a set of features. All such steps are repeated for images taken at three consecutive time instants. Then, simulated annealing is used to establish an association between cloud segments of successive image frames. In this way, several chains of associated cloud regions are found and are ranked using fuzzy reasoning. The method has been tested in several image sequences, and its results are validated by determining cloud motion vector from the associated chains of tracers.

19.
The key to content-based image retrieval lies in extracting image features and encoding them as multi-bit quantized codes. In recent years, content-based image retrieval has described images with low-level visual features, which suffers from the "semantic gap"; moreover, traditional quantization coding uses a randomly generated projection matrix that is independent of the feature data, so quantization accuracy cannot be guaranteed. To address these problems, this paper combines deep learning with iterative quantization and proposes an image retrieval method based on the VGG16 convolutional neural network and Iterative Quantization (ITQ). A VGG16 model pre-trained on a public data set extracts deep-learning-based image features; ITQ trains the hash functions, iteratively approaching the minimum quantization error between the features and the hash codes of the chosen bit length; the resulting hash codes are then used for image retrieval. Using recall, precision, and mean average precision as evaluation metrics, tests on the Caltech256 image library show that the proposed algorithm outperforms other mainstream image retrieval algorithms.
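The ITQ step described above alternates between binarizing the rotated features and solving an orthogonal Procrustes problem for the rotation; a minimal sketch (assuming zero-centered, PCA-reduced features as input, per the standard ITQ setup) is:

```python
import numpy as np

def itq_rotation(V, n_iter=50, seed=0):
    """Iterative Quantization: learn an orthogonal rotation R minimizing the
    quantization error ||sign(V R) - V R||_F for zero-centered features V
    (n x c, c = code length). Returns R and the binary codes."""
    rng = np.random.default_rng(seed)
    c = V.shape[1]
    R, _ = np.linalg.qr(rng.standard_normal((c, c)))   # random orthogonal init
    for _ in range(n_iter):
        B = np.sign(V @ R)                 # fix R: closest binary codes
        U, _, Wt = np.linalg.svd(V.T @ B)  # fix B: orthogonal Procrustes step
        R = U @ Wt
    return R, np.sign(V @ R)
```

Each alternation cannot increase the quantization error, which is the "iteratively approaching the minimum quantization error" behavior the abstract describes; the deep features here would come from the VGG16 backbone.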

20.
Automatic Location of Typhoon Centers Based on Infrared Satellite Cloud Images   Total citations: 1; self-citations: 0; other citations: 1
李妍 《红外》2010,31(3):11-14
Typhoon center location is generally performed manually by meteorological professionals, with little automation and low efficiency. Using a single infrared typhoon cloud image, and drawing on typhoon motion characteristics and synoptic diagnostic principles, this paper establishes a method for extracting the principal direction of cloud motion vectors from infrared satellite cloud images. On this basis, noting that a typhoon's dense overcast region is approximately circular, an optimal objective function constrained by the geometric properties of a circle is constructed for automatic typhoon center location; a closed-form solution is obtained, and automatic location of typhoons both with and without eyes is achieved. The method was applied to automatic center-location simulations on infrared satellite cloud images of Typhoon Haitang (2005) at multiple times. Its location accuracy is high, making it a good technique for automatic typhoon center location.
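The abstract's circle-constrained objective is not spelled out, but its closed-form flavor can be illustrated with the standard algebraic (Kasa) least-squares circle fit, which likewise has an exact linear solution; treat this only as an analogous sketch, not the paper's actual objective.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic least-squares circle fit: writing the circle as
    x^2 + y^2 + a*x + b*y + c = 0, solve the linear system for (a, b, c),
    giving center (-a/2, -b/2) and radius sqrt(cx^2 + cy^2 - c)."""
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - c)
    return cx, cy, r
```

Fitting such a circle to the boundary of the dense overcast region would give a candidate typhoon center in closed form, mirroring the analytical solution the abstract reports.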


