20 similar documents were retrieved; search time: 140 ms.
1.
For large sets of multi-view images acquired under widely varying imaging conditions, this work studies a method that uses local invariant features and their spatial-layout constraints to build a class-specific hypergraph model for non-cooperative target recognition. The method first represents each image as an attribute graph built from selected robust SIFT features, and then proposes a similarity-propagation clustering principle for attribute graphs. Clustering under the constraint of a given F-measure, with an entropy-minimization optimization criterion, iteratively yields the optimal clustering of the attribute-graph sample set for a specific target; the resulting clusters are further reduced to a class-specific hypergraph model whose nodes are non-redundant attribute graphs. Tests on large image sample sets verify the scalability and recognition performance of the model.
2.
Clustering of symbolized time series is an active research topic, and its key issue is measuring similarity between symbolized time series. To address the shortcomings of the traditional Euclidean distance, this paper builds on the LCS (longest common subsequence) measure and proposes the ELCS similarity measure, which overcomes the LCS measure's dependence on the choice of a linear function. Experiments on two classes of data sets show that, compared with other common measures, the proposed measure yields better clustering results.
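The ELCS formula itself is not given in this abstract; the LCS-based similarity it extends can be sketched in Python as follows (a minimal illustration, with the length normalization assumed, not taken from the paper):

```python
def lcs_length(a: str, b: str) -> int:
    """Length of the longest common subsequence, via dynamic programming."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def lcs_similarity(a: str, b: str) -> float:
    """Normalized LCS similarity in [0, 1] (normalization choice assumed)."""
    if not a and not b:
        return 1.0
    return lcs_length(a, b) / max(len(a), len(b))
```

For symbolized series, the inputs would be the symbol strings produced by the discretization step; normalizing by the longer length keeps the score in [0, 1] regardless of series length.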
3.
4.
5.
6.
7.
8.
A new method for multi-feature data association of emitters   Total citations: 1 (self: 1, other: 0)
A new method for multi-feature data association of emitters is proposed. Similarity measures for data association are discussed, a new statistical similarity measure over multiple feature parameters is given, and hierarchical clustering is used to perform the association; the resulting associations are then tested statistically. Computer simulation results confirm the effectiveness of the method.
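The abstract specifies neither the statistical similarity measure nor the clustering details; the general idea can be sketched in Python with a hypothetical per-feature Gaussian similarity and a greedy single-linkage hierarchical clusterer (both forms assumed for illustration, not from the paper):

```python
import math

def emitter_similarity(x, y, sigmas):
    """Hypothetical multi-feature statistical similarity: a product of
    Gaussian kernels, one per feature parameter (scale sigmas assumed)."""
    return math.exp(-sum(((a - b) / s) ** 2 for a, b, s in zip(x, y, sigmas)))

def single_linkage(points, sim, threshold):
    """Greedy agglomerative (hierarchical) clustering: repeatedly merge the
    two most similar clusters until no pair exceeds the similarity threshold."""
    clusters = [[i] for i in range(len(points))]

    def cluster_sim(c1, c2):
        # single linkage: similarity of the closest pair across clusters
        return max(sim(points[i], points[j]) for i in c1 for j in c2)

    while len(clusters) > 1:
        best, pair = -1.0, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                s = cluster_sim(clusters[i], clusters[j])
                if s > best:
                    best, pair = s, (i, j)
        if best < threshold:
            break
        i, j = pair
        clusters[i] += clusters.pop(j)  # j > i, so index i stays valid
    return clusters
```

Here each point would be an emitter's feature-parameter vector (e.g. frequency, pulse width), and clusters that survive the threshold are the associated measurements.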
9.
To achieve accurate target recognition, and starting from the characteristics of infrared images, a type-2 entropy-based fuzzy clustering algorithm with an L_1-space metric is proposed for infrared image segmentation. The algorithm first uses the L_1 metric to measure the distance between a sample point and the maximum and minimum values of each class, replacing the sample-to-cluster-center distance of traditional clustering algorithms; it then obtains upper and lower fuzzy membership functions from the entropy-based fuzzy clustering algorithm and fuses them via type-2 fuzzy logic into a single membership function, for which a weighted type-reduction algorithm is given. Segmentation of real infrared images shows that the algorithm segments infrared images accurately, is highly adaptive and robust, and achieves satisfactory results against complex backgrounds.
10.
A blind image watermarking algorithm in the dual-tree complex wavelet domain based on key entropy   Total citations: 2 (self: 2, other: 0)
A blind digital image watermarking algorithm based on key entropy is designed. First, feature points are extracted from the image with the scale-invariant feature transform (SIFT); second, local invariant circular regions centered on the feature points are constructed and normalized; then, image regions whose entropy exceeds the image's average entropy are selected as key-entropy regions; finally, combining a quantization-modulation strategy with the dual-tree complex wavelet transform (DTCWT), the watermark is embedded into the key-entropy regions. Experimental analysis...
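The entropy used to select "key-entropy" regions is not specified beyond exceeding the image average; a plain Shannon-entropy helper over a region's gray levels (a minimal sketch; the paper may use a different definition):

```python
import math
from collections import Counter

def region_entropy(pixels):
    """Shannon entropy (bits/pixel) of a region's gray-level distribution.
    Regions whose entropy exceeds the image average would be 'key-entropy'
    regions in the scheme described above."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A flat region (one gray level) scores 0 bits; a region with four equally likely gray levels scores 2 bits.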
11.
Image segmentation using association rule features   Total citations: 4 (self: 0, other: 4)
Rushing J.A., Ranganath H., Hinke T.H., Graves S.J. IEEE Transactions on Image Processing, 2002, 11(5): 558-567
A new type of texture feature based on association rules is described. Association rules have been used in applications such as market basket analysis to capture relationships present among items in large data sets. It is shown that association rules can be adapted to capture frequently occurring local structures in images. The frequency of occurrence of these structures can be used to characterize texture. Methods for segmentation of textured images based on association rule features are described. Simulation results using images consisting of man-made and natural textures show that association rule features perform well compared to other widely used texture features. Association rule features are used to detect cumulus cloud fields in GOES satellite images and are found to achieve higher accuracy than other statistical texture features for this problem.
12.
Md. Mahmudur Rahman, Prabir Bhattacharya, Bipin C. Desai. Journal of Visual Communication and Image Representation, 2009, 20(7): 450-462
This paper presents a learning-based unified image retrieval framework to represent images in local visual and semantic concept-based feature spaces. In this framework, a visual concept vocabulary (codebook) is automatically constructed by utilizing self-organizing map (SOM) and statistical models are built for local semantic concepts using probabilistic multi-class support vector machine (SVM). Based on these constructions, the images are represented in correlation and spatial relationship-enhanced concept feature spaces by exploiting the topology preserving local neighborhood structure of the codebook, local concept correlation statistics, and spatial relationships in individual encoded images. Finally, the features are unified by a dynamically weighted linear combination of similarity matching scheme based on the relevance feedback information. The feature weights are calculated by considering both the precision and the rank order information of the top retrieved relevant images of each representation, which adapts itself to individual searches to produce effective results. The experimental results on a photographic database of natural scenes and a bio-medical database of different imaging modalities and body parts demonstrate the effectiveness of the proposed framework.
13.
Vector quantization with complexity costs   Total citations: 2 (self: 0, other: 2)
Buhmann J., Kuhnel H. IEEE Transactions on Information Theory, 1993, 39(4): 1133-1145
Vector quantization is a data compression method by which a set of data points is encoded by a reduced set of reference vectors: the codebook. A vector quantization strategy is discussed that jointly optimizes distortion errors and the codebook complexity, thereby determining the size of the codebook. A maximum entropy estimation of the cost function yields an optimal number of reference vectors, their positions, and their assignment probabilities. The dependence of the codebook density on the data density for different complexity functions is investigated in the limit of asymptotic quantization levels. How different complexity measures influence the efficiency of vector quantizers is studied for the task of image compression. The wavelet coefficients of gray-level images are quantized, and the reconstruction error is measured. The approach establishes a unifying framework for different quantization methods like K-means clustering and its fuzzy version, entropy-constrained vector quantization or topological feature maps, and competitive neural networks.
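The K-means special case mentioned at the end (hard assignments, no complexity cost) can be sketched in plain Python as Lloyd's algorithm (a minimal illustration, not the paper's joint distortion-complexity optimizer):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Lloyd's algorithm: vector quantization with hard assignments and
    a fixed codebook size k (the K-means limit of the framework above)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialize codebook from data points
    for _ in range(iters):
        # assignment step: each point goes to its nearest reference vector
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[j].append(p)
        # update step: move each reference vector to its cluster centroid
        for j, g in enumerate(groups):
            if g:
                centers[j] = tuple(sum(xs) / len(g) for xs in zip(*g))
    return centers
```

Adding a complexity cost, as the paper does, would penalize the codebook size and soften the assignments; this sketch keeps only the distortion term.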
14.
IEEE Transactions on Information Technology in Biomedicine, 2009, 13(4): 442-450
15.
Wenjin Li. Wireless Personal Communications, 2018, 103(2): 1153-1160
Traditional image retrieval methods make use of color, shape, and texture features and operate on a local image database. With far more images now available on the Internet, however, such a large collection contains many types of image information. In this paper, we introduce an intelligent Internet-based image retrieval method that automatically crawls images from the Web and builds their feature vectors on the local host. The method involves three parts: the capture node, the manage node, and the calculate node. The calculate node has two functions: feature extraction and similarity measurement. According to our experimental results, the proposed method is simple to implement and achieves high processing speed and accuracy.
16.
For cloud and snow detection in panchromatic images, this paper proposes a feature-extraction method based on multiple texture features. First, candidate cloud and snow regions are extracted with an adaptive Otsu threshold segmentation algorithm. Then, multiple texture characteristics of the cloud and snow regions are extracted via fractal dimension, the gray-level co-occurrence matrix, and wavelet transforms. Finally, a support vector machine (SVM) classifier with a radial basis function kernel performs automatic cloud/snow detection. Experimental results on typical remote-sensing data verify the effectiveness of the algorithm.
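The Otsu thresholding step in the pipeline above can be sketched in pure Python over a gray-level histogram (a minimal illustration of Otsu's criterion, not the paper's exact adaptive variant):

```python
def otsu_threshold(gray):
    """Otsu's method: pick the threshold that maximizes the between-class
    variance of the gray-level histogram (8-bit values assumed)."""
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    total = len(gray)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]          # pixels at or below t (class 0)
        if w0 == 0:
            continue
        w1 = total - w0        # pixels above t (class 1)
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Pixels above the returned threshold would be the bright cloud/snow candidates handed to the texture-feature stage.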
17.
For cloud detection in remote-sensing satellite images, a detection scheme based on a minimum-support-vector-number classifier is proposed, addressing traditional classifiers' need for many training samples and their tendency to fall into local optima. Applied to QuickBird high-resolution remote-sensing images, the classifier achieves a detection accuracy above 99%. Experiments show that, in determining the classifier's internal structural parameters, the support-vector-number-based method, compared with traditional cross-validation, not only accurately predicts the trend of the classifier's generalization performance and thus identifies the optimal parameter combination, but is also simple to implement and greatly reduces computational complexity. Compared with a traditional BP neural network, the method requires fewer training samples and classifies better.
18.
Mandal A.K., Pal S., De A.K., Mitra S. IEEE Transactions on Geoscience and Remote Sensing, 2005, 43(4): 813-818
A novel hierarchical method for finding tracer clouds from weather satellite images is proposed. From the sequence of cloud images, different features such as mean, standard deviation, busyness, and entropy are extracted. Based on these features, clouds are segmented using the k-means clustering algorithm, and, considering the coldest cloud segment, potential regions for tracer clouds are identified. These regions are represented by a set of features. All such steps are repeated for images taken at three consecutive time instants. Then, simulated annealing is used to establish an association between cloud segments of successive image frames. In this way, several chains of associated cloud regions are found and are ranked using fuzzy reasoning. The method has been tested on several image sequences, and its results are validated by determining cloud motion vectors from the associated chains of tracers.
19.
The key to content-based image retrieval lies in extracting image features and quantizing them into multi-bit codes. In recent years, content-based image retrieval has described images with low-level visual features, which suffers from the "semantic gap"; moreover, traditional quantization coding uses a randomly generated projection matrix that is independent of the feature data, so quantization accuracy cannot be guaranteed. To address these problems, this paper combines deep learning with iterative quantization and proposes an image retrieval method based on the VGG16 convolutional neural network and Iterative Quantization (ITQ). A VGG16 model pre-trained on a public data set extracts deep image features; ITQ trains the hash functions, iteratively approaching the minimum quantization error between the features and the hash codes of the chosen bit length; finally, the resulting hash codes are used for image retrieval. Using recall, precision, and mean average precision as evaluation metrics on the Caltech256 image library, the experimental results show that the proposed algorithm outperforms other mainstream image retrieval algorithms.
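The ITQ step described above alternates between binarizing the rotated features and solving an orthogonal Procrustes problem for the rotation; a minimal NumPy sketch (variable names illustrative; assumes zero-centered, dimensionality-reduced features V):

```python
import numpy as np

def itq(V, n_iter=50, seed=0):
    """Iterative Quantization: alternately fix binary codes B = sign(V R)
    and update the rotation R to minimize the quantization error
    ||B - V R||_F (a sketch of the alternating loop, not the full pipeline)."""
    rng = np.random.default_rng(seed)
    c = V.shape[1]
    # random orthogonal initialization of the rotation
    R, _ = np.linalg.qr(rng.normal(size=(c, c)))
    for _ in range(n_iter):
        B = np.where(V @ R >= 0, 1.0, -1.0)      # fix R, binarize
        # fix B, solve orthogonal Procrustes via SVD of B^T V
        U, _, Vt = np.linalg.svd(B.T @ V)
        R = (U @ Vt).T
    codes = np.where(V @ R >= 0, 1, 0)           # final binary hash codes
    return codes, R
```

In the paper's pipeline, V would be the (centered, PCA-projected) VGG16 features and the returned codes would index the image database for Hamming-distance retrieval.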
20.
Automatic typhoon center location from infrared satellite cloud images   Total citations: 1 (self: 0, other: 1)
Typhoon center location is generally performed manually by meteorological professionals; the degree of automation is low and efficiency is poor. Using a single infrared typhoon cloud image, and based on typhoon motion characteristics and synoptic diagnostic principles, this paper establishes a method for extracting the dominant direction of cloud motion vectors from infrared satellite cloud images. On this basis, since the dense central cloud region of a typhoon is approximately circular, an optimal objective function for automatic typhoon center location constrained by the geometry of a circle is constructed and solved analytically, achieving automatic location of both eyed and eyeless typhoons. The method was used to simulate automatic center location on infrared satellite cloud images of Typhoon Haitang (2005) at multiple times. Its location accuracy is high, and it can serve as a sound technique for automatic typhoon center location.