Similar Documents
Found 20 similar documents (search time: 24 ms)
1.
郭一村  陈华辉 《计算机应用》2021,41(4):1106-1112
In current large-scale data retrieval tasks, learning-based hashing methods can learn compact binary codes that save storage space while allowing fast similarity computation in Hamming space, so approximate nearest neighbor retrieval commonly relies on hashing to build fast retrieval mechanisms. Most existing hashing methods use offline learning models trained in batch mode; in large-scale streaming-data environments they cannot adapt to data changes, which degrades retrieval efficiency. To address this, online hashing methods learn adaptive hash functions, learning continuously as data arrive and applying the result to similarity retrieval in real time. This survey first explains the basic principles of learning to hash and the intrinsic requirements for realizing online hashing; it then introduces the different learning modes of online hashing from the perspectives of streaming-data reading, learning, and model updating under online conditions; next, it divides online learning algorithms into six categories: passive-aggressive-based, matrix-factorization-based, unsupervised-clustering-based, similarity-supervised, mutual-information-based, and codebook-supervised, analyzing the strengths, weaknesses, and characteristics of each; finally, it summarizes and discusses future directions for online hashing.
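The appeal of binary codes mentioned above is that Hamming-space similarity is essentially free to compute. A minimal, method-agnostic sketch (not taken from any paper in this list; codes stored as Python ints):

```python
def hamming_distance(a: int, b: int) -> int:
    """Hamming distance between two binary hash codes stored as ints."""
    return bin(a ^ b).count("1")

def rank_by_hamming(query: int, codes: list) -> list:
    """Return database indices sorted by Hamming distance to the query code."""
    return sorted(range(len(codes)), key=lambda i: hamming_distance(query, codes[i]))

# toy 8-bit codes: the second differs from the query in 1 bit, the third in 8
codes = [0b10110100, 0b10110101, 0b01001011]
order = rank_by_hamming(0b10110100, codes)
print(order)  # nearest first: [0, 1, 2]
```

Real systems pack codes into machine words and use hardware popcount, but the XOR-then-count pattern is the same.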

2.
Li  Yannuan  Wan  Lin  Fu  Ting  Hu  Weijun 《Multimedia Tools and Applications》2019,78(17):24431-24451

In this paper, we propose a novel hash code generation method based on convolutional neural networks (CNNs), called piecewise supervised deep hashing (PSDH), which directly uses a latent layer and the output layer of a classification network to generate a two-segment hash code for every input image. The first part of the hash code carries class information; the second part carries feature information. The proposed method is a point-wise approach that is easy to implement and works very well for image retrieval. In particular, it performs excellently when searching for pictures with similar features: the more similar two images are in color, geometric information, and so on, the higher they rank in the search results. In contrast to previously proposed hashing methods, we keep the whole-code search method and additionally put forward a piecewise hash code search method. Experiments on three public datasets demonstrate the superior performance of PSDH over several state-of-the-art methods.


3.
Learning-based hashing methods are becoming the mainstream for scalable approximate multimedia retrieval. They consist of two main components: hash codes learning for training data and hash functions learning for new data points. Tremendous efforts have been devoted to designing novel methods for these two components, i.e., supervised and unsupervised methods for learning hash codes, and different models for inferring hashing functions. However, there is little work integrating supervised and unsupervised hash codes learning into a single framework. Moreover, the hash function learning component is usually based on hand-crafted visual features extracted from the training images. The performance of a content-based image retrieval system crucially depends on the feature representation, and such hand-crafted visual features may degrade the accuracy of the hash functions. In this paper, we propose a semi-supervised deep learning hashing (DLH) method for fast multimedia retrieval. More specifically, in the first component, we utilize both visual and label information to learn an optimal similarity graph that can more precisely encode the relationship among training data, and then generate the hash codes based on the graph. In the second stage, we apply a deep convolutional network to simultaneously learn a good multimedia representation and a set of hash functions. Extensive experiments on five popular datasets demonstrate the superiority of our DLH over both supervised and unsupervised hashing methods.

4.
State-of-the-art hashing methods, such as kernelised locality-sensitive hashing and spectral hashing, have high algorithmic complexity for building the hash codes and tables. Our observation from existing hashing methods is that putting two dissimilar data points into the same hash bucket only reduces the efficiency of the hash table; it does not hurt query accuracy. Putting two similar data points into different hash buckets, however, reduces the correctness (i.e. query accuracy) of a hashing method. It is therefore much more important for a good hashing method to ensure that similar data points have a high probability of landing in the same bucket than to consider relations between dissimilar data points. On the other hand, attracting similar data points to the same hash bucket naturally suppresses dissimilar data points from being put into the same bucket. From this locality-preserving observation, we propose a new hashing method, called locality-preserving hashing, which builds the hash codes and tables with much lower algorithmic complexity. Experimental results show that the proposed method is very competitive among the state of the art in training time on large datasets, with comparable or even better query accuracy.

5.
Current deep cross-modal hashing retrieval algorithms cannot retrieve data from categories outside the training data well, and relaxing the discretization constraint on hash codes leads to suboptimal solutions. To address these problems, an adaptive deep cross-modal incremental hashing retrieval algorithm is proposed that keeps the hash codes of the training data unchanged and directly learns hash codes for data of new categories. The hash codes are mapped into a latent subspace to preserve the similarity and dissimilarity among multi-modal data, and a discrete-constraint-preserving cross-modal optimization algorithm is proposed to solve for the optimal hash codes. In addition, since current deep hashing algorithms lack an effective way to assess complexity, a complexity analysis method based on neural network neuron update operations is proposed to compare the complexity of deep hashing algorithms. Experimental results on public datasets show that the proposed algorithm trains faster than the compared algorithms while achieving higher retrieval accuracy.

6.
Objective: Visual retrieval needs to accurately and efficiently retrieve the most relevant visual content from large image or video datasets, but because of the large data volume and high feature dimensionality, existing methods struggle to guarantee both fast retrieval and good retrieval quality. Method: For high-dimensional visual retrieval over image and video data, we propose weighted semantic locality-sensitive hashing (WSLSH). The algorithm uses a two-level visual vocabulary to perform a secondary partition of the reference feature space, and within each subspace indexes features precisely with weighted semantic locality-sensitive hashing. It also designs dynamic variable-length hash codes to reduce the number of hash tables while preserving retrieval performance. In addition, to counter the random instability of locality-sensitive hashing (LSH), statistics reflecting the semantics of the reference feature space are incorporated into the LSH functions, and a simple projection semantic hash function is designed to keep retrieval performance stable. Results: Experiments on the Holidays, Oxford5k and DataSetB datasets show that WSLSH achieves the shortest average retrieval time of 0.03425 s on DataSetB; with 64-bit codes, its mean average precision (mAP) on the three datasets improves by 1.2%–32.6%, 1.7%–19.1% and 2.6%–28.6% respectively, giving it an advantage over several recent unsupervised hashing methods. Conclusion: LSH is improved by secondary space partitioning, weighting the hash index counts of reference features, dynamically using variable-length hash codes, and the proposed simple projection semantic hash function. The resulting WSLSH algorithm retrieves faster than existing work and, with long codes, achieves superior performance.
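For context, classical random-hyperplane LSH — the randomly unstable baseline that WSLSH sets out to stabilize — can be sketched as below. The bucket layout and all parameter names are illustrative, not taken from the paper:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

def make_lsh(dim: int, n_bits: int):
    """Random-hyperplane LSH: one sign bit per random projection."""
    planes = rng.standard_normal((n_bits, dim))
    def key(x):
        # bucket key = tuple of sign bits of the projections
        return tuple((planes @ x > 0).astype(int))
    return key

key = make_lsh(dim=4, n_bits=8)
table = defaultdict(list)
data = rng.standard_normal((100, 4))
for i, x in enumerate(data):
    table[key(x)].append(i)

# a query is matched only against the members of its own bucket
q = data[0] + 0.01 * rng.standard_normal(4)
candidates = table[key(q)]
```

Because the hyperplanes are drawn purely at random, an unlucky draw yields uneven buckets and unstable recall; WSLSH's semantic weighting and two-level partition address exactly that.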

7.
Hash representations save storage space and speed up retrieval, so cross-modal retrieval based on hashing has attracted wide attention. Most supervised cross-modal hashing methods make hash codes semantically discriminative through regression or graph constraints, but ignore the semantic discriminability of the hash functions themselves, so new samples cannot obtain semantics-preserving hash codes, limiting retrieval accuracy. To learn semantics-preserving hash codes and hash functions simultaneously, a semantics-preserving hashing method for cross-modal retrieval is proposed. Two modality-specific hash functions are introduced to map samples from the different modality spaces into a common Hamming space. To make both the hash codes and the hash functions semantically discriminative, a semantic structure graph is introduced and, combined with the idea of local structure preservation, the learning of hash codes and hash functions is fused into a single framework so that the two are optimized jointly. Extensive experiments on three multi-modal datasets demonstrate the effectiveness and superiority of the method for cross-modal retrieval.

8.
Objective: Deep learning based image hashing retrieval is a hot research topic in image retrieval. Existing deep hashing methods neglect the guiding role of deep image features in training the deep hash functions and, because they rely on relaxed optimization, cannot effectively handle the large binary quantization error that leads to suboptimal hash codes. To address this, a self-supervised deep discrete hashing (SSDDH) method is proposed. Method: The deep feature matrix extracted by a convolutional neural network and the image label matrix are used to compute binary hash codes, which serve as self-supervised information to guide the training of the deep hash functions. A pairwise loss function is constructed that simultaneously preserves the similarity among continuous hash codes and between continuous and binary hash codes, and a discrete optimization algorithm solves for the hash codes, effectively reducing the binary quantization error. Results: The method was tested on three public datasets and compared with other hashing algorithms. On CIFAR-10, NUS-WIDE (web image dataset from National University of Singapore) and Flickr, it achieves the highest retrieval precision, exceeding the second-best algorithm, DPSH (deep pairwise-supervised hashing), by 3%, 3% and 1% respectively. Conclusion: The proposed self-supervised deep discrete hashing method effectively exploits deep feature and image label information to guide the training of deep hash functions, and effectively reduces binary quantization error. Experimental results show that SSDDH outperforms comparable algorithms in mean average precision and can effectively perform image retrieval.

9.
Existing deep learning based hashing methods for image retrieval usually use a fully connected layer as the hash coding layer and output all hash bits in parallel. This treats every bit purely as information coding of the image and ignores the correlation among the bits and the redundancy of the whole code, limiting the network's coding performance. Based on the principle of code checking, this paper proposes a serial-coding deep hashing method, the serial hashing network (SHNet). Unlike conventional hash coding, SHNet arranges the hash coding layers serially: while generating a code, it checks the earlier, serially generated part of the code, fully exploiting the correlation and redundancy among bits to produce hash codes that are more informative, more compact and more discriminative. Using mAP as the retrieval performance metric, the proposed method is compared with current mainstream hashing methods. Experimental results show that, at different code lengths, its mAP on the three datasets CIFAR-10, ImageNet and NUS-WIDE exceeds that of current mainstream deep hashing algorithms, demonstrating its effectiveness.

10.
Hashing algorithms are widely used to solve large-scale image retrieval problems. Among existing hashing algorithms, unsupervised ones are widely applied because they require no semantic information about the images in the database. Shift-invariant kernel locality-sensitive hashing (SKLSH) is a representative unsupervised hashing algorithm. It generates hash functions randomly, without considering their actual retrieval quality, and may therefore produce hash functions that perform poorly in retrieval. This paper proposes the bit selection hashing (BSH) algorithm, which selects among the hash functions generated by SKLSH according to their actual retrieval quality. The selection criteria cover three aspects of a hash function's behavior: similarity consistency, information content, and bit independence. BSH then uses a greedy selection method to find the best combination of hash functions. BSH and other representative hashing algorithms were compared on two real image databases. Experimental results show that BSH clearly improves retrieval accuracy over the original SKLSH and the other hashing algorithms.

11.
Robust and secure image hashing
Image hash functions find extensive applications in content authentication, database search, and watermarking. This paper develops a novel algorithm for generating an image hash based on Fourier transform features and controlled randomization. We formulate the robustness of image hashing as a hypothesis testing problem and evaluate the performance under various image processing operations. We show that the proposed hash function is resilient to content-preserving modifications, such as moderate geometric and filtering distortions. We introduce a general framework to study and evaluate the security of image hashing systems. Under this new framework, we model the hash values as random variables and quantify their uncertainty in terms of differential entropy. Using this security framework, we analyze the security of the proposed schemes and several existing representative methods for image hashing. We then examine the security versus robustness tradeoff and show that the proposed hashing methods can provide excellent security and robustness.

12.
We introduce a method that enables scalable similarity search for learned metrics. Given pairwise similarity and dissimilarity constraints between some examples, we learn a Mahalanobis distance function that captures the examples' underlying relationships well. To allow sublinear time similarity search under the learned metric, we show how to encode the learned metric parameterization into randomized locality-sensitive hash functions. We further formulate an indirect solution that enables metric learning and hashing for vector spaces whose high dimensionality makes it infeasible to learn an explicit transformation over the feature dimensions. We demonstrate the approach applied to a variety of image data sets, as well as a systems data set. The learned metrics improve accuracy relative to commonly used metric baselines, while our hashing construction enables efficient indexing with learned distances and very large databases.
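The core trick of encoding a learned metric into LSH is that a Mahalanobis matrix M = GᵀG can be folded into ordinary hyperplane hashing by hashing the transformed vector Gx instead of x. A hedged sketch under that standard factorization (the Cholesky step and all names here are illustrative, not the paper's exact construction):

```python
import numpy as np

rng = np.random.default_rng(1)

def mahalanobis_lsh(M: np.ndarray, n_bits: int):
    """Hash h(x) = sign bits of R @ (G @ x), where M = G^T G.

    Collision probability then tracks similarity under the learned
    metric rather than the plain Euclidean one.
    """
    G = np.linalg.cholesky(M).T            # M = G^T G
    R = rng.standard_normal((n_bits, M.shape[0]))
    return lambda x: ((R @ (G @ x)) > 0).astype(int)

# toy "learned" metric: stretch the first coordinate 3x
M = np.diag([9.0, 1.0])
h = mahalanobis_lsh(M, n_bits=16)
code = h(np.array([1.0, 0.0]))             # 16-bit code for one point
```

Since xᵀMy = (Gx)ᵀ(Gy), any off-the-shelf cosine/hyperplane LSH machinery can be reused unchanged on the transformed vectors.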

13.
To address the poor recommendation performance caused by massive, high-dimensional data in blockchain environments, the locality-sensitive hashing algorithm is optimized to reduce the extra computation and storage overhead it incurs during nearest-neighbor search. Principal components of the data distribution are used to reduce the poorly captured projection directions of traditional LSH, and the projection-vector weights are quantized to reduce the number of hash tables and hash functions; the intervals of the hash buckets are adjusted, and the query result set is further refined according to the number of collisions, so as to...

14.
Learning-based hashing methods are becoming the mainstream for large scale visual search. They consist of two main components: hash codes learning for training data and hash functions learning for encoding new data points. The performance of a content-based image retrieval system crucially depends on the feature representation, and Convolutional Neural Networks (CNNs) have proven effective for extracting high-level visual features for large scale image retrieval. In this paper, we propose a Multiple Hierarchical Deep Hashing (MHDH) approach for large scale image retrieval. MHDH integrates multiple hierarchical non-linear transformations with a hidden neural network layer for hashing code generation. The learned binary codes represent potential concepts that connect to class labels. Extensive experiments on two popular datasets demonstrate the superiority of our MHDH over both supervised and unsupervised hashing methods.

15.
With the advance of internet and multimedia technologies, large-scale multi-modal representation techniques such as cross-modal hashing are increasingly in demand for multimedia retrieval. In cross-modal hashing, three essential problems must be seriously considered. The first is that an effective cross-modal relationship should be learned from training data with scarce label information. The second is that appropriate weights should be assigned to different modalities to reflect their importance. The last is the scalability of the training process, which is usually ignored by previous methods. In this paper, we propose Multi-graph Cross-modal Hashing (MGCMH), which comprehensively considers these three points. MGCMH is an unsupervised method that integrates multi-graph learning and hash function learning into a joint framework to learn a unified hash space for all modalities. In MGCMH, different modalities are assigned proper weights for the generation of the multi-graph and the hash codes respectively. As a result, a more precise cross-modal relationship can be preserved in the hash space. The Nyström approximation approach is then leveraged to construct the graphs efficiently. Finally, an alternating learning algorithm is proposed to jointly optimize the modality weights, hash codes and functions. Experiments conducted on two real-world multi-modal datasets demonstrate the effectiveness of our method in comparison with several representative cross-modal hashing methods.

16.
Binary code is a special representation of data. With the binary format, a hashing framework can be built and a large amount of data can be indexed for fast search and retrieval. Many supervised hashing approaches learn hash functions from data with supervised information to retrieve semantically similar samples; this supervised information can be generated from external data other than pixels. Conventional supervised hashing methods assume a fixed relationship between the Hamming distance and the similar (dissimilar) labels. This assumption imposes an overly rigid requirement on learning and makes similar and dissimilar pairs indistinguishable. In this paper, we adopt a large margin principle and define a Hamming margin to formulate this relationship. At the same time, inspired by the support vector machine, which achieves strong generalization capability by maximizing the margin of its decision surface, we propose a binary hash function in the same manner. A loss function is constructed corresponding to these two kinds of margins and is minimized by a block coordinate descent method. Experiments show that our method achieves better performance than state-of-the-art hashing methods.

17.
A visual simultaneous localization and mapping (SLAM) system usually contains a relocalization module to recover the camera pose after tracking failure. The core of this module is to establish correspondences between map points and key points in the image, which is typically achieved by local image feature matching. Since recently emerged binary features have orders of magnitude higher extraction speed than traditional features such as the scale invariant feature transform, they can be applied to develop a real-time relocalization module once an efficient method of binary feature matching is provided. In this paper, we propose such a method by indexing binary features with hashing. Unlike the popular locality sensitive hashing, the proposed method constructs the hash keys by an online learning process instead of pure randomness. Specifically, the hash keys are trained with the aim of attaining uniform hash buckets and high collision rates for matched feature pairs, which makes the method more efficient at approximate nearest neighbor search. By distributing the online learning into the simultaneous localization and mapping process, we successfully apply the method to SLAM relocalization. Experiments show that camera poses can be recovered in real time even when there are tens of thousands of landmarks in the map.
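The bucketing idea behind such binary-feature indexing — using a small subset of a descriptor's bits as the hash key so that matching only scans one bucket — can be sketched as follows. Here the bit positions are fixed up front rather than learned online, a deliberate simplification of the paper's approach; all names are illustrative:

```python
from collections import defaultdict

def make_key(bit_positions):
    """Hash key = the descriptor's bits at the chosen positions."""
    def key(desc: int) -> int:
        return sum(((desc >> p) & 1) << i for i, p in enumerate(bit_positions))
    return key

key = make_key(bit_positions=[0, 3, 5, 7])   # 4-bit key -> up to 16 buckets
table = defaultdict(list)
descriptors = [0b10110100, 0b10110101, 0b01001011]   # toy 8-bit descriptors
for i, d in enumerate(descriptors):
    table[key(d)].append(i)

# a query descriptor is matched only against its own bucket
candidates = table[key(0b10110101)]
```

The online-learning step in the paper effectively chooses the bit positions so that buckets stay uniform and truly matching descriptor pairs still collide.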

18.
Zhou  Wenhua  Liu  Huawen  Lou  Jungang  Chen  Xin 《Applied Intelligence》2022,52(13):14724-14738
Locality sensitive hashing (LSH), one of the most popular hashing techniques, has attracted considerable attention for nearest neighbor search in the field of image...

19.
Hash graph based semi-supervised learning and its application to image segmentation
张晨光  李玉鑑 《自动化学报》2010,36(11):1527-1533
Graph based semi-supervised learning (GSL) methods must spend a large amount of time constructing a nearest-neighbor graph and are therefore slow. This paper proposes a hash graph based semi-supervised learning (HGSL) method, which performs nearest-neighbor search with locality-sensitive hash functions and thus effectively reduces the graph-construction time required by GSL. Image segmentation experiments show that the method both achieves better segmentation, improving segmentation accuracy by about 0.47%, and greatly reduces segmentation time: for a 300 pixel × 800 pixel image, for example, the segmentation time drops to about 28.5% of that required by GSL.

20.
《Pattern recognition》2014,47(2):748-757
Recently hashing has become attractive in large-scale visual search, owing to its theoretical guarantee and practical success. However, most of the state-of-the-art hashing methods can only employ a single feature type to learn hashing functions. Related research on image search, clustering, and other domains has proved the advantages of fusing multiple features. In this paper we propose a novel multiple feature kernel hashing framework, where hashing functions are learned to preserve certain similarities with linearly combined multiple kernels corresponding to different features. The framework is not only compatible with general types of data and diverse types of similarities indicated by different visual features, but also general for both supervised and unsupervised scenarios. We present efficient alternating optimization algorithms to learn both the hashing functions and the optimal kernel combination. Experimental results on three large-scale benchmarks CIFAR-10, NUS-WIDE and a-TRECVID show that the proposed approach can achieve superior accuracy and efficiency over state-of-the-art methods.
