
Fund projects: National Natural Science Foundation of China; National Key Research and Development Program of China; Natural Science Foundation of Jiangsu Province; China Postdoctoral Science Foundation
Received: June 7, 2022    Revised: January 15, 2023

Unsupervised distillation hashing image retrieval method based on equivalent constraint clustering
Citation: Miao Zhuang, Wang Yapeng, Li Yang, Zhang Rui, Wang Jiabao. Unsupervised distillation hashing image retrieval method based on equivalent constraint clustering [J]. Application Research of Computers, 2023, 40(2).
Authors: Miao Zhuang, Wang Yapeng, Li Yang, Zhang Rui, Wang Jiabao
Affiliation: Army Engineering University of PLA (all authors)
Abstract: In order to reduce the noise ratio of pseudo labels in unsupervised deep hash learning, this paper proposed a novel two-stage unsupervised distillation hashing method based on equivalent constraint clustering (UDH-ECC). The main idea of this method was to utilize a robust clustering algorithm to annotate unlabeled images, which produced better soft pseudo labels for hash learning. To be specific, in the first stage, it selected a pre-trained teacher network to extract deep image features. It then clustered the deep image features with the proposed equivalent constraint clustering algorithm, and assigned hard pseudo labels to the unlabeled images according to the clustering results. Benefiting from the high accuracy of the hard pseudo labels, this method fine-tuned the teacher network to further adapt it to the unlabeled dataset. In the second stage, it used the predictive probability distribution produced by the fine-tuned teacher network as soft pseudo labels to train the student hashing network. To further reduce the noise in the soft pseudo labels, this paper proposed a distillation hashing method that converts noisy labels into clean hash codes. This paper compared the proposed method with twelve other state-of-the-art methods on three public datasets. The proposed method outperformed the other methods by a large margin: its retrieval precision was on average 12.7% higher than the TBH method on CIFAR-10, 1.0% higher than the DistillHash method on FLICKR25K, and 16.9% higher than the ETE-GAN method on EuroSAT. Comprehensive experimental results show that the proposed method not only achieves high retrieval performance but also adapts well to a variety of datasets.
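The abstract only describes the equivalent constraint clustering step at a high level. As a rough illustration of the underlying idea, balancing cluster sizes while assigning pseudo labels, the following toy sketch (function and variable names are hypothetical, not the paper's) runs a k-means-style loop in which every cluster is greedily filled to the same capacity:

```python
import numpy as np

def equal_size_clustering(features, k, n_iters=20, seed=0):
    """Toy capacity-constrained k-means: each of the k clusters
    receives exactly n/k points (n must be divisible by k)."""
    n = len(features)
    assert n % k == 0, "n must be divisible by k for equal-size clusters"
    cap = n // k
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(n, size=k, replace=False)]
    labels = np.zeros(n, dtype=int)
    for _ in range(n_iters):
        # Distance from every point to every center: shape (n, k).
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        # Visit points in order of their best achievable distance and
        # assign each to its nearest cluster that still has room.
        counts = np.zeros(k, dtype=int)
        for i in np.argsort(d.min(axis=1)):
            for c in np.argsort(d[i]):
                if counts[c] < cap:
                    labels[i] = c
                    counts[c] += 1
                    break
        # Recompute centers from the balanced assignment.
        centers = np.stack([features[labels == c].mean(axis=0)
                            for c in range(k)])
    return labels
```

The equal-capacity constraint is what distinguishes this from plain k-means: it prevents degenerate clusterings in which most points collapse into a few clusters, which would make the resulting hard pseudo labels much noisier.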
Keywords: unsupervised hashing; image retrieval; clustering; knowledge distillation; self-supervised learning
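The abstract does not give the paper's loss formulation for training the student hashing network from the teacher's soft pseudo labels. As a minimal sketch, assuming a standard temperature-scaled KL distillation objective plus a common quantization penalty on the continuous hash outputs (both conventional choices, not necessarily the paper's exact losses; all names hypothetical):

```python
import numpy as np

def softmax(z, t=1.0):
    """Row-wise softmax at temperature t (numerically stabilized)."""
    z = z / t
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_hash_loss(teacher_logits, student_logits, student_hash, t=4.0):
    """KL(teacher || student) at temperature t, plus a quantization
    penalty pushing continuous hash outputs toward {-1, +1}."""
    p = softmax(teacher_logits, t)   # teacher's soft pseudo labels
    q = softmax(student_logits, t)   # student's predictions
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=1).mean()
    quant = np.mean((np.abs(student_hash) - 1.0) ** 2)
    # t**2 rescaling keeps gradient magnitudes comparable across temperatures.
    return kl * t**2 + quant
```

A higher temperature softens the teacher's distribution, so the student learns inter-class similarity structure rather than only the hard label, which is one standard way soft pseudo labels can suppress annotation noise.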