Foundation items: National Natural Science Foundation of China (61806033); Western Project of the National Social Science Fund of China (18XGL013).

Received: 2021-11-09
Revised: 2021-12-29

Multi-Modal Fine-Grained Retrieval Based on Modal Specific and Modal Shared Feature Information
LI Pei, CHEN Qiaosong, CHEN Pengchang, DENG Xin, WANG Jin, PIAO Changhao. Multi-Modal Fine-Grained Retrieval Based on Modal Specific and Modal Shared Feature Information[J]. Computer Engineering, 2022, 48(11): 62.
Authors:LI Pei  CHEN Qiaosong  CHEN Pengchang  DENG Xin  WANG Jin  PIAO Changhao
Affiliation: 1. College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; 2. Chongqing Key Laboratory of Data Engineering and Visual Computing, Chongqing 400065, China
Abstract: The goal of cross-modal retrieval is that, given any sample as a query, the system retrieves the samples of every modality related to that query. Multi-modal fine-grained retrieval further requires that the number of modalities be greater than two and that samples be classified at the fine-grained sub-category level, which raises two difficulties: the heterogeneous gap between multi-modal data and the small feature differences between fine-grained samples. This paper introduces the concepts of modal-specific features and modal-shared features and proposes a multi-modal fine-grained retrieval framework, MS2Net. Branch networks and a backbone network extract the modal-specific and modal-shared features of each modality's data, and the two kinds of features are then fully fused through a Multi-Modal Feature fusion Module (MMFM). By exploiting both the information unique to each modality and the commonalities and relationships between different modalities, the semantic information contained in the high-dimensional space vectors is enriched. In addition, for the multi-modal fine-grained retrieval scenario, this paper proposes a multi-center loss based on center loss: it introduces intra-class centers to gather samples of the same category and the same modality, and then indirectly gathers samples of the same category but different modalities by aggregating these intra-class centers. This simultaneously reduces the heterogeneous gap and the semantic gap between samples and strengthens the model's ability to cluster high-dimensional space vectors. Experimental results for one-to-one and one-to-multi modal retrieval on the public FG-Xmedia dataset show that, compared with the FGCrossNet method, MS2Net improves the mAP metric by 65% and 48%, respectively.
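The multi-center loss described above can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the per-(class, modality) center dictionary, and the weighting parameter `lam` are all assumptions made for illustration. One term pulls each embedding toward the center of its own class-and-modality pair (as in the original center loss, but with modality-specific centers), and a second term pulls the per-modality centers of each class toward their mean, which indirectly aligns same-class samples across modalities:

```python
import numpy as np

def multi_center_loss(features, labels, modalities, centers, lam=1.0):
    """Illustrative multi-center loss (hypothetical signature).

    features   : list of embedding vectors (np.ndarray)
    labels     : list of class labels, one per feature
    modalities : list of modality tags, one per feature
    centers    : dict mapping (class, modality) -> center vector
    lam        : assumed weight balancing the two terms
    """
    # Term 1: pull each sample toward its own (class, modality) center.
    intra = 0.0
    for f, y, m in zip(features, labels, modalities):
        diff = f - centers[(y, m)]
        intra += float(np.dot(diff, diff))
    intra /= len(features)

    # Term 2: pull the per-modality centers of each class toward their
    # mean, indirectly gathering same-class samples across modalities.
    inter = 0.0
    for y in {y for (y, _) in centers}:
        cls_centers = [c for (yy, _), c in centers.items() if yy == y]
        mean_c = np.mean(cls_centers, axis=0)
        inter += sum(float(np.dot(c - mean_c, c - mean_c)) for c in cls_centers)

    return intra + lam * inter
```

When the modality centers of a class coincide and every sample sits on its center, both terms vanish; separating the modality centers of one class increases only the second term, which is the mechanism by which aggregating intra-class centers narrows the heterogeneous gap.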
Keywords:information retrieval  multi-modal retrieval  fine-grained retrieval  multi-modal representation learning  deep learning  