Similar Documents
 20 similar documents found (search time: 562 ms)
1.
In this paper, we consider the problem of multimedia document (MMD) semantics understanding and content-based cross-media retrieval. An MMD is a set of media objects of different modalities that carry the same semantics, and content-based cross-media retrieval is a new kind of retrieval in which the query examples and search results can be of different modalities. Two levels of manifolds are learned to explore the relationships among the data, at the MMD level and at the media object level respectively. We first construct a Laplacian media object space to represent the media objects of each modality and an MMD semantic graph to learn the MMD semantic correlations. The characteristics of media objects propagate along the MMD semantic graph, and an MMD semantic space is constructed in which cross-media retrieval is performed. Different methods are proposed to utilize relevance feedback, and experiments show that the proposed approaches are effective.
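The pipeline outlined in this abstract starts from a graph Laplacian over the media objects of one modality. The sketch below shows that step only, assuming a k-NN affinity graph with a Gaussian kernel; the feature matrix, k, and sigma are illustrative placeholders rather than the authors' settings.

# Minimal sketch: build a k-NN affinity graph over media objects of one
# modality and compute its (unnormalized) graph Laplacian.  Parameters
# (k, sigma) and the random features are illustrative assumptions only.
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))          # 200 media objects, 64-d features (toy data)

k, sigma = 10, 1.0
# k-NN graph with Euclidean distances as edge values.
D = kneighbors_graph(X, n_neighbors=k, mode="distance").toarray()
W = np.exp(-D**2 / (2 * sigma**2)) * (D > 0)   # Gaussian affinities on k-NN edges only
W = np.maximum(W, W.T)                         # symmetrize

L = np.diag(W.sum(axis=1)) - W                 # unnormalized Laplacian L = D - W
# Low-dimensional Laplacian embedding: eigenvectors of the smallest eigenvalues.
eigvals, eigvecs = np.linalg.eigh(L)
embedding = eigvecs[:, 1:11]                   # skip the trivial constant eigenvector
print(embedding.shape)                         # (200, 10)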

2.
黄育, 张鸿. 《计算机应用》, 2017, 37(4): 1061-1064
To address the problems that data of different modalities express the same semantic topic differently and that traditional cross-media retrieval algorithms ignore the ability of different modalities to cooperatively explore the intrinsic semantics of the data, a new cross-media retrieval algorithm based on latent semantic topic reinforcement (LSTR) is proposed. First, a Latent Dirichlet Allocation (LDA) model is used to construct the text semantic space, and the images corresponding to the texts are represented with a Bag-of-Words (BoW) model. Second, multi-class logistic regression is applied to classify the images and texts, and the resulting multi-class posterior probabilities are used to represent the latent semantic topics of the texts and images. Finally, the text latent topics are used to regularize the image latent topics, reinforcing the latter while maximizing the semantic correlation between the two. On the Wikipedia dataset, the average precision of text-to-image and image-to-text retrieval reaches 57.0%, which is 35.1%, 34.8%, and 32.1% higher than that of Canonical Correlation Analysis (CCA), Semantic Matching (SM), and Semantic Correlation Matching (SCM), respectively. The experimental results show that LSTR effectively improves the average precision of cross-media retrieval.
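As a rough illustration of the front end described above (LDA topics for text, BoW counts for images, multi-class logistic regression posteriors as latent semantic topics), the following scikit-learn sketch runs on synthetic data; all dimensions and counts are placeholders, and the final regularization step of LSTR is not reproduced.

# Rough sketch of the LSTR front end: LDA topics for text, logistic-regression
# posteriors as latent-topic representations for both modalities.
# All data and dimensions here are synthetic placeholders.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_docs, n_classes = 300, 10
text_counts = rng.poisson(1.0, size=(n_docs, 500))    # term counts (BoW) for text
image_bow   = rng.poisson(1.0, size=(n_docs, 500))    # visual-word counts for images
labels      = rng.integers(0, n_classes, size=n_docs)

# Text semantic space via LDA.
lda = LatentDirichletAllocation(n_components=n_classes, random_state=0)
text_topics = lda.fit_transform(text_counts)

# Posterior probabilities from multi-class logistic regression act as
# latent semantic topics for each modality.
clf_text  = LogisticRegression(max_iter=1000).fit(text_topics, labels)
clf_image = LogisticRegression(max_iter=1000).fit(image_bow, labels)
text_latent  = clf_text.predict_proba(text_topics)    # (n_docs, n_classes)
image_latent = clf_image.predict_proba(image_bow)     # (n_docs, n_classes)

# Retrieval then ranks database items by similarity between the two posterior spaces.
scores = image_latent @ text_latent.T                 # image-query vs. text-database scores
print(scores.shape)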

3.
Content-based cross-media retrieval is a new type of multimedia retrieval in which the media types of the query examples and of the returned results can be different. In order to learn the semantic correlations among multimedia objects of different modalities, the heterogeneous multimedia objects are analyzed in the form of multimedia documents (MMDs), where an MMD is a set of multimedia objects that are of different media types but carry the same semantics. We first construct an MMD semi-semantic graph (MMDSSG) by jointly analyzing the heterogeneous multimedia data. After that, a cross-media indexing space (CMIS) is constructed. For each query, the optimal dimension of the CMIS is determined automatically and cross-media retrieval is performed on a per-query basis, so that the most appropriate retrieval approach is selected for each query, i.e. different search methods are used for different queries. These query-dependent search methods make cross-media retrieval not only accurate but also stable. We also propose different relevance feedback (RF) learning methods to improve the performance. The experiments are encouraging and validate the proposed methods.

4.
With the explosion of multimedia data, different types of multimedia data commonly coexist in web repositories. Accordingly, it is increasingly important to explore the underlying intricate cross-media correlations rather than single-modality distance measures so as to improve multimedia semantics understanding. Cross-media distance metric learning focuses on measuring the correlation between multimedia data of different modalities. However, content heterogeneity and the semantic gap make measuring cross-media distance very challenging. In this paper, we propose a novel cross-media distance metric learning framework based on sparse feature selection and multi-view matching. First, we employ sparse feature selection to select a subset of relevant features and remove redundant features from high-dimensional image and audio features. Second, we maximize the canonical coefficient during image-audio feature dimension reduction for cross-media correlation mining. Third, we construct a Multi-modal Semantic Graph to find the embedded manifold cross-media correlation. Finally, we fuse the canonical correlation and the manifold information into multi-view matching, which harmonizes the different correlations through an iterative process, and build a Cross-media Semantic Space for cross-media distance measurement. Experiments are conducted on an image-audio dataset for cross-media retrieval; the results are encouraging and show that our approach is effective.
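The feature selection and CCA stages of this framework can be sketched as below; the Lasso-based selection is a stand-in for the paper's sparse feature selection, and the synthetic image/audio features and relevance target are assumptions for illustration.

# Sketch of the sparse-selection + CCA stage: Lasso-based selection stands in
# for "sparse feature selection"; all data are synthetic.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 400
image_feats = rng.normal(size=(n, 128))                # high-dimensional image features
audio_feats = rng.normal(size=(n, 64))                 # high-dimensional audio features
relevance   = rng.normal(size=n)                       # proxy target for feature selection

def sparse_select(X, y, alpha=0.05):
    """Keep only features with non-zero Lasso coefficients (a simple stand-in)."""
    coef = Lasso(alpha=alpha).fit(X, y).coef_
    keep = np.flatnonzero(coef)
    return X[:, keep] if keep.size else X

img_sel = sparse_select(image_feats, relevance)
aud_sel = sparse_select(audio_feats, relevance)

# Canonical Correlation Analysis projects both modalities into a shared space.
cca = CCA(n_components=min(10, img_sel.shape[1], aud_sel.shape[1]))
img_c, aud_c = cca.fit_transform(img_sel, aud_sel)
# Cross-media distance in the canonical space (smaller = more correlated).
dist = np.linalg.norm(img_c[:, None, :] - aud_c[None, :, :], axis=-1)
print(dist.shape)                                      # (n, n)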

5.
Qi Jinwei, Huang Xin, Peng Yuxin. Multimedia Tools and Applications, 2017, 76(23): 25109-25127

As a prominent research topic in the multimedia area, cross-media retrieval aims to capture the complex correlations among multiple media types. Learning better shared representations and distance metrics for multimedia data is important for boosting cross-media retrieval. Motivated by the strong ability of deep neural networks to learn feature representations and comparison functions, we propose the Unified Network for Cross-media Similarity Metric (UNCSM), which associates cross-media shared representation learning with distance metric learning in a unified framework. First, we design a two-pathway deep network pretrained with a contrastive loss and fine-tuned with a double triplet similarity loss, which learns the shared representation for each media type by modeling relative semantic similarity. Second, a metric network is designed to effectively calculate the cross-media similarity of the shared representations by modeling pairwise similar and dissimilar constraints. Compared to existing methods, which mostly ignore the dissimilar constraints and only use a simple sample distance metric such as the Euclidean distance, our UNCSM approach unifies representation learning and distance metric learning to preserve relative similarity and embrace more complex similarity functions, further improving cross-media retrieval accuracy. The experimental results show that our UNCSM approach outperforms 8 state-of-the-art methods on 4 widely-used cross-media datasets.
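A minimal numpy sketch of the two losses named above, a contrastive loss for pretraining and a triplet loss over relative similarity, follows; the margins and embeddings are illustrative, and the UNCSM network and metric sub-network themselves are not reproduced.

# Minimal numpy sketch of the contrastive and triplet losses referred to above.
# Margins and embeddings are illustrative placeholders, not UNCSM's settings.
import numpy as np

def contrastive_loss(a, b, same, margin=1.0):
    """same=1 pulls the pair together, same=0 pushes it beyond the margin."""
    d = np.linalg.norm(a - b)
    return same * d**2 + (1 - same) * max(margin - d, 0.0)**2

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Anchor should be closer to the positive than to the negative by a margin."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(d_pos - d_neg + margin, 0.0)

rng = np.random.default_rng(0)
img, txt_pos, txt_neg = rng.normal(size=(3, 32))       # shared-space embeddings (toy)
print(contrastive_loss(img, txt_pos, same=1))
print(triplet_loss(img, txt_pos, txt_neg))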


6.
7.
代刚, 张鸿. 《计算机应用》, 2018, 38(9): 2529-2534
To mine the intrinsic correlation between feature data of different modalities that share the same semantics, a cross-media retrieval algorithm based on Semantic Correlation and Topological Relations (SCTR) is proposed. On one hand, the latent correlations among multimedia data with the same semantics are used to construct a multimedia semantic-correlation hypergraph; on the other hand, the topological relations among the multimedia data are mined to build a multimedia nearest-neighbor hypergraph. By combining the semantic correlations and topological relations, an optimal projection matrix is learned for each media type, and the feature vectors of the multimedia data are projected into a common space, enabling cross-media retrieval. On the XMedia dataset, the algorithm achieves an average precision of 51.73% over multiple cross-media retrieval tasks, which is 22.73, 15.23, 11.7, and 9.11 percentage points higher than Joint Graph Regularized Heterogeneous Metric Learning (JGRHML), Cross-Modal Correlation Propagation (CMCP), Heterogeneous Similarity measure with Nearest Neighbors (HSNN), and Joint Representation Learning (JRL), respectively. The experimental results demonstrate from multiple perspectives that the algorithm effectively improves the average precision of cross-media retrieval.

8.
Ou Weihua, Xuan Ruisheng, Gou Jianping, Zhou Quan, Cao Yongfeng. Multimedia Tools and Applications, 2020, 79(21-22): 14733-14750

Cross-modal retrieval aims to search for semantically similar instances in other modalities given a query from one modality. However, the differences in distributions and representations between modalities mean that cross-modal similarity cannot be measured directly. To address this problem, in this paper we propose a novel semantic consistent adversarial cross-modal retrieval (SC-ACMR) method, which learns semantically consistent representations for different modalities under an adversarial learning framework by considering semantic similarity both within and across modalities. Specifically, within each modality we minimize the intra-class distances. Across modalities, we require the class centers of different modalities with the same semantic label to be as close as possible, and we also minimize the distances between samples and the class centers with the same semantic label from other modalities. Furthermore, we preserve the semantic similarity of the transformed features of different modalities through a semantic similarity matrix. Comprehensive experiments on two benchmark datasets show that the proposed method learns more compact semantic representations and achieves better performance than many existing cross-modal retrieval methods.
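The intra- and inter-modality center constraints described above can be written as a small loss sketch; the equal weighting of the terms and the toy embeddings below are assumptions, not the SC-ACMR formulation.

# Sketch of the class-center terms described above: intra-class compactness
# plus alignment of same-label centers across modalities.  Toy data only.
import numpy as np

def center_alignment_loss(img_emb, txt_emb, labels):
    """Sum of intra-class scatter per modality plus cross-modal center distances."""
    loss = 0.0
    for c in np.unique(labels):
        img_c = img_emb[labels == c]
        txt_c = txt_emb[labels == c]
        mu_img, mu_txt = img_c.mean(axis=0), txt_c.mean(axis=0)
        loss += ((img_c - mu_img) ** 2).sum()          # intra-modality, intra-class scatter
        loss += ((txt_c - mu_txt) ** 2).sum()
        loss += ((mu_img - mu_txt) ** 2).sum()         # same-label centers close across modalities
        loss += ((img_c - mu_txt) ** 2).sum()          # samples close to the other modality's center
        loss += ((txt_c - mu_img) ** 2).sum()
    return loss

rng = np.random.default_rng(0)
labels = rng.integers(0, 5, size=100)
img_emb = rng.normal(size=(100, 16))
txt_emb = rng.normal(size=(100, 16))
print(center_alignment_loss(img_emb, txt_emb, labels))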


9.
Traditional deep text clustering methods cluster only on the semantic representation of the middle layer, ignoring the different text semantic representations learned at different network depths; moreover, the dense low-dimensional middle-layer representation makes clusters hard to separate. To address these problems, a deep document clustering model via multi-layer subspace semantic fusion (DCMSF) is proposed. The model first uses a deep autoencoder to extract latent semantic representations of the text at different layers; it then applies a multi-layer subspace semantic fusion strategy that nonlinearly maps the representations of different layers into different subspaces to obtain fused semantics, which are used for clustering. In addition, a joint loss function based on the self-expression loss of subspace clustering is designed to supervise the parameter updates of the model. Experimental results show that DCMSF outperforms many existing mainstream deep text clustering algorithms.
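A loose sketch of the fusion idea, mapping several layers' representations into separate subspaces, concatenating them, and clustering, follows; random nonlinear projections stand in for the learned mappings, and the autoencoder and self-expression loss are not reproduced.

# Loose sketch of multi-layer subspace fusion for clustering: per-layer
# representations are mapped to separate subspaces, concatenated, and clustered.
# Random projections stand in for the learned nonlinear mappings.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_docs = 500
# Pretend these came from three layers of a deep autoencoder.
layer_reps = [rng.normal(size=(n_docs, d)) for d in (256, 64, 16)]

def to_subspace(H, out_dim, rng):
    """Nonlinear map of one layer's representation into its own subspace (toy)."""
    W = rng.normal(size=(H.shape[1], out_dim)) / np.sqrt(H.shape[1])
    return np.tanh(H @ W)

fused = np.hstack([to_subspace(H, 10, rng) for H in layer_reps])    # fused semantics
clusters = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(fused)
print(np.bincount(clusters))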

10.
Research on cross-media correlation reasoning and retrieval   (cited 1 time: 0 self-citations, 1 by others)
To address the difficulty of measuring cross-media correlation between multimedia data of different modalities, a cross-media retrieval method based on correlation reasoning is proposed. It first analyzes and quantifies both the intra-media similarity within each modality and the cross-media correlation between modalities, and then constructs a cross-media correlation graph that expresses the similarity and correlation learning results in a unified way; cross-media retrieval is performed on the basis of shortest paths in this graph. A relevance feedback algorithm is also proposed to incorporate the prior knowledge obtained from user interaction into the correlation graph, which effectively improves cross-media retrieval performance. The method can be applied to cross-retrieval systems in which the query examples submitted by users are of different modalities.
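The retrieval step described above, ranking candidates by shortest-path distance on a cross-media correlation graph, can be sketched with networkx; the tiny hand-built graph and its edge weights are purely illustrative.

# Sketch of shortest-path retrieval over a cross-media correlation graph.
# The tiny graph and its edge weights are purely illustrative.
import networkx as nx

G = nx.Graph()
# Edge weight ~ dissimilarity: small weight means strong intra- or cross-media correlation.
G.add_edge("img_1", "img_2", weight=0.2)      # intra-media similarity
G.add_edge("img_2", "txt_7", weight=0.5)      # cross-media co-occurrence link
G.add_edge("txt_7", "txt_3", weight=0.3)
G.add_edge("img_1", "aud_4", weight=0.9)

query = "img_1"
dists = nx.single_source_dijkstra_path_length(G, query, weight="weight")
# Return text objects ranked by path length (closer = more relevant).
ranked_text = sorted((d, n) for n, d in dists.items() if n.startswith("txt_"))
print(ranked_text)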

11.
Although multimedia objects such as images, audio, and texts are of different modalities, there are a great number of semantic correlations among them. In this paper, we propose a transductive learning method to mine the semantic correlations among media objects of different modalities in order to achieve cross-media retrieval. Cross-media retrieval is a new kind of search technology in which the query examples and the returned results can be of different modalities, e.g., querying images with an audio example. First, according to the features of the media objects and their co-existence information, we construct a uniform cross-media correlation graph in which media objects of different modalities are represented uniformly. To perform cross-media retrieval, a positive score is assigned to the query example; the score spreads along the graph, and the media objects of the target modality, or the MMDs, with the highest scores are returned. To boost the retrieval performance, we also propose long-term and short-term relevance feedback approaches to mine the information contained in the positive and negative examples.
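The score-spreading step can be sketched as a simple random-walk-style propagation over the graph's normalized affinity matrix; the toy affinities, damping factor, and stopping tolerance below are assumptions rather than the paper's exact formulation.

# Sketch of score propagation on a uniform cross-media correlation graph:
# the query gets a positive score, which spreads along normalized edges.
# Affinity matrix, damping factor, and tolerance are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 50
W = rng.random((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)   # toy symmetric affinities
S = W / W.sum(axis=1, keepdims=True)                                # row-normalized transition matrix

query_idx, alpha = 0, 0.85
y = np.zeros(n); y[query_idx] = 1.0                                 # positive score on the query
f = y.copy()
for _ in range(100):                                                # iterate f = alpha*S^T f + (1-alpha)*y
    f_new = alpha * S.T @ f + (1 - alpha) * y
    if np.abs(f_new - f).max() < 1e-8:
        break
    f = f_new

top = np.argsort(-f)[:10]                                           # highest-scoring media objects / MMDs
print(top)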

12.
Objective: Cross-media retrieval aims to retrieve relevant data of other media types from a query of any media type, enabling semantic interoperability and cross retrieval between images, text, and other media. However, the "heterogeneity gap" makes the feature representations of different media inconsistent and semantic correlation hard to establish, posing a major challenge for cross-media retrieval. Different media data describing the same semantics are semantically consistent and contain rich fine-grained information, which provides an important basis for cross-media correlation learning. Existing methods only consider pairwise correlations between data of different media and ignore the contextual information among the fine-grained local parts within each datum, so they cannot fully exploit cross-media correlation. To address these problems, a cross-media retrieval method based on a hierarchical recurrent attention network is proposed. Method: An intra-media/inter-media two-level recurrent neural network is first proposed, in which the lower-level networks model the fine-grained contextual information within each medium and the top-level network mines the contextual correlations between media through parameter sharing. An attention-based cross-media joint loss function is then proposed, which learns joint inter-media attention to mine more precise fine-grained cross-media correlations and uses semantic category information to strengthen semantic discrimination during correlation learning, thus improving cross-media retrieval accuracy. Results: On two widely used cross-media datasets, the method is compared with ten existing methods, using mean average precision (MAP) as the evaluation metric. The proposed method achieves MAP scores of 0.469 and 0.575 on the two datasets, surpassing all compared methods. Conclusion: By mining the fine-grained information of images and text, the proposed hierarchical recurrent attention network fully learns the precise cross-media correlations between images and text and effectively improves the accuracy of cross-media retrieval.

13.
Multimedia semantic mining and cross-media retrieval based on comprehensive reasoning   (cited 6 times: 0 self-citations, 6 by others)
Accurate cross-media retrieval requires mining and learning the semantic correlations among multimedia objects of different types. To this end, a multimedia semantic mining and cross-media retrieval technique based on a comprehensive reasoning model is proposed. First, reasoning sources are constructed from the low-level features of the multimedia objects, and influence-source fields are constructed from their co-occurrence relationships to perform comprehensive reasoning and build a multimedia semantic space. Then, for each query example, a different retrieval method is selected adaptively according to pseudo relevance feedback. A two-stage learning method is proposed to handle query examples that are not in the training set, and a log-based long-term feedback learning algorithm is proposed to improve system performance. Experimental results show that the technique mines multimedia semantics accurately and that both multimedia document retrieval and cross-media retrieval are accurate and stable.

14.
With the rapid development of the Internet and multimedia technology, web data have expanded from plain text to multiple media types including images, video, text, audio, and 3D models, making cross-media retrieval a new trend in information retrieval. However, the "heterogeneity gap" makes the representations of different media inconsistent and direct similarity measurement difficult, so cross retrieval among multiple media faces a major challenge. With the rise of deep learning, the nonlinear modeling ability of deep neural networks promises to break through the barrier of cross-media representation, but existing deep-learning-based cross-media retrieval methods generally consider only the pairwise correlation between images and text and cannot support cross retrieval over more media types. To address these problems, a cross-media deep fine-grained correlation learning method is proposed that supports cross retrieval over up to five media types (image, video, text, audio, and 3D model). First, a cross-media recurrent neural network is proposed that jointly models the fine-grained information of up to five media types, fully mining the detailed information within each medium and the contextual correlations among them. Then, a cross-media joint correlation loss function is proposed that combines distribution alignment and semantic alignment to mine intra-media and inter-media fine-grained correlations more accurately, while using semantic category information to strengthen semantic discrimination during correlation learning and improve cross-media retrieval accuracy. Experiments compare the proposed method with existing methods on two cross-media datasets containing five media types, PKU XMedia and PKU XMediaNet, and the results demonstrate its effectiveness.

15.
This work studies topic analysis techniques for cross-media data in the food safety domain, fusing the semantics of data in multiple media forms to accurately express the topics of cross-media documents. With the surge of multimedia data about food safety incidents, single-media topic analysis cannot fully reflect the topic distribution of the whole dataset, and suffers from missing semantics, inconsistent topic spaces, and difficulty in semantic fusion. A cross-media topic analysis method is proposed: probabilistic generative models are first used to analyze the semantics of text and image data separately; the semantic correlations between cross-media data are then used for visual topic learning to build a visual topic model and establish a mapping between visual data and text topics. Simulation results show that the method effectively obtains text topics that are semantically related to the images, that its topic tracking accuracy is better than that of text-only topic tracking, and that it can provide a basis for monitoring food safety incidents.

16.
Multimodal representation learning has gained increasing importance in various real-world multimedia applications. Most previous approaches explore inter-modal correlation by learning a common or intermediate space in a conventional way, e.g. with Canonical Correlation Analysis (CCA), and neglect the fusion of multiple modalities at a higher semantic level. In this paper, inspired by the success of deep networks in multimedia computing, we propose a novel unified deep neural framework for multimodal representation learning. To capture the high-level semantic correlations across modalities, we adopt deep learning features as the image representation and topic features as the text representation. For joint model learning, a 5-layer neural network is designed, with supervised pre-training of the first 3 layers for intra-modal regularization. Extensive experiments on the benchmark Wikipedia and MIR Flickr 25K datasets show that our approach achieves state-of-the-art results compared to both shallow and deep models in multimodal and cross-modal retrieval.

17.
Learning element similarity matrix for semi-structured document analysis   (cited 3 times: 3 self-citations, 0 by others)
Capturing latent structural and semantic properties in semi-structured documents (e.g., XML documents) is crucial for improving the performance of related document analysis tasks. The Structured Link Vector Model (SLVM) is a recently proposed representation for modeling semi-structured documents. It uses an element similarity matrix to capture the latent relationships between XML elements, the constituent components of an XML document. In this paper, instead of applying heuristics to define the element similarity matrix, we propose to compute the matrix using a machine learning approach. In addition, we incorporate term semantics into SLVM using latent semantic indexing to enhance model accuracy, with the learnability of the element similarity matrix preserved. For performance evaluation, we applied the similarity learning to k-nearest neighbor search and similarity-based clustering, and tested the performance on two different XML document collections. The SLVM obtained via learning significantly outperformed the conventional Vector Space Model and edit-distance-based methods. Moreover, the similarity matrix, obtained as a by-product, provides higher-level knowledge of the semantic relationships between the XML elements.
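The role of the element similarity matrix can be illustrated with a tiny bilinear-similarity example in the spirit of SLVM; the matrix values and per-element term vectors below are made up for illustration.

# Tiny illustration of an element-similarity-weighted document similarity,
# in the spirit of SLVM: sim(d1, d2) = sum_{i,j} M[i, j] * <d1[i], d2[j]>.
# The element similarity matrix M and the vectors are made-up toy values.
import numpy as np

n_elements, n_terms = 3, 8                       # e.g. XML elements: <title>, <abstract>, <body>
rng = np.random.default_rng(0)
d1 = rng.random((n_elements, n_terms))           # per-element term vectors of document 1
d2 = rng.random((n_elements, n_terms))           # per-element term vectors of document 2

# Learned element similarity matrix: diagonal = same element, off-diagonal =
# latent relatedness between different elements (toy values).
M = np.array([[1.0, 0.4, 0.1],
              [0.4, 1.0, 0.3],
              [0.1, 0.3, 1.0]])

pairwise = d1 @ d2.T                             # inner products between element vectors
similarity = float((M * pairwise).sum())
print(similarity)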

18.
Cross-domain word representation aims to learn high-quality semantic representations in an under-resourced domain by leveraging information from a resource-rich domain. However, most existing methods mainly transfer the semantics of common words across domains, ignoring the semantic relations among domain-specific words. In this paper, we propose a domain structure-based transfer learning method that learns cross-domain representations by leveraging the relations among domain-specific words. To accomplish this, we first construct a semantic graph to capture the latent domain structure using domain-specific co-occurrence information. Then, in the domain adaptation process, beyond domain alignment, we employ Laplacian Eigenmaps to ensure that the domain structure is consistently distributed in the learned embedding space. As such, the learned cross-domain word representations not only capture shared semantics across domains but also maintain the latent domain structure. We performed extensive experiments on two tasks, sentiment analysis and query expansion, and the results show the effectiveness of our method for tasks in under-resourced domains.
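The Laplacian Eigenmaps step mentioned above can be sketched with scikit-learn's SpectralEmbedding on a co-occurrence-based affinity matrix; the synthetic co-occurrence counts and the embedding size are assumptions for illustration.

# Sketch of the Laplacian Eigenmaps step: embed domain-specific words so that
# the co-occurrence-based domain structure is preserved.  Counts are synthetic.
import numpy as np
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(0)
n_words = 120
cooc = rng.poisson(2.0, size=(n_words, n_words)).astype(float)
cooc = (cooc + cooc.T) / 2                       # symmetric co-occurrence affinities
np.fill_diagonal(cooc, 0)

# SpectralEmbedding with a precomputed affinity solves the Laplacian Eigenmaps problem.
embed = SpectralEmbedding(n_components=16, affinity="precomputed", random_state=0)
word_vectors = embed.fit_transform(cooc)         # structure-preserving embedding
print(word_vectors.shape)                        # (120, 16)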

19.
20.
肖琳, 陈博理, 黄鑫, 刘华锋, 景丽萍, 于剑. 《软件学报》, 2020, 31(4): 1079-1089
Since the rise of big data, multi-label classification has been an important and widely studied problem with many practical applications, such as text classification, image recognition, video annotation, and multimedia information retrieval. Traditional multi-label text classification algorithms treat labels as symbols without semantic information; however, in many cases the labels of a text carry specific semantics, and the semantic information of the labels corresponds to the content of the documents. To establish and exploit this connection, a LAbel Semantic Attention multi-label classification (LASA) method is proposed, which relies on the text of the documents and the corresponding labels and shares word representations between documents and labels. For document embedding, a bi-directional long short-term memory (Bi-LSTM) network is used to obtain the hidden representation of each word, and a label semantic attention mechanism is used to obtain the weight of each word in the document, thereby accounting for the importance of each word to the current label. In addition, since labels are often correlated in the semantic space, the semantic information of the labels is also used to model label correlation. Experimental results on standard multi-label text classification datasets show that the proposed method effectively captures important words and outperforms state-of-the-art multi-label text classification algorithms.
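The label semantic attention itself, weighting each word by its similarity to a label embedding, can be sketched in a few lines of numpy; the hidden states and label vector below are random placeholders rather than Bi-LSTM outputs.

# Sketch of label semantic attention: word weights come from the similarity
# between word hidden states and a label embedding, followed by a softmax.
# Hidden states and label vectors are random placeholders, not Bi-LSTM output.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
seq_len, hidden = 20, 64
H = rng.normal(size=(seq_len, hidden))           # per-word hidden states (stand-in for Bi-LSTM)
label_emb = rng.normal(size=hidden)              # embedding of one label, sharing the word space

attn = softmax(H @ label_emb)                    # importance of each word for this label
doc_repr = attn @ H                              # label-specific document representation
score = float(doc_repr @ label_emb)              # relevance score for this label
print(attn.round(3), score)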


