Found 20 similar documents; search took 21 ms.
1.
Web information retrieval is widely used in today's Internet society, but its accuracy leaves much to be desired, largely because the conceptual links between query keywords are severed. Starting from the needs of users in a restricted domain, this work uses a search engine as the access interface to web corpus resources and combines rule-based and statistical methods to generate a semantic concept graph for the query. The graph can serve as the result of requirements analysis, guiding the subsequent semantic retrieval process and improving the relevance between user queries and returned results. Experimental results show that the generation method is effective and feasible, and offers a useful exploration of concept-graph-based semantic retrieval.
2.
A standard expectation of search engines is that different users issuing the same query receive the same results. To address accuracy, personalized search engines were proposed, which return different results according to each user's characteristics. However, existing methods emphasize users' long-term memory and isolated log files, which reduces the effectiveness of personalization. Personalized search methods that capture a short-term memory model, providing accurate and timely user preferences, are therefore widely adopted. First, a short-term memory model is generated from concepts related to the query keywords; next, a personalized user model is built from the user's time-ordered valid click data; finally, a forgetting factor is introduced within user sessions to refine the model. Experimental results show that the proposed method expresses users' information needs well and builds fairly accurate personalized user models.
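As a rough illustration of the forgetting-factor idea described above, a click's contribution to the user model can decay geometrically with its age. The decay constant, data format, and function names here are illustrative assumptions, not the authors' actual formulation:

```python
def preference_scores(clicks, now, decay=0.9):
    """Aggregate clicked-concept weights with an exponential forgetting
    factor: older clicks contribute less to the user model.

    clicks: list of (concept, timestamp) pairs; `decay` in (0, 1)."""
    scores = {}
    for concept, t in clicks:
        # weight decays geometrically with the age of the click
        scores[concept] = scores.get(concept, 0.0) + decay ** (now - t)
    return scores

clicks = [("python", 1), ("python", 9), ("java", 2)]
s = preference_scores(clicks, now=10)
# the recent "python" click outweighs the older "java" click
assert s["python"] > s["java"]
```

With this weighting, a burst of recent clicks quickly dominates stale interests, which is the session-level behavior the abstract attributes to the forgetting factor.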
3.
User profiles play an important role in information retrieval systems. In this paper, we propose a novel method for acquiring ontology-based user profiles. In this method, ontology-based user profiles maintain representations of personal interests, and user ontologies can be constructed automatically. The method makes user profiles highly expressive while requiring little manual intervention.
4.
Text segmentation plays an important role in information retrieval, summarization, question answering, and information extraction. Building on a survey of existing segmentation methods at home and abroad, this paper proposes a linear text segmentation method based on a domain ontology. The method automatically acquires a structured set of semantic concepts from initial concepts; assigns semantic labels to paragraphs according to the frequency, position, and relations of the acquired concepts, attributes, and attribute words in the text; mines the text's subtopic information; and groups paragraphs carrying the same semantic labels into the same semantic segment, thereby separating the text's subtopics. Experiments show precision, recall, and F-measure of 85%, 90%, and 88% respectively on domain-specific texts; the segmentation quality meets practical needs and outperforms existing segmentation methods that require no training corpus.
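The labeling-then-grouping step can be sketched as follows, assuming each paragraph has already been assigned a set of semantic labels; the Jaccard threshold and all names are illustrative, not the paper's actual procedure:

```python
def segment(paragraph_labels, min_overlap=0.3):
    """Linear text segmentation: consecutive paragraphs whose semantic-label
    sets overlap sufficiently stay in the same segment; a drop in overlap
    starts a new one. Returns a list of segments (lists of paragraph indices)."""
    segments = [[0]]
    for i in range(1, len(paragraph_labels)):
        a, b = paragraph_labels[i - 1], paragraph_labels[i]
        # Jaccard similarity between adjacent paragraphs' label sets
        jaccard = len(a & b) / len(a | b) if a | b else 1.0
        if jaccard >= min_overlap:
            segments[-1].append(i)
        else:
            segments.append([i])
    return segments

labels = [{"ontology", "class"}, {"ontology", "axiom"},
          {"retrieval", "index"}, {"retrieval", "query"}]
assert segment(labels) == [[0, 1], [2, 3]]
```

This captures only the boundary-detection idea; the paper additionally weights labels by term frequency, position, and relations in the ontology.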
5.
Grouping video content into semantic segments and classifying semantic scenes into different types are crucial processes for content-based video organization, management, and retrieval. In this paper, a novel approach to automatically segmenting and semantically representing scenes is proposed. First, video shots are detected using a rough-to-fine algorithm. Second, key-frames within each shot are selected adaptively using hybrid features, and redundant key-frames are removed by template matching. Third, spatio-temporally coherent shots are clustered into the same scene based on the temporal constraints of video content and the visual similarity between shot activities. Finally, drawing on an analysis of the typical characteristics of continuously recorded videos, scene content is represented semantically to meet human demands on video retrieval. The proposed algorithm has been evaluated on various genres of films and TV programs. Promising experimental results show that the proposed method supports efficient retrieval of interesting video content.
Yuncai Liu
6.
The point cloud is a common 3D representation widely applied in CAX engineering thanks to its simple data layout and rich semantic information. However, its discrete and unordered structure makes the semantic information hard for machines to understand and unsuited to standard operators. In this paper, to enhance machine perception of 3D semantic information, we propose a novel approach that can not only process point cloud data directly through a novel convolution-like operator but also dynamically attend to local semantic information. First, we design a dynamic local self-attention mechanism that can dynamically and flexibly focus on the top-level information of the receptive field to learn subtle features. Second, we propose a dynamic self-attention learning block, which adopts the proposed dynamic local self-attention convolution operation to deal directly with disordered and irregular point clouds, learning global and local point features while dynamically attending to important local semantic information. Third, the proposed operation can be applied as an independent component in popular architectures to improve their perception of local semantic information. Numerous experiments demonstrate the advantage of our method on point cloud tasks, on datasets of both CAD data and scans of complex real-world scenes.
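A minimal, pure-Python sketch of the general mechanism this abstract builds on: self-attention over each point's k-nearest-neighbor neighborhood. The paper's actual operator is dynamic and learned; everything here (dot-product weights, fixed k, feature layout) is an illustrative assumption:

```python
import math

def knn(points, i, k):
    """Indices of the k nearest neighbours of point i (excluding i itself)."""
    order = sorted(range(len(points)),
                   key=lambda j: sum((a - b) ** 2
                                     for a, b in zip(points[i], points[j])))
    return [j for j in order if j != i][:k]

def local_self_attention(points, feats, k=2):
    """For each point, attend over its k-NN neighbourhood: attention weights
    are a softmax of feature dot-products, and the output is the weighted
    sum of neighbour features."""
    out = []
    for i in range(len(points)):
        nbrs = knn(points, i, k)
        logits = [sum(a * b for a, b in zip(feats[i], feats[j])) for j in nbrs]
        m = max(logits)  # subtract the max for numerical stability
        weights = [math.exp(l - m) for l in logits]
        total = sum(weights)
        weights = [w / total for w in weights]
        out.append([sum(w * feats[j][d] for w, j in zip(weights, nbrs))
                    for d in range(len(feats[0]))])
    return out

points = [(0, 0), (1, 0), (0, 1), (5, 5)]
feats = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
out = local_self_attention(points, feats, k=2)
# each output row is a convex combination of unit-sum neighbour features
assert all(abs(sum(row) - 1.0) < 1e-9 for row in out)
```

Restricting attention to a spatial neighborhood is what makes the operator convolution-like: it is permutation-invariant within the neighborhood, so the unordered point set needs no canonical ordering.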
7.
This paper presents a mobile-phone voice control system based on the J2ME platform. Combining speech recognition and natural language processing, the system processes the user's speech input, extracts the semantic information, and displays it on the handset. It uses a client/server architecture: the client is the mobile handset and the server is a PC. The client collects the speech input stream, sends it to the server, and displays the semantic information the server returns; the server receives the speech stream from the client, performs speech recognition and natural language processing, and sends the resulting semantic information back to the client. The system can handle many natural-language phrasings of the same phone control command, which greatly improves usability for mobile users.
8.
Although many advances have been made in semantic portal research, links to relevant pages are often not shown, and the same content and links are presented to users with different backgrounds and interests. This paper introduces a novel semantic portal, SEMPort, which supports users' browsing through personalization and enriched semantic hyperlinks. Our portal makes a novel contribution by integrating adaptive hypermedia methods and enriched semantic hyperlinks into semantic portal technologies to provide better navigation. SEMPort supports different kinds of personalization, such as adaptive link sorting and adaptive link annotation based on individual users' interests. Enriched semantic links are also supplied to guide users to relevant pages. In addition, easy-to-use, real-time content maintenance mechanisms are provided, which is important for the evolution of the content. Evaluations of the portal include performance tests, interface usability assessed with Nielsen's heuristics, and empirical user studies. This paper also provides an overview of and comparison to the state of the art, and outlines future directions for semantic portals.
9.
Practical approaches for managing and supporting the life-cycle of semantic content on the Web of Data have recently made considerable progress. In particular, in the area of user-friendly manual and semi-automatic creation of rich semantic content, a large number of approaches and systems have recently been described in the literature. With this survey we aim to provide an overview of the rapidly emerging field of Semantic Content Authoring (SCA). We conducted a systematic literature review comprising a thorough analysis of 31 primary studies, out of 175 initially retrieved papers, addressing the semantic authoring of textual content. We obtained a comprehensive set of quality attributes for SCA systems, together with corresponding user-interface features suggested for their realization. The quality attributes include usability, automation, generalizability, collaboration, customizability, and evolvability. The primary studies were surveyed in the light of these quality attributes, and we performed a thorough analysis of four SCA systems. The proposed quality attributes and UI features facilitate the evaluation of existing approaches and the development of novel, more effective and intuitive semantic authoring interfaces.
10.
11.
The rapid growth of the Linked Open Data cloud, as well as the increasing ability to lift relational enterprise datasets to a semantic, ontology-based level, means that vast amounts of information are now available in a representation that closely matches the conceptualizations of that information's potential users. This makes it interesting to create ontology-based, user-oriented tools for searching and exploring the data. Although initial efforts targeted technical users with knowledge of SPARQL/RDF, there are ongoing proposals designed for lay users. One of the most promising approaches is visual query interfaces, but more user studies are needed to assess their effectiveness. In this paper, we compare the usability of two important paradigms for ontology-based query interfaces: form-based and graph-based. To reduce the number of variables affecting the comparison, we performed a user study with two state-of-the-art query tools developed by ourselves and sharing a large part of their code base: the graph-based tool OptiqueVQS* and the form-based tool PepeSearch. We evaluated these tools in a formal comparison study with 15 participants searching a Linked Open Data version of the Norwegian Company Registry. Participants had to complete 6 non-trivial search tasks, alternating between OptiqueVQS* and PepeSearch. Even without previous training, retrieval performance and user confidence were very high, suggesting that both interface designs are effective for searching RDF datasets. Expert searchers clearly preferred the graph-based interface, while mainstream searchers obtained better performance and confidence with the form-based interface. Although several participants spontaneously praised the graph interface's capability for composing complex queries, our results show that graph interfaces are difficult to grasp, whereas form interfaces are more learnable and reduce disorientation for mainstream users. We have also observed positive results from introducing faceted search and dynamic term suggestion in semantic search interfaces.
12.
Magpie has been one of the first truly effective approaches to bringing semantics into the web browsing experience. The key innovation brought by Magpie was the replacement of a manual annotation process by an automatically associated ontology-based semantic layer over web resources, which ensured added value at no cost for the user. Magpie also differs from older open hypermedia systems: its associations between entities in a web page and semantic concepts from an ontology enable link typing and subsequent interpretation of the resource. The semantic layer in Magpie also facilitates locating semantic services and making them available to the user, so that they can be manually activated by a user or opportunistically triggered when appropriate patterns are encountered during browsing. In this paper we track the evolution of Magpie as a technology for developing open and flexible Semantic Web applications. Magpie emerged from our research into user-accessible Semantic Web, and we use this viewpoint to assess the role of tools like Magpie in making semantic content useful for ordinary users. We see such tools as crucial in bootstrapping the Semantic Web through the automation of the knowledge generation process.
13.
Research on an intelligent search engine based on semantic understanding. Cited by: 1 (self-citations: 0; citations by others: 1)
This paper proposes a search engine model based on natural language understanding. Its core techniques perform semantic analysis of user queries at three levels (keywords, question form, and question focus) and extract feature vectors; on this basis, a feature base oriented to Web page content is built, a ranking algorithm for returned documents is proposed, and a search engine is implemented on the Lucene full-text indexing toolkit. Query tests on feature terms already included in the base yield a precision of 86.7%. Experiments show that the model largely achieves understanding of query phrases and markedly improves search precision.
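The document-ranking step can be illustrated with a toy scorer that orders documents by the frequency-weighted overlap between the query's semantic feature terms and each document's terms. This is a hedged sketch under assumed data structures, not the paper's actual Lucene-based algorithm:

```python
def rank_documents(query_features, docs):
    """Score each document by the summed frequency of the query's semantic
    feature terms it contains, then return the documents best-first."""
    def score(doc):
        return sum(doc["terms"].get(f, 0) for f in query_features)
    return sorted(docs, key=score, reverse=True)

docs = [{"id": 1, "terms": {"semantic": 3, "web": 1}},
        {"id": 2, "terms": {"search": 2}}]
ranked = rank_documents({"semantic", "search"}, docs)
# doc 1 matches "semantic" three times and therefore ranks first
assert ranked[0]["id"] == 1
```

A real implementation would replace the raw frequency sum with Lucene's similarity scoring and would weight the keyword, question-form, and question-focus levels differently.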
14.
Portrait matting is at the core of many portrait image processing methods, and the accuracy of the portrait trimap directly affects matting quality. This paper proposes a method that estimates face scale from keypoints and uses that scale to control a trimap generation network that produces standardized portrait trimaps, thereby improving matting results. A dataset of 19,118 portrait images is also built for training and testing the model, and a multi-stage fine-tuning scheme is proposed to reduce training difficulty and obtain better results…
15.
Deep-learning-based segmentation methods have shown great success across many medical image applications. However, the usual training paradigms suffer from a well-known constraint: the requirement of pixel-wise annotations, which are labor-intensive to produce, especially when new classes must be learned incrementally. Contemporary incremental learning focuses on catastrophic forgetting in image classification and object detection; this work instead aims to improve the current model's ability to learn new classes with the help of the previous model, in the context of incremental instance segmentation. This benefits the current model enormously when labeled data is limited, given the high labor intensity of manual labeling. In this paper, for the Diabetic Retinopathy (DR) lesion segmentation problem, a novel incremental segmentation paradigm is proposed that distills the knowledge of the previous model to improve the current one. Notably, we propose several approaches to the class-based alignment of the probability maps of the current and previous models, accounting for the difference between the two models' background classes. The experimental evaluation on DR lesion segmentation shows the effectiveness of the proposed approaches.
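The class-based alignment of probability maps can be sketched as a per-pixel KL-style distillation term in which the current model's new-class probabilities are folded into the previous model's background class. The dict-based representation, the `"bg"` label, and the function names are illustrative assumptions, not the paper's exact loss:

```python
import math

def distill_loss(prev_probs, curr_probs, old_classes):
    """Per-pixel distillation: align the current model's probabilities on the
    old classes with the previous model's output. The current model's
    new-class mass is folded into its background term, because the previous
    model labeled those pixels as background."""
    loss = 0.0
    for p_prev, p_curr in zip(prev_probs, curr_probs):
        # the previous model's background covers current background + new classes
        q = {c: p_curr[c] for c in old_classes if c != "bg"}
        q["bg"] = sum(v for c, v in p_curr.items()
                      if c == "bg" or c not in old_classes)
        for c, p in p_prev.items():
            if p > 0:
                loss += p * math.log(p / max(q[c], 1e-12))
    return loss / len(prev_probs)

# a new class whose mass was carved out of the background gives zero loss
prev = [{"bg": 0.6, "lesion": 0.4}]
curr = [{"bg": 0.3, "new_lesion": 0.3, "lesion": 0.4}]
assert abs(distill_loss(prev, curr, {"bg", "lesion"})) < 1e-9
```

The folding step is the crux: without it, every pixel the current model assigns to a new class would be penalized as disagreement with the previous model's background prediction.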
16.
This paper proposes a multi-object segmentation method for image scenes based on object recognition and saliency detection. The steps are: train detectors for semantic objects on an image training set and use them to locate objects in the input image and mark their bounding boxes; over-segment the input image into a set of superpixels and compute regions of interest from the bounding-box positions and the superpixels' semantic probabilities; run saliency detection at three dense scales to obtain the input image's saliency map; within each region of interest, compute superpixel adjacency to form an adjacency matrix and build a conditional random field (CRF) model, casting multi-object segmentation as a multi-class labeling problem with one class per object; treat each superpixel as a node of the field model, map superpixel adjacency to the connections between nodes, and convert saliency and image features into node and edge weights; and optimize the CRF with a graph-cut algorithm, obtaining per-pixel object labels when iteration terminates and thus segmenting the multiple objects. Experimental results show that the method performs well.
17.
In spite of significant improvements in video data retrieval, no system has yet been developed that can adequately respond to a user's query. Typically, the user has to refine the query many times and view the results until the expected videos are eventually retrieved from the database. The complexity of video data and questionable query structuring by the user aggravate the retrieval process. Most previous research in this area has focused on retrieval based on low-level features. Managing imprecise queries using semantic (high-level) content is no easier than querying on low-level features, due to the absence of a proper continuous distance function. We provide a method to help users search for clips and videos of interest in video databases. Video clips are classified as interesting or uninteresting based on user browsing. The attribute values of clips are classified by commonality, presence, and frequency within each of the two groups, and then used to compute the relevance of each clip to the user's query. In this paper, we present an intelligent query structuring system, called I-Quest, that ranks clips based on user browsing feedback in situations where generating a template from the interesting and uninteresting sets is impossible or yields poor results.
Ramazan Savaş Aygün (corresponding author)
18.
For P2P semantic overlay networks, maintaining semantic information and selecting intelligent routes are the hard parts of an implementation. Based on the small-world principle, this paper proposes a new P2P semantic routing model built on node classification and partitioning. A node ontology is constructed to describe each node's network structure and the content items it holds; on this basis, a routing message format and a node classification and partitioning method are defined, followed by an intra-node content-relevance query algorithm supporting semantic content queries and an inter-node message routing algorithm. Comparative experiments show that the semantic routing model speeds up content lookup in P2P systems and significantly reduces the network bandwidth consumed.
19.
20.
In this paper we address the problem of providing an order of relevance, or ranking, among entities' properties used in RDF datasets, Linked Data, and SPARQL endpoints. We first motivate the importance of ranking RDF properties by presenting two killer applications for the problem, namely property tagging and entity visualization. Motivated by the desiderata of these applications, we propose to apply Machine Learning to Rank (MLR) techniques to the problem of ranking RDF properties. Our solution is based on a deep empirical study of all the dimensions involved: feature selection, the MLR algorithm, and model training. The major advantages of our approach are the following: (a) flexibility/personalization, as the properties' relevance can be user-specified by personalizing the training set in a supervised approach, or set by a novel automatic classification approach based on SWiPE; (b) speed, since it can be applied without computing frequencies over the whole dataset, leveraging existing fast MLR algorithms; (c) effectiveness, as it can be applied even when no ontology data is available, by using novel dataset-independent features; and (d) precision, which is high both in terms of F-measure and Spearman's rho. Experimental results show that the proposed MLR framework outperforms the two existing approaches in the literature related to RDF property ranking.