Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
陈海燕  徐峥  张辉 《计算机科学》2016,43(2):277-282
A conventional search engine returns the same results to different users who issue the same query. To address this accuracy problem, personalized search engines have been proposed; they deliver different results according to each user's individual characteristics. However, existing methods focus on users' long-term memory and on isolated user log files, which reduces the effectiveness of personalized search. Personalized search methods that capture a short-term memory model of the user to provide accurate and effective user preferences are widely adopted. In this work, a short-term memory model is first generated from concepts related to the query keywords; a user personalization model is then built from the user's temporally ordered, valid click data; finally, a forgetting factor is introduced into the user session to refine the personalization model. Experimental results show that the proposed method expresses users' information needs well and builds fairly accurate user personalization models.
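The forgetting factor lends itself to a simple illustration. The following sketch (not the authors' code; the exponential-decay form, the half-life value, and all names are assumptions) shows one way older clicks could be discounted when building a short-term, session-level profile:

```python
import time
from collections import defaultdict

def build_session_profile(clicks, half_life=300.0, now=None):
    """Build a term-weighted short-term profile from timestamped clicks.

    clicks: list of (timestamp_seconds, [terms]) tuples.
    half_life: seconds after which a click's contribution halves
               (the 'forgetting factor'; the value is an assumption).
    """
    now = now if now is not None else time.time()
    profile = defaultdict(float)
    for ts, terms in clicks:
        age = max(0.0, now - ts)
        weight = 0.5 ** (age / half_life)   # exponential forgetting
        for term in terms:
            profile[term] += weight
    total = sum(profile.values()) or 1.0    # normalize for comparability
    return {t: w / total for t, w in profile.items()}

# Recent clicks dominate the short-term profile.
clicks = [(time.time() - 600, ["laptop", "review"]),
          (time.time() - 30, ["laptop", "battery", "life"])]
print(build_session_profile(clicks))
```

With this weighting, recently clicked terms dominate the session profile, which matches the short-term-memory intuition of the abstract.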

2.
The exponential growth of information on the Web has introduced new challenges for building effective search engines. A major problem of web search is that search queries are usually short and ambiguous, and thus are insufficient for specifying the precise user needs. To alleviate this problem, some search engines suggest terms that are semantically related to the submitted queries so that users can choose from the suggestions the ones that reflect their information needs. In this paper, we introduce an effective approach that captures the user's conceptual preferences in order to provide personalized query suggestions. We achieve this goal with two new strategies. First, we develop online techniques that extract concepts from the web snippets of the search results returned for a query and use the concepts to identify related queries for that query. Second, we propose a new two-phase personalized agglomerative clustering algorithm that is able to generate personalized query clusters. To the best of the authors' knowledge, no previous work has addressed personalization for query suggestions. To evaluate the effectiveness of our technique, a Google middleware was developed for collecting clickthrough data to conduct the experimental evaluation. Experimental results show that our approach has better precision and recall than the existing query clustering methods.
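As a rough illustration of the snippet-based concept extraction step (a generic frequency-based stand-in, not the paper's online technique; the thresholds and stopword list are assumptions):

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "for", "on", "is"}

def extract_concepts(snippets, min_support=2, max_concepts=10):
    """Extract candidate concept terms from result snippets by counting how many
    snippets each term appears in (document frequency); terms below min_support
    are discarded. Thresholds here are illustrative assumptions."""
    counts = Counter()
    for snippet in snippets:
        terms = {t for t in re.findall(r"[a-z]+", snippet.lower())
                 if t not in STOPWORDS and len(t) > 2}
        counts.update(terms)          # one count per snippet, not per occurrence
    return [t for t, c in counts.most_common(max_concepts) if c >= min_support]

snippets = ["Apple unveils new iPhone with better camera",
            "iPhone camera review and sample photos",
            "How to transfer photos from iPhone to PC"]
print(extract_concepts(snippets))     # e.g. ['iphone', 'camera', 'photos']
```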

3.
Thousands of users issue keyword queries to Web search engines to find information on a number of topics. Since the users may have diverse backgrounds and different expectations for a given query, some search engines try to personalize their results to better match the overall interests of an individual user. This task involves two great challenges. First, the search engines need to be able to effectively identify the user interests and build a profile for every individual user. Second, once such a profile is available, the search engines need to rank the results in a way that matches the interests of a given user. In this article, we present our work towards a personalized Web search engine and we discuss how we addressed each of these challenges. Since users are typically not willing to provide information on their personal preferences, for the first challenge, we attempt to determine such preferences by examining the click history of each user. In particular, we leverage a topical ontology for estimating a user's topic preferences based on her past searches, i.e. previously issued queries and pages visited for those queries. We then explore the semantic similarity between the user's current query and the query-matching pages, in order to identify the user's current topic preference. For the second challenge, we have developed a ranking function that uses the learned past and current topic preferences in order to rank the search results to better match the preferences of a given user. Our experimental evaluation on the Google query-stream of human subjects over a period of one month shows that user preferences can be learned accurately through the use of our topical ontology and that our ranking function, which takes into account the learned user preferences, yields significant improvements in the quality of the search results.
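A minimal sketch of preference-aware re-ranking in the spirit of this work (the blending formula, the alpha weight, and the data layout are assumptions, not the paper's ranking function):

```python
def personalized_rerank(results, user_topic_prefs, alpha=0.5):
    """Re-rank search results by blending the engine's original score with a
    topical-preference score.

    results: list of dicts {"url": str, "score": float, "topics": {topic: weight}}
    user_topic_prefs: dict {topic: weight} learned from the click history
    alpha: assumed trade-off between original rank and topical preference
    """
    def topic_match(page_topics):
        # dot product between the page topic vector and the user preference vector
        return sum(w * user_topic_prefs.get(t, 0.0) for t, w in page_topics.items())

    return sorted(results,
                  key=lambda r: alpha * r["score"] + (1 - alpha) * topic_match(r["topics"]),
                  reverse=True)

results = [{"url": "a.com", "score": 0.9, "topics": {"sports": 1.0}},
           {"url": "b.com", "score": 0.8, "topics": {"programming": 1.0}}]
print(personalized_rerank(results, {"programming": 0.9, "sports": 0.1}))
```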

4.
Search engine users often encounter the difficulty of phrasing the precise query that could lead to satisfactory search results. Query recommendation is considered an effective assistant in enhancing keyword-based queries in search engines and Web search software. In this paper, we present a Query-URL Bipartite based query reCommendation approach, called QUBiC. It utilizes the connectivity of a query-URL bipartite graph to recommend related queries and can significantly improve the accuracy and effectiveness of personalized query recommendation systems compared with the conventional pairwise-similarity-based approach. The main contribution of the QUBiC approach is its three-phase framework for personalized query recommendations. The first phase is the preparation of queries and their search results returned by a search engine, which generates a historical query-URL bipartite collection. The second phase is the discovery of similar queries by extracting a query affinity graph from the bipartite graph, instead of operating on the original bipartite graph directly using a biclique-based approach or graph clustering. The query affinity graph consists of only queries as its vertices, and its edges are weighted according to a query-URL vector based similarity (dissimilarity) measure. The third phase is the ranking of similar queries. We devise a novel ranking mechanism for ordering the related queries based on the merging distances of a hierarchical agglomerative clustering (HAC). By utilizing the query affinity graph and the HAC-based ranking, we are able to capture the propagation of similarity from query to query by inducing an implicit topical relatedness between queries. Furthermore, the flexibility of the HAC strategy makes it possible for users to interactively participate in the query recommendation process, and helps to bridge the gap between the determinacy of actual similarity values and the indeterminacy of users' information needs, allowing the lists of related queries to be changed from user to user and query to query, thus adaptively recommending related queries on demand. Our experimental evaluation results show that the QUBiC approach is highly efficient and more effective than conventional query recommendation systems, yielding a maximum improvement of about 13.3% in precision.
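To make the bipartite-to-affinity-graph step concrete, here is a small sketch that collapses a query-URL bipartite graph into a query affinity graph using cosine similarity over URL click vectors (the similarity measure and data layout are assumptions for illustration; QUBiC's exact vector-based measure may differ):

```python
from itertools import combinations
import math

def query_affinity_graph(query_urls, min_sim=0.1):
    """Collapse a query-URL bipartite graph into a query affinity graph whose
    edge weights are cosine similarities between the queries' URL click vectors.

    query_urls: dict {query: {url: click_count}}
    returns: dict {(q1, q2): similarity} for pairs at or above min_sim
    """
    def cosine(a, b):
        shared = set(a) & set(b)
        dot = sum(a[u] * b[u] for u in shared)
        norm = (math.sqrt(sum(v * v for v in a.values()))
                * math.sqrt(sum(v * v for v in b.values())))
        return dot / norm if norm else 0.0

    edges = {}
    for q1, q2 in combinations(query_urls, 2):
        sim = cosine(query_urls[q1], query_urls[q2])
        if sim >= min_sim:
            edges[(q1, q2)] = sim
    return edges

graph = query_affinity_graph({
    "python tutorial": {"docs.python.org": 5, "realpython.com": 3},
    "learn python":    {"docs.python.org": 4, "w3schools.com": 2},
    "java streams":    {"docs.oracle.com": 6},
})
print(graph)   # only the two python queries share URLs, so only they are linked
```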

5.
Experienced users who query search engines have a complex behavior. They explore many topics in parallel, experiment with query variations, consult multiple search engines, and gather information over many sessions. In the process they need to keep track of search context — namely useful queries and promising result links, which can be hard. We present an extension to search engines called SearchPad that makes it possible to keep track of ‘search context' explicitly. We describe an efficient implementation of this idea deployed on four search engines: AltaVista, Excite, Google and Hotbot. Our design of SearchPad has several desirable properties: (i) portability across all major platforms and browsers; (ii) instant start requiring no code download or special actions on the part of the user; (iii) no server side storage; and (iv) no added client–server communication overhead. An added benefit is that it allows search services to collect valuable relevance information about the results shown to the user. In the context of each query SearchPad can log the actions taken by the user, and in particular record the links that were considered relevant by the user in the context of the query. The service was tested in a multi-platform environment with over 150 users for 4 months and found to be usable and helpful. We discovered that the ability to maintain search context explicitly seems to affect the way people search. Repeat SearchPad users looked at more search results than is typical on the Web, suggesting that availability of search context may partially compensate for non-relevant pages in the ranking.

6.
In this paper, we propose CYBER, a CommunitY Based sEaRch engine, for information retrieval utilizing community feedback information in a DHT network. In CYBER, each user is associated with a set of user profiles that capture his/her interests. Likewise, a document is associated with a set of profiles, one for each indexed term. A document profile is updated by users who query on the term and consider the document a relevant answer. Thus, the profile acts as a consolidation of users' feedback from the same community, and reflects their interests. In this way, as one user finds a document to be relevant, another user in the same community issuing a similar query will benefit from the feedback provided by the earlier user. Hence, the search quality in terms of both precision and recall is improved. Moreover, we further improve the effectiveness of CYBER by introducing an index tuning technique. By choosing the indexing terms more carefully, community-based relevance feedback is utilized both in building/refining indices and in re-evaluating queries. We first propose a naive scheme, CYBER+, which involves an index tuning technique based on past queries only, and then re-evaluates queries in a separate step. We then propose a more complex scheme, CYBER++, which refines its index based on both past queries and relevance feedback. As the index is built with more selective and accurate terms, the search performance is further improved. We conduct a comprehensive experimental study and the results show the effectiveness of our schemes.
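A toy sketch of the feedback-consolidation idea, where a per-document profile accumulates the query terms of users who judged the document relevant (data structures and the unit weight are assumptions; the DHT layer is omitted entirely):

```python
from collections import defaultdict

class DocumentProfile:
    """Per-document profile updated from community relevance feedback:
    each relevant judgment folds the judging query's terms into the profile."""

    def __init__(self):
        # term -> accumulated weight of queries whose issuers judged the doc relevant
        self.term_weights = defaultdict(float)

    def add_feedback(self, query_terms):
        """A user issued a query with these terms and marked this document relevant."""
        for term in query_terms:
            self.term_weights[term] += 1.0

    def score(self, query_terms):
        """Score the document for a new query from the same community."""
        return sum(self.term_weights.get(t, 0.0) for t in query_terms)

profile = DocumentProfile()
profile.add_feedback(["p2p", "search"])
profile.add_feedback(["p2p", "index", "tuning"])
print(profile.score(["p2p", "search"]))   # earlier feedback benefits later users
```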

7.
The Web hosts a large number of specialized (vertical) search engines, and ordinary users often do not know which one to choose. This paper proposes an automatic query-routing system that directs a user query to a suitable specialized search engine, resolving this difficulty.

8.
Engineers create engineering documents with their own terminologies, and want to search existing engineering documents quickly and accurately during a product development process. Keyword-based search methods have been widely used due to their ease of use, but their search accuracy has often been problematic because of the semantic ambiguity of terminologies in engineering documents and queries. The semantic ambiguity can be alleviated by using a domain ontology. Also, if queries are expanded to incorporate the engineer's personalized information needs, the accuracy of the search results would be improved. Therefore, we propose a framework to search engineering documents with less semantic ambiguity and more focus on each engineer's personalized information needs. The framework includes four processes: (1) developing a domain ontology, (2) indexing engineering documents, (3) learning user profiles, and (4) performing personalized query expansion and retrieval. A domain ontology is developed based on product structure information and engineering documents. Using the domain ontology, terminologies in documents are disambiguated and indexed. Also, a user profile is generated from the domain ontology. Through user profile learning, the user's interests are captured from the relevant documents. During the personalized query expansion process, the learned user profile is used to reflect the user's interests. Simultaneously, the user's searching intent, which is implicitly inferred from the user's task context, is also considered. To retrieve relevant documents, an expanded query in which both the user's interests and intents are reflected is then matched against the document collection. The experimental results show that the proposed approach can substantially outperform both the keyword-based approach and the existing query expansion method in retrieving engineering documents. Reflecting a user's information needs precisely has been identified as the most important factor underlying this notable improvement.
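The personalized query expansion step can be pictured with a small sketch that appends the highest-weighted profile terms to the original query (a generic stand-in for the ontology-based expansion; the profile format and the top_k cutoff are assumptions):

```python
def expand_query(query_terms, user_profile, top_k=3):
    """Expand a keyword query with the highest-weighted terms from a learned
    user profile, skipping terms already present in the query.

    user_profile: dict {term: interest_weight} learned from relevant documents
    """
    candidates = [(t, w) for t, w in user_profile.items() if t not in query_terms]
    candidates.sort(key=lambda tw: tw[1], reverse=True)
    expansion = [t for t, _ in candidates[:top_k]]
    return list(query_terms) + expansion

profile = {"bracket": 0.9, "torque": 0.7, "weld": 0.4, "paint": 0.1}
print(expand_query(["mounting", "bracket"], profile))
# -> ['mounting', 'bracket', 'torque', 'weld', 'paint']
```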

9.
Content analysis of search engine user queries is an important task, since successful exploitation of the content of queries can inform the design of more effective information retrieval algorithms and search engines. Identification of topic changes within a user search session is a key issue in content analysis of search engine user queries. This study proposes an artificial neural network application in the area of search engine research to automatically identify topic changes in a user session by using statistical characteristics of queries, such as time intervals and query reformulation patterns. Sample data logs from the FAST and Excite search engines are selected to train the neural network, and the neural network is then used to identify topic changes in the data log. As a result, almost all the performance measures yielded favourable results.
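For illustration, the per-query-pair features such a classifier could consume might look like the following (the exact feature set used in the paper is not reproduced here; these particular features and names are assumptions):

```python
def topic_shift_features(prev_query, curr_query, prev_time, curr_time):
    """Turn a pair of consecutive queries into statistical features of the kind
    (time interval, query-reformulation overlap) that a neural network
    topic-change classifier could be trained on.

    Queries are plain strings; times are POSIX seconds.
    """
    prev_terms = set(prev_query.lower().split())
    curr_terms = set(curr_query.lower().split())
    overlap = len(prev_terms & curr_terms)
    union = len(prev_terms | curr_terms) or 1
    return {
        "time_interval": curr_time - prev_time,    # long gaps hint at a topic change
        "term_overlap_ratio": overlap / union,     # 0.0 means a full reformulation
        "added_terms": len(curr_terms - prev_terms),
        "dropped_terms": len(prev_terms - curr_terms),
    }

print(topic_shift_features("cheap flights paris", "paris hotels", 0, 45))
```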

10.
In this paper, we tackle the private information retrieval (PIR) problem associated with the use of Internet search engines. We address the desire of a user to retrieve information from the Web without the search provider learning about it. Traditional PIR protocols present two main shortcomings for this application: (i) they assume cooperation by the database, which is not affordable for a real-world search engine like Google, and (ii) their computational complexity is linear in the size of the database, which is unfeasible in the case of the Web. More recent approaches relax the PIR conditions to overcome these limitations and provide some level of privacy. Mostly, they aim to distort server logs regardless of the loss of information that is involved. Server logs are used by search engines for profiling and thereby for providing personalized results, which users increasingly need given the growth of the Web; the same profiles can also be used for targeted advertising. This study focuses on a noncooperative agent for private search that treats profiling as valuable data for both sides of the search process. It is based on the assumption that the user's identity is formed by the union of various areas of interest, or facets. By managing the HTTP connections properly, submitted queries are mapped to different server logs according to these facets. The rationale is that these logs cannot be used for tracing the user while they are still helpful for profiling. We present a personalized query classification approach based on the user's browsing history, and, to provide empirical results, we developed an attacking algorithm against the agent which shows that the disclosure risk is reduced.
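A simplified sketch of the facet-routing idea: assign each query to one interest facet so that queries from different facets can be submitted over separate HTTP sessions and thus land in separate server logs (the facet definitions and the keyword-overlap heuristic are assumptions, not the paper's classifier):

```python
def route_query_to_facet(query, facet_keywords):
    """Assign a query to the user's interest facet with the largest keyword
    overlap; queries with no overlap fall back to a 'default' facet.

    facet_keywords: dict {facet_name: set_of_keywords}
    """
    terms = set(query.lower().split())
    best_facet, best_overlap = "default", 0
    for facet, keywords in facet_keywords.items():
        overlap = len(terms & keywords)
        if overlap > best_overlap:
            best_facet, best_overlap = facet, overlap
    return best_facet

facets = {"health": {"symptom", "diet", "flu"},
          "tech": {"python", "gpu", "linux"}}
print(route_query_to_facet("flu symptom duration", facets))   # -> 'health'
```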

11.
P2P-Based Personalized Web Information Retrieval
To overcome the shortcomings of Web search engines in scalability, collaboration, and personalization, this paper proposes a fully distributed, collaborative, self-organizing approach to personalized Web information retrieval based on Peer-to-Peer networks. It defines a user collaboration and sharing strategy centered on query topics for topic clustering, data organization, and query routing, and designs algorithms and mechanisms for collaboratively generating user interest list vectors, clustering and updating semantically similar queries by topic, building inverted indexes over query sets, and performing semantic routing based on query topics, so as to provide human-centered, collaborative, personalized search. Simulation experiments show that the prototype system speeds up queries, reduces network load, and improves search precision.

12.
Identifying ambiguous queries is crucial to research on personalized Web search and search result diversity. Intuitively, query logs contain valuable information on how many intentions users have when issuing a query. However, previous work showed that user clicks alone are misleading in judging a query as being ambiguous or not. In this paper, we address the problem of learning a query ambiguity model by using search logs. First, we propose enriching a query by mining the documents clicked by users and the relevant follow-up queries in a session. Second, we use a text classifier to map the documents and the queries into predefined categories. Third, we propose extracting features from the processed data. Finally, we apply a state-of-the-art algorithm, the Support Vector Machine (SVM), to learn a query ambiguity classifier. Experimental results verify that the sole use of click-based features or session-based features performs worse than previous work based on top retrieved documents. When we combine the two sets of features, our proposed approach achieves the best effectiveness, specifically 86% in terms of accuracy. It significantly improves on the click-based method by 5.6% and on the session-based method by 4.6%.
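A compact sketch of the final learning step: training an SVM on category-distribution features derived from clicks and session queries. This is illustrative only; it uses scikit-learn with toy data, and the feature layout is an assumption rather than the paper's:

```python
# Binary query-ambiguity classifier on assumed category-distribution features.
from sklearn.svm import SVC
import numpy as np

# Each row: [entropy of clicked-document categories,
#            distinct categories among clicks,
#            entropy of follow-up-query categories,
#            distinct categories among follow-up queries]
X = np.array([
    [0.10, 1, 0.05, 1],   # concentrated on one category -> unambiguous
    [0.92, 4, 0.88, 3],   # spread over many categories  -> ambiguous
    [0.20, 2, 0.15, 1],
    [0.85, 3, 0.90, 4],
])
y = np.array([0, 1, 0, 1])            # 1 = ambiguous, 0 = unambiguous

clf = SVC(kernel="rbf", C=1.0)        # standard RBF-kernel SVM
clf.fit(X, y)
print(clf.predict([[0.80, 3, 0.70, 3]]))   # high spread -> likely ambiguous
```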

13.
Keyword-based search engines such as Google index Web pages for human consumption. Sophisticated as such engines have become, surveys indicate almost 25% of Web searchers are unable to find useful results in the first set of URLs returned (Technology Review, March 2004). The lack of machine-interpretable information on the Web limits software agents from matching human searches to desirable results. Tim Berners-Lee, inventor of the Web, has architected the Semantic Web, in which machine-interpretable information provides an automated means of traversing the Web. A necessary cornerstone application is the search engine capable of bringing the Semantic Web together into a searchable landscape. We implemented a Semantic Web Search Engine (SWSE) that performs semantic search, providing predictable and accurate results to queries. To compare keyword search to semantic search, we constructed the Google CruciVerbalist (GCV), which solves crossword puzzles by reformulating clues into Google queries processed via the Google API. Candidate answers are extracted from the query results. Integrating GCV with SWSE, we quantitatively show how semantic search improves upon keyword search. Mimicking the human brain's ability to create and traverse relationships between facts, our techniques enable Web applications to 'think' using semantic reasoning, opening the door to intelligent search applications that utilize the Semantic Web.

14.
The general public is increasingly using search engines to seek information on risks and threats. Based on a search log from a large search engine spanning three months, this study explores user patterns of query submission and subsequent clicks in sessions for two important risk-related topics, healthcare and information security, and compares them to other randomly sampled sessions. We investigate two session-level metrics reflecting users' interactivity with a search engine: session length and query click rate. Drawing from information foraging theory, we find that session length can be characterized well by the Inverse Gaussian distribution. Among the three types of sessions on different topics (healthcare, information security, and other randomly sampled sessions), we find that healthcare sessions have the most queries and the highest query click rate, and information security sessions have the lowest query click rate. In addition, sessions initiated by users with a greater search engine activity level tend to have more queries and higher query click rates. Among the three types of sessions, search engine activity level shows the strongest effect on query click rate for information security sessions and the weakest for healthcare sessions. We discuss the theoretical and practical implications of the study.
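For reference, the density of the Inverse Gaussian distribution that the study fits to session length is the standard textbook form (not quoted from the paper), with mean \(\mu\) and shape parameter \(\lambda\):

```latex
f(x;\mu,\lambda) \;=\; \sqrt{\frac{\lambda}{2\pi x^{3}}}\,
  \exp\!\left(-\frac{\lambda\,(x-\mu)^{2}}{2\mu^{2}\,x}\right),
  \qquad x > 0,\ \mu > 0,\ \lambda > 0 .
```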

15.
Given a user keyword query, current Web search engines return a list of individual Web pages ranked by their "goodness" with respect to the query. Thus, the basic unit for search and retrieval is an individual page, even though information on a topic is often spread across multiple pages. This degrades the quality of search results, especially for long or uncorrelated (multitopic) queries, in which individual keywords rarely occur together in the same document, so that a single page is unlikely to satisfy the user's information need. We propose a technique that, given a keyword query, generates on the fly new pages, called composed pages, which contain all query keywords. The composed pages are generated by extracting and stitching together relevant pieces from hyperlinked Web pages and retaining links to the original Web pages. To rank the composed pages, we consider both the hyperlink structure of the original pages and the associations between the keywords within each page. Furthermore, we present and experimentally evaluate heuristic algorithms to efficiently generate the top composed pages. The quality of our method is compared to current approaches by using user surveys. Finally, we also show how our techniques can be used to perform query-specific summarization of Web pages.
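The stitching step can be pictured as a greedy cover of the query keywords by page fragments, as in the following sketch (a simplified illustration, not the paper's heuristic algorithms; the fragment representation is an assumption):

```python
def compose_page(fragments, query_keywords):
    """Greedily pick hyperlinked page fragments until every query keyword is
    covered, approximating the idea of stitching a 'composed page' together.

    fragments: list of (source_url, text) pairs
    """
    remaining = {k.lower() for k in query_keywords}
    chosen = []
    while remaining:
        best = max(fragments,
                   key=lambda f: len(remaining & set(f[1].lower().split())),
                   default=None)
        if best is None or not (remaining & set(best[1].lower().split())):
            break   # some keywords cannot be covered by any fragment
        chosen.append(best)
        remaining -= set(best[1].lower().split())
    return chosen

frags = [("a.com", "solar panel installation guide"),
         ("b.com", "government subsidy rates for solar"),
         ("c.com", "panel cleaning tips")]
print(compose_page(frags, ["solar", "subsidy", "installation"]))
```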

16.
We present WebACE, an agent for exploring and categorizing documents on the World Wide Web based on a user profile. The heart of the agent is an unsupervised categorization of a set of documents, combined with a process for generating new queries that is used to search for new related documents and for filtering the resulting documents to extract the ones most closely related to the starting set. The document categories are not given a priori. We present the overall architecture and describe two novel algorithms which provide significant improvement over Hierarchical Agglomeration Clustering and AutoClass algorithms and form the basis for the query generation and search component of the agent. We report on the results of our experiments comparing these new algorithms with more traditional clustering algorithms and we show that our algorithms are fast and scalable.

17.
18.
P. Ferragina  A. Gulli 《Software》2008,38(2):189-225
We propose a (meta-)search engine, called SnakeT (SNippet Aggregation for Knowledge ExtracTion), which queries more than 18 commodity search engines and offers two complementary views on their returned results. One is the classical flat ranked list; the other consists of a hierarchical organization of these results into folders created on the fly at query time and labeled with intelligible sentences that capture the themes of the results contained in them. Users can browse this hierarchy with various goals: knowledge extraction, query refinement and personalization of search results. In this novel form of personalization, the user is requested to interact with the hierarchy by selecting the folders whose labels (themes) best fit her query needs. SnakeT then personalizes on the fly the original ranked list by filtering out those results that do not belong to the selected folders. Consequently, this form of personalization is carried out by the users themselves and is thus fully adaptive, privacy preserving, scalable and non-intrusive for the underlying search engines. We have extensively tested SnakeT and compared it against the best available Web-snippet clustering engines. SnakeT is efficient and effective, and shows that a mutual reinforcement relationship between ranking and Web-snippet clustering does exist. In fact, the better the ranking of the underlying search engines, the more relevant the results from which SnakeT distills the hierarchy of labeled folders, and hence the more useful this hierarchy is to the user. Vice versa, the more intelligible the folder hierarchy, the more effective the personalization offered by SnakeT on the ranking of the query results.
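The personalization step, filtering the original ranked list by the folders the user selects, reduces to something like the following sketch (data structures are assumptions; the on-the-fly folder labeling itself is not shown):

```python
def personalize_by_folders(ranked_results, folder_assignments, selected_folders):
    """Keep only the results whose folder labels intersect the user's selection,
    preserving the original ranking order.

    ranked_results: list of result ids, best first
    folder_assignments: dict {result_id: set_of_folder_labels}
    selected_folders: set of folder labels chosen by the user
    """
    return [r for r in ranked_results
            if folder_assignments.get(r, set()) & selected_folders]

ranked = ["r1", "r2", "r3", "r4"]
folders = {"r1": {"jaguar car"}, "r2": {"jaguar animal"},
           "r3": {"jaguar car", "reviews"}, "r4": {"jaguar animal"}}
print(personalize_by_folders(ranked, folders, {"jaguar car"}))   # ['r1', 'r3']
```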

19.
A common task of Web users is querying structured information from Web pages. To realize this scenario, we propose a novel query processor for systematically discovering instances of semantic relations in Web search results and joining these relation instances into complex result tuples with conjunctive queries. Our query processor transforms a structured user query into keyword queries that are submitted to a search engine, forwards search results to a relation extractor, and then combines relations into complex result tuples. The processor automatically learns discriminative and effective keywords for different types of semantic relations. Thereby, our query processor leverages the index of a search engine to query potentially billions of pages. Unfortunately, relation extractors may fail to return a relation for a result tuple. Moreover, user-defined data sources may not return at least k complete result tuples. Therefore, we propose an adaptive routing model based on information theory for retrieving missing attributes of incomplete result tuples. The model determines the most promising next incomplete tuple and attribute type for returning any-k complete result tuples at any point during the query execution process. We report a thorough experimental evaluation over multiple relation extractors. Our query processor returns complete result tuples while processing only very few Web pages.
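The step that combines extracted relation instances into complex result tuples is, at its core, a join; here is a minimal sketch under assumed attribute names (not the paper's implementation):

```python
def join_relations(relation_a, relation_b, join_attr_a, join_attr_b):
    """Join two sets of extracted relation instances on matching attribute
    values to form complex result tuples (a plain nested-loop join).

    relation_a, relation_b: lists of dicts, one dict per extracted instance
    """
    results = []
    for a in relation_a:
        for b in relation_b:
            if a.get(join_attr_a) and a.get(join_attr_a) == b.get(join_attr_b):
                results.append({**a, **b})
    return results

# e.g. company-headquarters instances joined with company-CEO instances
headquarters = [{"company": "Acme", "city": "Berlin"}]
ceos = [{"firm": "Acme", "ceo": "J. Doe"}, {"firm": "Globex", "ceo": "A. Roe"}]
print(join_relations(headquarters, ceos, "company", "firm"))
```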

20.
王继民  龚笔宏  孟涛 《计算机工程》2006,32(14):25-26,6
When users query a Web search engine, a single session may involve one or several topics. This paper studies and analyzes multitasking Web queries on 北大天网 (Tianwang), a large-scale Chinese search engine system. The results show that more than one third of users perform multitasking Web queries; over half of the multitasking sessions contain two distinct topics and between 2 and 7 queries; the mean duration of a multitasking session is twice that of an ordinary session; Tianwang users' multitasking queries concentrate on three topics, namely computers, entertainment, and education; and nearly one quarter of multitasking sessions contain information that cannot be classified with certainty. Using association analysis, the paper uncovers several relationships between users' query topics.
