Similar Documents
A total of 19 similar documents were found (search time: 156 ms).
1.
To improve the efficiency of web page ranking algorithms and the retrieval quality of search engines, an improved PageRank algorithm that fuses user feedback with content relevance is proposed. A vector space model is used to compute topic relevance between pages, yielding a topic-relevance weight for each page. Statistical analysis of page click counts yields an incremental click-volume weight. The two weights are combined to jointly influence the distribution of PR (PageRank) values. Simulation experiments compare the algorithm's results with those of other algorithms and confirm that it outperforms them.
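The abstract does not give the exact update rule, so the following is only a minimal Python sketch of how such a fusion could look. The per-page weights `topic_relevance` (e.g., cosine similarity from a vector space model) and `click_weight` (normalized click increments), the balance factor `alpha`, and all function names are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch (not the paper's exact formulation): PageRank where the rank a
# page passes to each out-link is proportional to a combined weight of the
# target page, fusing topic relevance and click volume.
def fused_pagerank(links, topic_relevance, click_weight,
                   alpha=0.6, damping=0.85, iterations=50):
    """links: {page: [out-linked pages]}; topic_relevance/click_weight: {page: float}."""
    pages = list(links)
    pr = {p: 1.0 / len(pages) for p in pages}

    # Combined per-page weight; alpha balances topic relevance vs. click volume.
    w = {p: alpha * topic_relevance.get(p, 0.0) + (1 - alpha) * click_weight.get(p, 0.0)
         for p in pages}

    for _ in range(iterations):
        new_pr = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            total = sum(w[q] for q in outs) or 1.0
            for q in outs:
                # Distribute PR in proportion to the target's combined weight,
                # instead of splitting it evenly as classic PageRank does.
                new_pr[q] += damping * pr[p] * (w[q] / total)
        pr = new_pr
    return pr

# Example with hypothetical pages and weights.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
topic = {"A": 0.9, "B": 0.4, "C": 0.7}
clicks = {"A": 0.2, "B": 0.8, "C": 0.5}
print(fused_pagerank(links, topic, clicks))
```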

2.
An Improved Page Ranking Algorithm Based on PageRank (cited in total: 5; self-citations: 3; citations by others: 2)
The PageRank algorithm is first introduced in general terms, and existing link-structure-based improvements are reviewed. On this basis, it is pointed out that PageRank assigns the same PageRank value to different pages, which degrades ranking quality. An improved algorithm, HCPR, based on multi-level classification is proposed, and PageRank and HCPR are tested and compared. Experimental results show that HCPR's ranking improves relevance by about 15.3% over PageRank.

3.
王非  吴庆波  杨沙洲 《计算机工程》2009,35(21):247-249
Page ranking is one of the core technologies of search engines. This paper describes why semantic search is needed in Web 2.0 communities, analyzes the factors that influence page ranking, and adapts search engine ranking algorithms to a search module for Web 2.0 communities. Building on improved TF/IDF and PageRank algorithms, a semantic-ranking search module is implemented on an open-source Web 2.0 community development platform. Test results show that the ranking algorithm locates content precisely and places effective results near the top.
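The abstract only names the ingredients (improved TF/IDF plus PageRank). As a rough illustration, the sketch below combines a simple TF-IDF query-relevance score with a precomputed PageRank value into one ranking score; the weighting factor `beta`, the helper names, and the example documents are assumptions, not the paper's design.

```python
# Rough illustration: rank documents by a linear mix of TF-IDF query relevance
# and a link-based PageRank score (weights and names are assumed, not from the paper).
import math
from collections import Counter

def tfidf_scores(query, docs):
    """docs: {doc_id: text}. Returns {doc_id: TF-IDF score for the query terms}."""
    n = len(docs)
    tokenized = {d: text.lower().split() for d, text in docs.items()}
    df = Counter()
    for terms in tokenized.values():
        df.update(set(terms))
    scores = {}
    for d, terms in tokenized.items():
        tf = Counter(terms)
        scores[d] = sum((tf[t] / len(terms)) * math.log((n + 1) / (df[t] + 1))
                        for t in query.lower().split() if t in tf)
    return scores

def combined_rank(query, docs, pagerank, beta=0.7):
    rel = tfidf_scores(query, docs)
    # Final score: beta * text relevance + (1 - beta) * link-based popularity.
    return sorted(docs, key=lambda d: beta * rel[d] + (1 - beta) * pagerank.get(d, 0.0),
                  reverse=True)

docs = {"d1": "semantic search in web communities",
        "d2": "ranking pages with pagerank",
        "d3": "community platforms and semantic ranking of search results"}
pagerank = {"d1": 0.2, "d2": 0.5, "d3": 0.3}
print(combined_rank("semantic search ranking", docs, pagerank))
```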

4.
The classic PageRank algorithm distributes a page's outgoing weight evenly across all linked pages, which leads to inaccuracy in the computed PR values. By analyzing the importance of the linked pages, the distribution strategy for outgoing weights is optimized. Experimental results show that the improved PageRank algorithm produces better page rankings.

5.
TS-PageRank: An Algorithm Based on a Topic Similarity Model (cited in total: 1; self-citations: 1; citations by others: 1)
PageRank is the core algorithm of the well-known search engine Google, but it suffers from topic drift, which leaves too many pages unrelated to the query topic in the results. Building on an analysis of PageRank and related improvements, a topic similarity model based on virtual documents and a TS-PageRank algorithm framework built on that model are proposed. Choosing different similarity models yields different TS-PageRank algorithms, forming a family of page ranking algorithms. Theoretical analysis and numerical simulation show that the algorithm can greatly reduce topic drift, and thus improve query efficiency and quality, without requiring extra text information or increasing the algorithm's time or space complexity.
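The framework is parameterized by the similarity model, and the abstract leaves its exact use open. One common way to inject topic similarity into PageRank is to bias the teleportation vector by each page's similarity to the query topic; the sketch below shows only that generic idea, with hypothetical similarity values, and is not the paper's TS-PageRank.

```python
# Generic illustration (not the paper's exact TS-PageRank): bias PageRank's
# teleportation vector by each page's similarity to the query topic, so that
# random jumps prefer on-topic pages and topic drift is reduced.
def topic_biased_pagerank(links, topic_similarity, damping=0.85, iterations=50):
    """links: {page: [out-links]}; topic_similarity: {page: similarity in [0, 1]}."""
    pages = list(links)
    total_sim = sum(topic_similarity.values()) or 1.0
    # Teleport distribution proportional to topic similarity; any similarity
    # model can be plugged in here, which is what yields a family of algorithms.
    teleport = {p: topic_similarity.get(p, 0.0) / total_sim for p in pages}
    pr = dict(teleport)

    for _ in range(iterations):
        new_pr = {p: (1 - damping) * teleport[p] for p in pages}
        for p, outs in links.items():
            if not outs:
                continue
            share = damping * pr[p] / len(outs)
            for q in outs:
                new_pr[q] += share
        pr = new_pr
    return pr

links = {"A": ["B"], "B": ["A", "C"], "C": ["A"]}
similarity = {"A": 0.9, "B": 0.1, "C": 0.6}   # hypothetical query-topic similarities
print(topic_biased_pagerank(links, similarity))
```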

6.
An Improvement of Lucene's Page Ranking Algorithm (cited in total: 3; self-citations: 1; citations by others: 2)
After analyzing the existing term-frequency/position weighting method, the Direct Hit algorithm, the PageRank algorithm, and Lucene's page ranking algorithm, the ideas of the first three well-known algorithms are applied to Lucene's ranking, and a Lucene-based vertical search engine for the sugar industry is designed, with a focus on its retrieval functionality. Finally, experiments on the designed sugar-industry search engine validate the improved Lucene ranking algorithm; the results show that the improved ranking algorithm raises the quality of retrieval results and returns information to users more accurately.

7.
赵亚娟  闫娜 《数字社区&智能家居》2014,(27):6365-6366,6368
The sheer volume of information on the Internet provides people with endless material but also makes information retrieval more difficult, so search engines that can return high-quality results quickly and efficiently are favored by users. In web search, PageRank and HITS are important link-based ranking algorithms, widely used in commercial engines such as Baidu and Google. However, PageRank has weaknesses that make it vulnerable to attacks by spam pages and hinder high-quality information access. It is therefore necessary to improve the PageRank algorithm so as to improve result quality and the efficiency and accuracy of information retrieval. Against this background, this paper analyzes improvements to PageRank so that information can circulate effectively and high-quality pages receive more attention.

8.
An analysis of the roles that users play with respect to web pages shows that traditional PageRank-based result ranking falls short because it does not take into account how all roles evaluate page importance. To address this, a comprehensive page ranking algorithm that combines the evaluations of all roles, ComPageRank (CPR), and a ClickthroughRank (CTR) algorithm based on click-volume analysis are proposed. Experimental results show that, compared with ranking algorithms typified by PageRank, CPR is more comprehensive and reasonable.

9.
The difficulty of retrieving Web services has slowed their adoption and development. After implementing a Web service search engine, WSSE, ranking the services became the next problem to solve. By analyzing the distribution and interrelationships of Web services through the crawling behavior of a Web service crawler, and drawing on research into the well-known PageRank page ranking algorithm and its improvements, the WSRank algorithm is proposed. It iteratively computes a ranking value for each service and sorts the services in non-increasing order of that value. Experiments show that the algorithm improves the accuracy of Web service retrieval.

10.
Analysis and Improvement of the PageRank Algorithm (cited in total: 2; self-citations: 0; citations by others: 2)
王德广  周志刚  梁旭 《计算机工程》2010,36(22):291-292
After analyzing PageRank's tendency to favor old pages, its topic drift, its even splitting of page weights, and its neglect of user browsing interests, the algorithm is improved: page PR values are recomputed with important factors such as page modification date, page text content, site authority, and user interest taken into account. Experimental results show that the improved algorithm raises the accuracy of search engine page ranking and user satisfaction with retrieval results.

11.
In this article we first explain the knowledge extraction (KE) process from the World Wide Web (WWW) using search engines. Then we explore the PageRank algorithm of the Google search engine (a well-known link-based search engine) together with its hidden Markov analysis. We also examine one of the problems of link-based ranking algorithms, called hanging or dangling pages (pages without any forward links). The presence of these pages affects the ranking of Web pages, and some hanging pages may contain important information that cannot be neglected by the search engine during ranking. We propose methodologies to handle hanging pages and compare them. We also introduce the TrustRank algorithm (an algorithm that handles spamming problems in link-based search engines) and include it in our proposed methods so that they can combat Web spam. We implemented the PageRank and TrustRank algorithms and modified them to implement our proposed methodologies.
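The article compares several ways of dealing with hanging pages; a standard baseline is to redistribute the rank of dangling nodes uniformly over all pages at each iteration. The sketch below shows only that baseline treatment under assumed names and parameters, not the article's proposed methodologies or its TrustRank combination.

```python
# Baseline treatment of hanging/dangling pages (pages with no forward links):
# at each iteration their rank is redistributed uniformly over all pages, so
# no probability mass is lost.  A common default, not the article's method.
def pagerank_with_dangling(links, damping=0.85, iterations=50):
    pages = set(links) | {q for outs in links.values() for q in outs}
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}
    dangling = [p for p in pages if not links.get(p)]

    for _ in range(iterations):
        dangling_mass = sum(pr[p] for p in dangling)
        new_pr = {p: (1 - damping) / n + damping * dangling_mass / n for p in pages}
        for p, outs in links.items():
            if not outs:
                continue
            share = damping * pr[p] / len(outs)
            for q in outs:
                new_pr[q] += share
        pr = new_pr
    return pr

# "D" is a hanging page: it is linked to but has no forward links.
links = {"A": ["B", "D"], "B": ["C"], "C": ["A"], "D": []}
print(pagerank_with_dangling(links))
```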

12.
Search engines retrieve and rank Web pages that are not only relevant to a query but also important or popular with users. This popularity has been studied by analyzing the links between Web resources. Link-based page ranking models such as PageRank and HITS assign a global weight to each page regardless of its location, and this popularity measurement has proven successful for general search engines. Unlike general search engines, however, location-based search engines should retrieve and rank higher the pages that are more popular locally. The best results for a location-based query are those that are not only relevant to the topic but also popular with, or cited by, local users. Current ranking models are often less effective for these queries because they cannot estimate local popularity. We offer a model for calculating the local popularity of Web resources using backlink locations. Our model automatically assigns the correct locations to links and content and uses them to compute a new geo-rank score for each page. The experiments show more accurate geo-ranking of search engine results when this model is used to process location-based queries.

13.
Most Web pages contain location information, which is usually neglected by traditional search engines. Queries combining location and textual terms are called spatial textual Web queries. Based on the fact that traditional search engines pay little attention to the location information in Web pages, this paper studies a framework that exploits location information for Web search. The proposed framework consists of an offline stage that extracts focused locations for crawled Web pages and an online ranking stage that performs location-aware ranking of search results. The focused locations of a Web page are the locations most appropriately associated with it. In the offline stage, we extract the focused locations and keywords from Web pages and map each keyword to specific focused locations, forming a set of <keyword, location> pairs. In the online query processing stage, we extract keywords from the query and compute ranking scores based on location relevance and the location-constrained score for each query keyword. Experiments on several real datasets crawled from nj.gov, the BBC, and the New York Times show that our focused location extraction outperforms previous methods and that the proposed ranking algorithm performs best across different spatial textual queries.
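As a simplified sketch of the two-stage idea (offline <keyword, location> extraction, online location-aware scoring), the code below looks up each query keyword's focused locations and boosts documents whose focused locations include the query location. The data layout, the mixing weight `lam`, and all names are illustrative assumptions, not the paper's actual framework.

```python
# Simplified sketch of location-aware ranking: offline we assume <keyword, location>
# pairs per document; online we mix a keyword-match score with a score for how
# well the matched keywords' focused locations cover the query location.
def location_aware_rank(query_terms, query_location, docs, lam=0.5):
    """docs: {doc_id: {"keywords": set, "locations": {keyword: set of locations}}}."""
    ranked = []
    for doc_id, info in docs.items():
        matched = [t for t in query_terms if t in info["keywords"]]
        text_score = len(matched) / max(len(query_terms), 1)
        # Location-constrained score: fraction of matched keywords whose focused
        # locations include the query location.
        if matched:
            loc_hits = sum(1 for t in matched
                           if query_location in info["locations"].get(t, set()))
            loc_score = loc_hits / len(matched)
        else:
            loc_score = 0.0
        ranked.append((lam * text_score + (1 - lam) * loc_score, doc_id))
    return [d for _, d in sorted(ranked, reverse=True)]

# Hypothetical documents with pre-extracted focused locations.
docs = {
    "d1": {"keywords": {"flood", "warning"}, "locations": {"flood": {"new jersey"}}},
    "d2": {"keywords": {"flood"}, "locations": {"flood": {"london"}}},
}
print(location_aware_rank(["flood", "warning"], "new jersey", docs))
```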

14.
To address the traditional PageRank algorithm's even splitting of link weights and its neglect of user interests, a page ranking algorithm based on learning automata and user interest, LUPR, is proposed. In the proposed method, each web page is assigned a learning automaton whose role is to determine the weights of the hyperlinks between pages. Through further analysis of user behavior, a user's browsing actions are used to measure their interest in a page, yielding an interest factor. The algorithm then computes each page's rank from the hyperlinks between pages and users' interest in them. Simulation experiments show that, compared with the traditional PageRank and WPR algorithms, the improved LUPR algorithm raises the accuracy of information retrieval and user satisfaction to a certain extent.

15.
Search engine result pages (SERPs) for a specific query are constructed according to several mechanisms. One of them consists in ranking Web pages by importance, regardless of their semantics: relevance to a query is not enough to provide high-quality results, and popularity is used to arbitrate between equally relevant Web pages. The best-known algorithm that ranks Web pages by popularity is PageRank. The term Webspam was coined to denote Web pages created with the sole purpose of fooling ranking algorithms such as PageRank; the goal of Webspam is to promote a target page by increasing its rank. It is important for Web search engines to spot and discard Webspam so that users receive an unbiased list of results. Webspam techniques evolve constantly to remain effective, but most of the time they still consist in building a specific linking architecture around the target page to increase its rank. In this paper we study the effects of node aggregation on Google's well-known ranking algorithm (PageRank) in the presence of Webspam. Our node aggregation methods construct clusters of nodes that are treated as a single node in the PageRank computation. Since the Web graph is far too big for classic clustering techniques, we present four lightweight aggregation techniques suited to its size. Experimental results on the WEBSPAM-UK2007 dataset show the interest of the approach, which is further confirmed by statistical evidence.
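The sketch below shows only the generic mechanics of node aggregation: collapse a graph according to a given cluster assignment (each cluster becomes a single node) and run ordinary PageRank on the collapsed graph. The cluster map is a hypothetical input; the paper's four lightweight aggregation techniques are not reproduced here.

```python
# Generic mechanics of node aggregation before PageRank.  The cluster map below
# is a hypothetical input, not one of the paper's aggregation techniques.
def collapse_graph(links, cluster_of):
    collapsed = {}
    for p, outs in links.items():
        cp = cluster_of[p]
        collapsed.setdefault(cp, set())
        for q in outs:
            cq = cluster_of[q]
            if cq != cp:                     # drop intra-cluster edges
                collapsed[cp].add(cq)
            collapsed.setdefault(cq, set())
    return {c: sorted(outs) for c, outs in collapsed.items()}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    pr = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_pr = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            if not outs:
                continue
            share = damping * pr[p] / len(outs)
            for q in outs:
                new_pr[q] += share
        pr = new_pr
    return pr

links = {"a1": ["a2", "b1"], "a2": ["a1"], "b1": ["b2"], "b2": ["a1"]}
cluster_of = {"a1": "A", "a2": "A", "b1": "B", "b2": "B"}   # hypothetical clusters
print(pagerank(collapse_graph(links, cluster_of)))
```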

16.
The rapid growth of web data has placed enormous storage and service pressure on search engines, and large amounts of redundant, low-quality, and even spam data waste a great deal of their storage and computing capacity. In this context, how to build a web page quality assessment framework and algorithms suited to the real application environment of the World Wide Web has become an important research topic in information retrieval. Building on previous work, and with the participation of web users and page designers, this paper proposes a page quality evaluation framework of thirteen factors across four dimensions: authority and reputation, content, timeliness, and page presentation. Annotation data show that the framework is practical to apply and that annotation results are fairly consistent. Finally, an ordinal logistic regression model is used to analyze the importance of each dimension, leading to some instructive conclusions: whether a page's content and timeliness meet user needs is an important factor in determining its quality.

17.
Application of an Improved PageRank to Web Information Gathering (cited in total: 7; self-citations: 0; citations by others: 7)
PageRank is an algorithm for ranking web pages that evaluates page importance from the citation relationships between pages. However, because it assigns the same weight to every outgoing link and ignores the relevance of pages to the query topic, it is prone to topic drift. After analyzing several PageRank variants, a new PageRank algorithm based on topic-oriented page blocks is proposed. The algorithm segments each page into blocks according to its structure, propagates different PageRank values through the links in each block according to the block's relevance to the topic, and can feed relevance information back to the blocks based on the links already visited. Experiments show that the proposed algorithm noticeably improves the precision of search results.

18.
Accelerated Evaluation Algorithm: A New Method for Improving the Quality of Web Structure Mining (cited in total: 13; self-citations: 1; citations by others: 13)
Web structure mining can locate high-quality pages on the Web and greatly improves the retrieval precision of search engines. Current Web structure mining algorithms evaluate a page by counting the hyperlinks pointing to it and weighing the quality of their source nodes. Algorithms based on counting links suffer from a serious flaw: page evaluation becomes polarized, so established high-quality pages regularly appear at the top of Web search results while high-quality pages newly added to the Web are hard for users to find. An accelerated evaluation algorithm is proposed to overcome this shortcoming of existing hyperlink analysis, and the algorithm is tested and validated on a search engine platform.

19.
Web spam attempts to influence search engine ranking algorithms in order to boost the rankings of specific web pages in search engine results. Cloaking is a widely adopted technique for concealing web spam by returning different content to search engines' crawlers than is displayed in a web browser. Previous work on cloaking detection is mainly based on differences in terms and/or links between multiple copies of a URL retrieved from the web browser and search engine crawler perspectives. This work presents three methods that use differences in tags to determine whether a URL is cloaked. Since the tags of a web page generally do not change as frequently or as significantly as its terms and links, tag-based cloaking detection can work more effectively than term- or link-based methods. The proposed methods are tested on a dataset of URLs covering short-, medium-, and long-term user interests. Experimental results indicate that the tag-based methods outperform term- or link-based methods in both precision and recall. Moreover, a Weka J4.8 classifier using a combination of term and tag features yields an accuracy rate of 90.48%.
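The three tag-based methods are not spelled out in the abstract; the sketch below only illustrates the general idea of tag-based detection: extract the HTML tag multiset from the browser copy and the crawler copy of a URL and flag the URL as cloaked if the multisets differ by more than a threshold. The threshold value and extraction details are assumptions.

```python
# General idea of tag-based cloaking detection: compare the HTML *tags* (rather
# than terms or links) of the copy served to a browser and the copy served to a
# crawler; a large difference in the tag multisets suggests cloaking.
# The 0.2 threshold is an illustrative assumption, not the paper's tuned value.
import re
from collections import Counter

def tag_multiset(html):
    """Extract opening/closing tag names, e.g. 'div', '/div', ignoring attributes."""
    return Counter(m.group(1).lower() for m in re.finditer(r"<\s*(/?\w+)", html))

def looks_cloaked(browser_html, crawler_html, threshold=0.2):
    a, b = tag_multiset(browser_html), tag_multiset(crawler_html)
    diff = sum((a - b).values()) + sum((b - a).values())   # symmetric difference size
    total = sum(a.values()) + sum(b.values())
    return total > 0 and diff / total > threshold

browser_copy = "<html><body><div><img src='x'><p>offer</p></div></body></html>"
crawler_copy = "<html><body><p>keyword keyword keyword</p><a href='y'>link</a></body></html>"
print(looks_cloaked(browser_copy, crawler_copy))
```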
