Similar Articles
19 similar articles found (search time: 125 ms)
1.
Design of a Heritrix-based web crawler system for product information search   Cited by: 1 (self: 0, others: 1)
This paper discusses a crawler system for collecting product information built on the open-source Heritrix framework. Addressing shortcomings of the Heritrix open-source crawler project and the characteristics of product data collection, the project designs classes that perform directed crawling of pages containing specific content, thereby improving Heritrix, and introduces the ELFHash algorithm into URL hashing to raise crawling efficiency, providing a reliable data source for product-oriented search systems and data mining.
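ELFHash is a standard string-hash algorithm; a minimal Python sketch of how it might be applied to URL hashing as the abstract describes (the table size and UTF-8 encoding are illustrative assumptions):

```python
def elf_hash(s: str, table_size: int = 1 << 20) -> int:
    """Classic ELF string hash; maps a URL to a bucket index so that
    visited URLs spread evenly across the hash table."""
    h = 0
    for byte in s.encode("utf-8"):
        h = (h << 4) + byte
        g = h & 0xF0000000
        if g:
            h ^= g >> 24
        h &= ~g & 0xFFFFFFFF   # clear the top nibble, keep 32 bits
    return h % table_size

# Deduplicate URLs by bucket before adding them to the crawl frontier.
print(elf_hash("http://example.com/item/123"))
```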

2.
Improving topic crawlers through web page title analysis   Cited by: 3 (self: 1, others: 2)
With the explosive growth of online information resources, existing search engines can no longer satisfy the need to obtain accurate information quickly, so it has become urgent to equip search engines with topic crawlers whose search content is more precise. However, the two basic page-fetching strategies adopted by current topic crawlers are rather inefficient. This paper proposes an improvement to topic crawlers based on web page title analysis, compares the results before and after introducing title analysis, demonstrates the feasibility and operability of the design, and optimizes the topic crawler's fetching of specific information of the same type.
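The abstract does not give its scoring formula; a minimal sketch of one way title analysis could rank candidate links, using keyword overlap between a page title and the topic terms (the tokenizer and the example data are assumptions):

```python
def title_relevance(title: str, topic_terms: set) -> float:
    """Fraction of topic terms that occur in the page title."""
    words = set(title.lower().split())
    return len(words & topic_terms) / len(topic_terms) if topic_terms else 0.0

# Fetch links with the most topic-relevant titles first.
topic = {"crawler", "search"}
links = [("Focused crawler design notes", "/a"), ("Holiday photos", "/b")]
links.sort(key=lambda item: title_relevance(item[0], topic), reverse=True)
print([url for _, url in links])   # ['/a', '/b']
```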

3.
Building on a study of existing topic crawlers, this paper proposes a topic crawler based on a statistical model. It analyzes the information available during crawling and filters URLs using scores computed by the statistical model, effectively addressing retrieval for users who prefer specific topics and the indexing of Web information. Experimental results show that, compared with topic crawlers based on link and page-content analysis, this crawler fetches more topic-relevant pages while retrieving fewer pages overall, improving crawling precision.
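The abstract does not name its statistical model; a minimal sketch assuming a Naive-Bayes-style log-odds score over anchor-text tokens, used to filter URLs (the token counts and thresholding are illustrative):

```python
import math

# Toy token statistics collected from pages already judged
# relevant/irrelevant during the crawl (illustrative numbers).
relevant_counts = {"crawler": 40, "topic": 30, "recipe": 1}
irrelevant_counts = {"crawler": 2, "topic": 3, "recipe": 50}

def url_score(anchor_text: str) -> float:
    """Log-odds that a link is topic-relevant, given its anchor text."""
    score = 0.0
    for tok in anchor_text.lower().split():
        p_rel = relevant_counts.get(tok, 0) + 1      # Laplace smoothing
        p_irr = irrelevant_counts.get(tok, 0) + 1
        score += math.log(p_rel / p_irr)
    return score

# URLs whose score falls below a cutoff are filtered out.
print(url_score("topic crawler survey"))   # positive -> keep
print(url_score("best recipe ideas"))      # negative -> filter
```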

4.
Web crawlers collect Internet information and provide search services. This design develops a web crawler on the Lucene.NET platform that can fetch and parse specific web pages, extract the useful information they contain, index the fetched data and store it on the server's hard disk, while filtering out useless information. The system has a friendly interface and is accurate and efficient.

5.
A survey of focused crawler technology   Cited by: 51 (self: 1, others: 50)
周立柱, 林玲. 《计算机应用》, 2005, 25(9): 1965-1969
The rapid development of the Internet poses enormous challenges for finding and discovering information on the World Wide Web. For the topic- or domain-related queries raised by most users, traditional general-purpose search engines often cannot return satisfactory result pages. To overcome this shortcoming, topic-oriented focused crawling was proposed, and the focused crawler has since become one of the hot research topics concerning the Web. This paper surveys the area: it gives the basic concept of the focused crawler and outlines its working principle; then, based on the current state of research, it systematically introduces and analyzes the key technologies of focused crawlers (crawl target description, page analysis algorithms, and page search strategies). On this basis, it proposes several future research directions, including crawler techniques oriented toward data analysis and mining, topic description and definition, discovery of related resources, Web data cleaning, and expansion of the search space.

6.
Research on a topic crawler based on PageRank and Bagging   Cited by: 3 (self: 0, others: 3)
To overcome topic drift in topic crawlers and improve search engine precision and recall, this paper proposes a topic crawler design based on the PageRank and Bagging algorithms. The system is divided into a crawling module and a topic-relevance analysis module. An improved PageRank algorithm refines the crawler's search strategy for page traversal and fetching. Page topics are represented with a vector space model, and a Bagging-based page topic classifier performs relevance analysis to filter out off-topic pages. Experimental results show that the method achieves good results both in crawling performance and in the precision of topic-relevant pages.
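The improved PageRank variant is specific to the paper; a minimal sketch of the standard power iteration it starts from (the toy graph, damping factor, and iteration count are illustrative):

```python
def pagerank(links: dict, d: float = 0.85, iters: int = 50) -> dict:
    """Standard PageRank power iteration over an adjacency dict."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            if not outs:                     # dangling node: spread evenly
                for q in pages:
                    new[q] += d * rank[p] / n
            else:
                for q in outs:
                    new[q] += d * rank[p] / len(outs)
        rank = new
    return rank

# Tiny illustrative graph.
print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))
```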

7.
Topic crawlers efficiently fetch pages on a specific topic and are one of the core technologies of vertical search engines. This paper proposes a topic crawler framework based on domain ontology: an ontology-based relevance computation predicts both the topic relevance of a link and the relevance of page content to the topic, and these predictions determine the crawler's next crawling path so that the search path is kept as short as possible. Comparative experiments show that the proposed method effectively improves the precision and recall of the topic crawler's page fetching.

8.
Today the Internet aggregates a wide variety of information related to rainstorm disasters, yet manually searching web pages for it is inefficient, which makes topic web crawlers very important. Building on a general-purpose crawler, and in order to improve the accuracy of topic-relevance computation and prevent topic drift, this paper proposes a comprehensive hyperlink priority evaluation method that combines page content with link structure: it jointly computes the topic relevance of a link's anchor text, the topic relevance of the page containing the link, the PageRank value of the page the link points to, and that page's topic relevance. In addition, to counter the search's tendency to fall into local optima, a topic crawler algorithm is designed, for the first time, that combines the crawler's recorded host history with simulated annealing. Crawling experiments on the rainstorm-disaster topic show that, for the same number of fetched pages, the proposed algorithm retrieves more topic-relevant pages than the breadth-first search (BFS) and optimal priority search (OPS) strategies, clearly improving the crawler's precision.
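The paper's annealing schedule is not reproduced here; a minimal sketch of the Metropolis acceptance rule such a crawler could use when deciding whether to follow a lower-priority link, letting the search escape a locally optimal host (the scores, starting temperature, and decay rate are assumptions):

```python
import math
import random

def accept(candidate_score: float, best_score: float, temperature: float) -> bool:
    """Metropolis rule: always take better links; sometimes take worse
    ones, with probability decaying as the temperature drops."""
    if candidate_score >= best_score:
        return True
    return random.random() < math.exp((candidate_score - best_score) / temperature)

# Temperature decays as the crawl proceeds, so early exploration
# gives way to exploitation (the schedule is an illustrative assumption).
T = 1.0
for step in range(5):
    print(accept(0.4, 0.7, T))
    T *= 0.9
```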

9.
With the rapid development of the Internet, more and more users raise query needs tied to a topic or domain, which traditional general-purpose search engines can no longer satisfy. To overcome their shortcomings, researchers have proposed topic-oriented crawlers. This paper first defines the topic web crawler, then presents its three key technologies (crawl targets, page search strategies, and page topic-relevance algorithms), and finally outlines some future research directions for topic crawlers.

10.
A survey of research on topic web crawlers   Cited by: 3 (self: 0, others: 3)
Online information resources are growing exponentially, and topic web crawlers have emerged in response to users' increasingly personalized needs. A topic web crawler is a program that downloads pages on a specific topic: using information obtained while collecting pages, it fetches only pages relevant to that topic. Applications such as search engines built on topic crawlers and domain corpora constructed with them are already in wide use. This paper first introduces the definition and working principle of topic crawlers, then reviews recent research at home and abroad, comparing the strengths and weaknesses of various crawling strategies and related algorithms, and finally suggests future research directions for topic web crawlers.

11.
The Web is flooded with data. While the crawler is responsible for accessing web pages and passing them to the indexer so that they become available to search engine users, the rate at which these pages change makes it necessary for the crawler to employ refresh strategies that deliver updated/modified content. Furthermore, the deep web is the part of the web that holds alarmingly abundant amounts of quality data (compared to the normal/surface web) but is not technically accessible to a search engine's crawler. Existing deep web crawl methods access deep web data through the result pages generated by filling forms with a set of queries and querying the underlying web databases. However, these methods cannot maintain the freshness of the local databases. Both the surface web and the deep web therefore need an incremental crawl alongside the normal crawl architecture. Crawling the deep web requires selecting an appropriate set of queries so that they cover almost all the records in the data source while keeping record overlap low, which reduces network utilization. Since an incremental crawl increases network utilization with every increment, a reduced query set of this kind should be used to keep that utilization to a minimum. Our contributions in this work are: the design of a probabilistic-approach-based incremental crawler that handles the dynamic changes of surface web pages; an adaptation of this method to handle the dynamic changes in deep web databases; a new evaluation measure, the 'Crawl-hit rate', which evaluates the incremental crawler by how often a crawl is actually necessary at the predicted time; and a semantic weighted set-covering algorithm that reduces the query set so that the network cost of every crawl increment falls without any compromise in the number of records retrieved. Evaluation of the incremental crawler shows a good improvement in database freshness and a good Crawl-hit rate (83% for web pages and 81% for deep web databases) with lower overhead than the baseline.
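The semantic weighting itself is not reproduced here; a minimal sketch of the greedy weighted set-cover step underlying query reduction, which picks a small query set that still covers the records (the record IDs and cost weights are illustrative):

```python
def greedy_query_cover(queries: dict, weights: dict) -> list:
    """Greedy weighted set cover: repeatedly pick the query with the best
    (newly covered records) / weight ratio until all records are covered."""
    uncovered = set().union(*queries.values())
    chosen = []
    while uncovered:
        best = max(queries, key=lambda q: len(queries[q] & uncovered) / weights[q])
        if not queries[best] & uncovered:
            break                          # remaining records unreachable
        chosen.append(best)
        uncovered -= queries[best]
    return chosen

# Each query maps to the record IDs its result page would return
# (illustrative data); weights model per-query network cost.
queries = {"laptop": {1, 2, 3}, "phone": {3, 4}, "tablet": {4, 5}}
weights = {"laptop": 1.0, "phone": 1.0, "tablet": 2.0}
print(greedy_query_cover(queries, weights))
```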

12.
This paper introduces the basic concepts of vertical search engines and web crawlers as well as the architecture of the Heritrix system, analyzes Heritrix's workflow, and implements multithreaded fetching of information from the NetEase mobile-phone channel by extending Heritrix, providing an information source for a vertical search engine oriented toward mobile-phone information.

13.
This work addresses issues related to the design and implementation of focused crawlers. Several variants of state-of-the-art crawlers relying on web page content and link information for estimating the relevance of web pages to a given topic are proposed. Particular emphasis is given to crawlers capable of learning not only the content of relevant pages (as classic crawlers do) but also the paths leading to relevant pages. A novel learning crawler inspired by a previously proposed Hidden Markov Model (HMM) crawler is described as well. The crawlers have been implemented using the same baseline implementation (only the priority assignment function differs in each crawler), providing an unbiased framework for a comparative analysis of their performance. All crawlers achieve their maximum performance when a combination of web page content and (link) anchor text is used for assigning download priorities to web pages. Furthermore, the new HMM crawler improves the performance of the original HMM crawler and also outperforms classic focused crawlers in searching for specialized topics.

14.
RL_Spider: an autonomous web crawler for vertical search engines   Cited by: 1 (self: 0, others: 1)
Building on an analysis of related spider techniques, this paper proposes applying reinforcement learning to build a controllable web crawler for vertical search engines. The method uses reinforcement learning to acquire control-experience information, predicts distant rewards from this information, and searches along a given topic so as to maximize the cumulative return. The fetched pages are stored and indexed, and users obtain the best search results through the search engine's query interface. Topic-crawling experiments on multiple websites show that the method considerably improves both recall and precision.
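The abstract does not detail the reinforcement-learning formulation; a minimal sketch assuming a Q-learning update where states are hosts, actions are links to follow, and the reward is the fetched page's topic relevance (all names and constants are illustrative):

```python
from collections import defaultdict

Q = defaultdict(float)           # Q[(state, action)] -> expected return
ALPHA, GAMMA = 0.5, 0.8          # learning rate and discount factor

def update(state: str, action: str, reward: float,
           next_state: str, next_actions: list) -> None:
    """One-step Q-learning update toward reward + discounted future value."""
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

update("news.example.com", "follow:/storm", 1.0,
       "weather.example.com", ["follow:/alerts"])
print(Q[("news.example.com", "follow:/storm")])   # 0.5 after one update
```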

15.
Research and implementation of a topic crawler based on a probabilistic model   Cited by: 1 (self: 1, others: 0)
Building on several existing topic crawlers, this paper proposes a topic crawler based on a probabilistic model. It analyzes multiple kinds of feature information obtained during crawling and uses the probabilistic model to compute a priority value for each URL, by which URLs are filtered and ranked. The probabilistic-model crawler overcomes the single-strategy limitation of most crawlers: unlike previous topic crawlers, it uses not only a topic-relevance metric but also a history metric and a page-quality metric, which largely resolves the "topic drift" and "tunneling" problems while guaranteeing resource quality. Multiple experiments verify its superiority in topic-page recall and average topic relevance.
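The paper's probability computation is not given here; a minimal sketch assuming a linear combination of the three metrics the abstract names, topic relevance, history, and page quality (the linear form and the weights are assumptions):

```python
def url_priority(topic_rel: float, history: float, quality: float,
                 w=(0.5, 0.3, 0.2)) -> float:
    """Combine the three metrics into one priority value per URL."""
    return w[0] * topic_rel + w[1] * history + w[2] * quality

# URLs below a cutoff are filtered; the rest are crawled in priority order.
candidates = {"u1": (0.9, 0.4, 0.7), "u2": (0.2, 0.9, 0.9)}
ranked = sorted(candidates, key=lambda u: url_priority(*candidates[u]), reverse=True)
print(ranked)   # ['u1', 'u2'] with these weights
```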

16.
Web crawlers are complex applications that explore the Web for different purposes. They can be configured to crawl online social networks (OSNs) to obtain relevant data about their global structure. Before a web crawler can be launched, a large number of settings has to be configured. These settings define the crawler's behavior and have a big impact on the collected data: both the amount of data collected and the quality of the information it contains are affected by the crawler settings, so by properly configuring them we can target specific goals for our crawl. In this paper, we review the configuration choices that an attacker who wants to obtain information from an OSN by crawling it has to make to conduct the attack. We analyze different scheduler algorithms for web crawlers and evaluate their performance in terms of how useful they are for pursuing a set of different adversary goals.

17.
Web crawlers are essential to many Web applications, such as Web search engines, Web archives, and Web directories, which maintain Web pages in their local repositories. In this paper, we study the problem of crawl scheduling that biases crawl ordering toward important pages. We propose a set of crawling algorithms for effective and efficient crawl ordering by prioritizing important pages with the well-known PageRank as the importance metric. In order to score URLs, the proposed algorithms utilize various features, including partial link structure, inter-host links, page titles, and topic relevance. We conduct a large-scale experiment using publicly available data sets to examine the effect of each feature on crawl ordering and evaluate the performance of many algorithms. The experimental results verify the efficacy of our schemes. In particular, compared with the representative RankMass crawler, the FPR-title-host algorithm reduces computational overhead by a factor as great as three in running time while improving effectiveness by 5% in cumulative PageRank.

18.
A new topic-oriented crawling algorithm   Cited by: 1 (self: 0, others: 1)
Although general-purpose web crawlers provide great convenience, their broad coverage lacks domain specificity and falls short in accuracy and speed; topic-oriented crawlers can make up for these shortcomings. This paper studies two problems of topic-oriented crawlers: how to define the topic adequately, and how to rank the links in the crawler's queue of pending downloads effectively, so that many relevant page links are obtained while only a few irrelevant pages are visited. Drawing on the semi-structured information features of web pages, a new content-based crawling strategy is proposed, and experimental results show it to be an effective way to find topic-relevant pages.

19.
The Web comprises voluminous rich learning content. The volume of ever-growing learning resources, however, leads to the problem of information overload. A large number of irrelevant search results generated by search engines based on keyword-matching techniques further augments the problem. A learner in such a scenario needs semantically matched learning resources as search results. Keeping in view the volume of content and the significance of semantic knowledge, our paper proposes a multi-threaded semantic focused crawler (SFC) specially designed and implemented to crawl the WWW for educational learning content. The proposed SFC utilizes domain ontology to expand a topic term and a set of seed URLs to initiate the crawl. The results obtained by multiple iterations of the crawl on various topics are shown and compared with the results obtained by executing an open-source crawler on a similar dataset. The results are evaluated using semantic similarity, a vector space model based metric, and the harvest ratio.
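A minimal sketch of the two evaluation measures the abstract names, vector-space cosine similarity over term-weight vectors and the harvest ratio (the term weights and page counts are illustrative):

```python
import math

def cosine_similarity(a: dict, b: dict) -> float:
    """Vector-space-model similarity between two term-weight vectors."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def harvest_ratio(relevant_pages: int, fetched_pages: int) -> float:
    """Standard focused-crawling metric: relevant pages per page fetched."""
    return relevant_pages / fetched_pages if fetched_pages else 0.0

page = {"ontology": 0.8, "learning": 0.5}
topic = {"learning": 1.0, "course": 0.6}
print(cosine_similarity(page, topic), harvest_ratio(81, 100))
```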
