Similar Documents
 20 similar documents found (search time: 443 ms)
1.
Building on the original classifier-based focused crawler, an adaptive focused crawler with online incremental learning is designed and implemented. The crawler consists of a base web-page classifier and an adaptive link classifier trained by online incremental learning. The base page classifier uses domain knowledge to classify the topical relevance of the content of fetched pages. The adaptive link classifier adjusts its classification model on the fly from the pages and link information gathered during the crawl, so that link-topic relevance is computed more reasonably. The link-ranking module uses the TopicalRank relevance measure to decide the priority order in which links are fetched. The incremental-learning adaptive focused crawler was applied to the agricultural domain; experimental results and analysis show that it crawls this domain better than the original classifier-based focused crawler, which relies only on page relevance and link importance.
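The abstract gives no code, so the following is only a minimal Python sketch of the general idea of an adaptive link scorer whose term weights are updated incrementally from relevance feedback on crawled pages. TopicalRank itself is not reproduced; the scoring rule, update rule, and all names below are illustrative assumptions.

```python
import heapq
from collections import Counter

class AdaptiveLinkScorer:
    """Toy adaptive link scorer: per-term weights are updated incrementally
    from each crawled page's relevance judgement."""

    def __init__(self, topic_terms, lr=0.1):
        # Start with uniform weights for the seed topic terms.
        self.weights = {t: 1.0 for t in topic_terms}
        self.lr = lr  # learning rate for the incremental updates

    def score(self, anchor_text):
        # Link priority = sum of weights of topic terms found in the anchor text.
        terms = anchor_text.lower().split()
        return sum(self.weights.get(t, 0.0) for t in terms)

    def update(self, page_terms, relevant):
        # Online update: boost terms seen on relevant pages, damp the others.
        delta = self.lr if relevant else -self.lr
        for term, freq in Counter(page_terms).items():
            if term in self.weights:
                self.weights[term] = max(0.0, self.weights[term] + delta * freq)

# Frontier ordered by (negative) link score, so the most promising link is popped first.
scorer = AdaptiveLinkScorer(["crop", "soil", "irrigation", "harvest"])
frontier = []
heapq.heappush(frontier, (-scorer.score("soil irrigation methods"), "http://example.org/a"))
heapq.heappush(frontier, (-scorer.score("sports news"), "http://example.org/b"))

_, url = heapq.heappop(frontier)                               # fetch the best link first
scorer.update(["soil", "irrigation", "crop"], relevant=True)   # feedback from the fetched page
print(url, scorer.weights)
```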

2.
Research on Topical Crawling Based on Genetic Algorithms   Cited by: 3 (0 self-citations, 3 by others)
To address the shortcomings of current topical search strategies, a topical crawling strategy based on genetic algorithms is proposed. It increases the chance that pages linked from pages of low content similarity will still be searched, thereby widening the search range for relevant pages. For page relevance analysis, an ontology-based semantic topic-filtering strategy is introduced. Experimental results show that the genetic-algorithm-based topical crawler retrieves a large number of topic-relevant pages and, when the seed set is chosen properly, fetches many pages with high topical relevance.

3.
Because current topical web crawler search strategies have difficulty finding a globally optimal solution, this paper analyses the genetic algorithm and designs a topical crawler scheme based on it. A PageRank algorithm combined with textual content is introduced; page-topic relevance is computed with the vector space model; page importance is judged from both link structure and topical relevance; genetic factors used during crawling are selected according to page importance; and a fitness function is set up to filter topic-relevant pages. Compared with an ordinary topical crawler, this strategy retrieves a large amount of highly topic-relevant page information, improves the importance of the retrieved pages, satisfies users' retrieval needs for pages on the desired topic, and to some extent alleviates the problem stated above.
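As a rough illustration of the vector-space relevance computation mentioned above, the sketch below computes cosine similarity between a page's term-frequency vector and a topic vector. The terms and weights are invented for the example; the paper's PageRank and genetic components are not reproduced.

```python
import math
from collections import Counter

def cosine_similarity(page_text, topic_vector):
    """Cosine similarity between a page's term-frequency vector and a topic vector."""
    page_vector = Counter(page_text.lower().split())
    dot = sum(page_vector[t] * w for t, w in topic_vector.items())
    page_norm = math.sqrt(sum(v * v for v in page_vector.values()))
    topic_norm = math.sqrt(sum(w * w for w in topic_vector.values()))
    if page_norm == 0 or topic_norm == 0:
        return 0.0
    return dot / (page_norm * topic_norm)

# Hypothetical topic vector: term -> weight.
topic = {"crawler": 2.0, "topic": 1.5, "search": 1.0}
page = "a focused crawler selects links by topic relevance during search"
print(round(cosine_similarity(page, topic), 3))
```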

4.
A Topic-Sensitive Crawling Method Based on HITS   Cited by: 2 (0 self-citations, 2 by others)
Topic-based information gathering is an emerging and practical approach in information retrieval: by restricting downloaded pages to a specific topic area, it improves search-engine efficiency and the quality of the information provided. The idea is to collect relevant pages selectively during crawling according to predefined topics and to avoid downloading topic-irrelevant pages, with the goal of finding information useful to the user more accurately. This paper examines several key issues in topical crawling and improves the topic model, the learning method for the link classification model, and the link analysis method in order to raise the topical relevance and quality of downloaded pages. On this basis, a topical crawler system is designed and implemented that uses topic-sensitive HITS to compute page priority. Experiments show good results.
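Since the abstract gives no formulas, the following sketch shows one plausible way to bias the HITS hub/authority iteration by per-page topic relevance. The graph, relevance scores, and weighting scheme are illustrative assumptions, not the paper's exact method.

```python
def topic_sensitive_hits(links, relevance, iterations=20):
    """HITS where each authority/hub contribution is weighted by topic relevance.

    links:     dict page -> list of pages it links to
    relevance: dict page -> topical relevance in [0, 1]
    """
    pages = set(links) | {q for targets in links.values() for q in targets}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # Authority: sum of hub scores of pages linking in, weighted by the target's relevance.
        auth = {p: 0.0 for p in pages}
        for p, targets in links.items():
            for q in targets:
                auth[q] += hub[p] * relevance.get(q, 0.0)
        # Hub: sum of authority scores of pages linked to, weighted by the source's relevance.
        hub = {p: 0.0 for p in pages}
        for p, targets in links.items():
            for q in targets:
                hub[p] += auth[q] * relevance.get(p, 0.0)
        # Normalize to keep the scores from growing without bound.
        for scores in (auth, hub):
            norm = sum(v * v for v in scores.values()) ** 0.5 or 1.0
            for p in scores:
                scores[p] /= norm
    return auth, hub

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
relevance = {"a": 0.9, "b": 0.3, "c": 0.8}
auth, hub = topic_sensitive_hits(links, relevance)
print(max(auth, key=auth.get))   # page with the highest topic-weighted authority
```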

5.
Topical crawlers efficiently fetch pages on a specific topic and are one of the core technologies of vertical search engines. This paper proposes a topical crawler framework based on a domain ontology. An ontology-based relevance measure is used to predict the relevance of link topics and of page content to the target topic, which determines the crawler's next crawling path and keeps the search path as short as possible. Comparative experiments show that the proposed method effectively improves both the precision and the recall of the pages fetched by the topical crawler.

6.
Topic-based information gathering is an emerging and practical approach in information retrieval: by restricting downloaded pages to a specific topic area, it improves search-engine efficiency and the quality of the information provided. The idea is to collect relevant pages selectively during crawling according to predefined topics and to avoid downloading topic-irrelevant pages, with the goal of finding information useful to the user more accurately. This paper examines several key issues in topical crawling and improves the topic model, the learning method for the link classification model, and the link analysis method in order to raise the topical relevance and quality of downloaded pages. On this basis, a topical crawler system is designed and implemented that uses topic-sensitive HITS to compute page priority. Experiments show good results.

7.
Because current topical web crawler search strategies have difficulty finding a globally optimal solution, this paper analyses the genetic algorithm and designs a topical crawler scheme based on it. A PageRank algorithm combined with textual content is introduced; page-topic relevance is computed with the vector space model; page importance is judged from both link structure and topical relevance; genetic factors used during crawling are selected according to page importance; and a fitness function is set up to filter topic-relevant pages. Compared with an ordinary topical crawler, this strategy retrieves a large amount of highly topic-relevant page information, improves the importance of the retrieved pages, satisfies users' retrieval needs for pages on the desired topic, and to some extent alleviates the problem stated above.

8.
A Topical Crawler Based on Simulated Annealing   Cited by: 1 (1 self-citation, 0 by others)
The topical crawler is the foundation and core of a topical search engine, and the quality of the crawling strategy directly affects search results. To find more relevant pages, a simulated annealing mechanism is used to select the next link to visit, so that links of high "comprehensive value" have a chance of being selected early in the search, while a "tunneling" technique widens the search range for relevant pages. When computing a link's value, both the value of the page containing the link and the value of the link's anchor text are considered, and each is assigned a different weight according to how strongly it influences the link's value. Experiments show that the method clearly improves both page coverage and precision.
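A minimal sketch in this spirit is given below: a candidate link with lower combined value may still be accepted with a temperature-dependent probability, which lets the crawler "tunnel" through less relevant pages early on. The weights, cooling schedule, and helper names are assumptions for illustration, not the paper's exact settings.

```python
import math
import random

def link_value(page_relevance, anchor_relevance, w_page=0.6, w_anchor=0.4):
    # Combined link value from the containing page's relevance and the anchor text's relevance.
    return w_page * page_relevance + w_anchor * anchor_relevance

def choose_next_link(candidates, current_value, temperature):
    """Pick a candidate link; worse links are accepted with probability
    exp(-delta / T), which shrinks as the temperature cools."""
    candidate = random.choice(candidates)
    value = link_value(*candidate[1:])
    delta = current_value - value
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        return candidate[0], value
    return None, current_value

candidates = [("http://example.org/a", 0.8, 0.7),   # (url, page relevance, anchor relevance)
              ("http://example.org/b", 0.2, 0.4)]
temperature, current = 1.0, 0.5
for _ in range(5):
    url, current = choose_next_link(candidates, current, temperature)
    if url:
        print("follow", url, round(current, 2))
    temperature *= 0.9   # cooling schedule
```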

9.
Topical web crawlers are an important component of specialized search engines. This paper designs an ontology-based topical crawler framework that uses a domain ontology to describe the crawling topic, applies keyword extraction to determine a page's topic, and proposes a formula for computing page relevance based on the domain ontology. Practice shows that the ontology-based topical crawler greatly improves the precision of page extraction.

10.
This paper describes in detail the key technologies for implementing a topical web crawler. The traditional vector space model is improved into an adaptive vector space model, page relevance is computed from both page content and links, and on this basis a topic-oriented crawler system is designed and implemented. To address the problem of incomplete page capture during topical crawling, an improved search strategy combining manual intervention with genetic factors is also proposed. Experimental results demonstrate the feasibility and advantages of the system.

11.
A Survey of Focused Crawler Technology   Cited by: 51 (1 self-citation, 50 by others)
Zhou Lizhu, Lin Ling. Journal of Computer Applications, 2005, 25(9): 1965-1969
The rapid growth of the Internet poses great challenges for finding and discovering information on the World Wide Web. For the topic- or domain-specific queries that most users issue, traditional general-purpose search engines often fail to return satisfactory result pages. To overcome these shortcomings, topic-oriented focused crawling was proposed, and the focused crawler has since become one of the research hotspots concerning the Web. This paper surveys this line of research: it gives the basic concept of the focused crawler, outlines how it works, and, based on the current state of research, systematically introduces and analyses its key technologies (description of crawl targets, page analysis algorithms, page search strategies, and so on). On this basis, it suggests several future research directions for focused crawlers, including crawler techniques for data analysis and mining, topic description and definition, discovery of relevant resources, Web data cleaning, and expansion of the search space.

12.
The complexity of web information environments and multiple-topic web pages are negative factors significantly affecting the performance of focused crawling. A highly relevant region in a web page may be obscured because of low overall relevance of that page. Segmenting web pages into smaller units significantly improves performance. Conquering and traversing irrelevant pages to reach relevant ones (tunneling) can improve the effectiveness of focused crawling by expanding its reach. This paper presents a heuristic-based method to enhance focused crawling performance. The method uses a Document Object Model (DOM)-based page partition algorithm to segment a web page into content blocks with a hierarchical structure and investigates how to take advantage of block-level evidence to enhance focused crawling by tunneling. Page segmentation can transform an uninteresting multi-topic web page into several single-topic content blocks, some of which may be interesting. Accordingly, the focused crawler can pursue the interesting content blocks to retrieve relevant pages. Experimental results indicate that this approach outperforms the Breadth-First, Best-First and Link-context algorithms in harvest rate, target recall and target length. Copyright © 2007 John Wiley & Sons, Ltd.
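The sketch below illustrates only the block-level idea: given content blocks already extracted from a page (the DOM segmentation step itself is not shown), score each block against topic terms and enqueue links only from blocks above a threshold. The scoring function, threshold, and names are assumptions for illustration.

```python
from collections import Counter

def block_relevance(block_text, topic_terms):
    # Fraction of the block's terms that are topic terms (a crude relevance proxy).
    terms = block_text.lower().split()
    if not terms:
        return 0.0
    counts = Counter(terms)
    return sum(counts[t] for t in topic_terms) / len(terms)

def links_worth_following(blocks, topic_terms, threshold=0.1):
    """blocks: list of (block_text, [urls appearing in that block]).
    Keep links only from blocks whose relevance exceeds the threshold, so a relevant
    block is not drowned out by the rest of a multi-topic page."""
    frontier = []
    for text, urls in blocks:
        score = block_relevance(text, topic_terms)
        if score >= threshold:
            frontier.extend((score, u) for u in urls)
    return sorted(frontier, reverse=True)

blocks = [("latest football transfer rumours", ["http://example.org/sport"]),
          ("focused crawler tunneling and block segmentation", ["http://example.org/ir"])]
print(links_worth_following(blocks, {"crawler", "tunneling", "segmentation"}))
```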

13.
Focused crawlers have as their main goal to crawl Web pages that are relevant to a specific topic or user interest, playing an important role in a great variety of applications. In general, they work by trying to find and crawl all kinds of pages deemed related to an implicitly declared topic. However, users are often not simply interested in any document about a topic; instead they may want only documents of a given type or genre on that topic to be retrieved. In this article, we describe an approach to focused crawling that exploits not only content-related information but also genre information present in Web pages to guide the crawling process. This approach has been designed to address situations in which the specific topic of interest can be expressed by specifying two sets of terms, the first describing genre aspects of the desired pages and the second related to the subject or content of these pages, thus requiring no training or any kind of preprocessing. The effectiveness, efficiency and scalability of the proposed approach are demonstrated by a set of experiments involving the crawling of pages related to syllabi of computer science courses, job offers in the computer science field and sale offers of computer equipment. These experiments show that focused crawlers constructed according to our genre-aware approach achieve F1 levels above 88%, requiring the analysis of no more than 65% of the visited pages in order to find 90% of the relevant pages. In addition, we experimentally analyze the impact of term selection on our approach and evaluate a proposed strategy for semi-automatic generation of such terms. This analysis shows that a small set of terms selected by an expert, or a set of terms specified by a typical user familiar with the topic, is usually enough to produce good results, and that such a semi-automatic strategy is very effective in supporting the task of selecting the sets of terms required to guide a crawling process.
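A minimal sketch of the two-term-set idea: score a page separately against genre terms and content terms, and treat it as relevant only if both scores clear a threshold. The combination rule, threshold, and example terms are assumptions, not the authors' exact formulation.

```python
import re

def term_set_score(text, terms):
    # Fraction of the term set that appears in the page text.
    words = set(re.findall(r"[a-z]+", text.lower()))
    return sum(1 for t in terms if t in words) / len(terms)

def genre_aware_relevant(text, genre_terms, content_terms, threshold=0.5):
    """Require both genre evidence (e.g. syllabus-like vocabulary) and
    content evidence (e.g. the subject area) before calling a page relevant."""
    return (term_set_score(text, genre_terms) >= threshold and
            term_set_score(text, content_terms) >= threshold)

genre = {"syllabus", "schedule", "grading", "prerequisites"}
content = {"algorithms", "complexity", "graphs"}
page = "course syllabus: grading policy, prerequisites, topics include graphs and algorithms"
print(genre_aware_relevant(page, genre, content))
```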

14.
A Survey of Topical Web Crawler Research   Cited by: 3 (0 self-citations, 3 by others)
Web information resources are growing exponentially, and topical web crawlers have emerged to meet users' increasingly personalized needs. A topical web crawler is a program that downloads pages on a specific topic: using information obtained while collecting pages, it fetches only pages related to the topic. Applications such as search engines built on topical crawlers and domain corpora constructed with them are already widely used. This paper first introduces the definition and working principle of the topical crawler, then reviews domestic and international research in recent years, comparing the advantages and disadvantages of various crawling strategies and related algorithms, and finally suggests future research directions for topical web crawlers.

15.
A new adaptive topical crawling strategy incorporating information gain is proposed. A topic vector T is built from Wikipedia's category tree and topic description documents; during crawling the crawler keeps learning automatically, feeding back updates to the weight of every concept in the topic vector space and refining the topic description. Experimental results show that the method supports incremental crawling, clearly outperforms the interest-ratio-based adaptive strategy in total information gathered, and crawls pages that are closer to the topic.
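As a rough illustration of weighting concepts by information gain, the sketch below computes the information gain of a term from counts of relevant and irrelevant pages that do or do not contain it. The counts are invented, and the paper's Wikipedia-based topic-vector construction and feedback loop are not reproduced.

```python
import math

def entropy(pos, neg):
    # Binary entropy of a (relevant, irrelevant) count pair.
    total = pos + neg
    if total == 0:
        return 0.0
    h = 0.0
    for c in (pos, neg):
        if c:
            p = c / total
            h -= p * math.log2(p)
    return h

def information_gain(rel_with, irr_with, rel_without, irr_without):
    """IG(term) = H(class) - H(class | term present/absent)."""
    total = rel_with + irr_with + rel_without + irr_without
    base = entropy(rel_with + rel_without, irr_with + irr_without)
    with_frac = (rel_with + irr_with) / total
    without_frac = (rel_without + irr_without) / total
    cond = (with_frac * entropy(rel_with, irr_with)
            + without_frac * entropy(rel_without, irr_without))
    return base - cond

# Hypothetical counts: crawled pages containing the concept "ontology",
# split by whether the page was judged topic-relevant.
print(round(information_gain(rel_with=40, irr_with=5, rel_without=10, irr_without=45), 3))
```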

16.
To address the tendency of traditional topical crawlers to fall into local optima and their insufficient topic descriptions, a topical crawler method combining an ontology with an improved tabu search strategy (On-ITS) is proposed. Ontology-based semantic similarity is first used to compute the topic semantic vector, page text feature vectors are built with position-based weighting of HTML page text features, and page-topic relevance is then computed with the vector space model. On this basis, anchor-text topic relevance and the PageRank value of the linked page are computed, and link priority is analysed comprehensively. In addition, to keep the crawler from falling into local optima, an ITS-based topical crawler is designed to optimize the crawl queue. On the topics of rainstorm disasters and typhoon disasters, under the same experimental conditions, the crawling precision of the On-ITS topical crawler is between 8% and 58% higher than that of the comparison algorithms, and its other evaluation metrics are also good. The On-ITS topical crawler method effectively improves the accuracy of domain information acquisition and fetches more topic-relevant pages.
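The On-ITS strategy itself is not reproduced here; the sketch below only shows a loose analogue of tabu search's short-term memory applied to frontier selection: recently expanded URLs are placed on a tabu list so the crawler does not keep circling the same neighbourhood. The data structures and tabu-list size are assumptions for illustration.

```python
from collections import deque

def pick_next(frontier, tabu, tabu_size=50):
    """Pick the highest-priority link whose URL is not on the tabu list, then
    record it on the list (dropping the oldest entry once the list is full)."""
    for priority, url in sorted(frontier, reverse=True):
        if url not in tabu:
            frontier.remove((priority, url))
            tabu.append(url)
            if len(tabu) > tabu_size:
                tabu.popleft()
            return url
    return None

frontier = [(0.9, "http://example.org/a"), (0.8, "http://example.org/b")]
tabu = deque(["http://example.org/a"])   # 'a' was expanded recently, so it is skipped
print(pick_next(frontier, tabu))          # -> http://example.org/b
```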

17.
A Topical Crawler Based on a Dynamic Topic Library   Cited by: 1 (0 self-citations, 1 by others)
Based on a study of topical crawlers that filter URLs with different strategies, a topical crawler built on a dynamic topic library is proposed. It can update the topic library in real time while running, which improves the accuracy of URL filtering. Experiments show that the proposed crawler can, in relatively little time and while exploring as little of the Web as possible, fetch a larger number of topic-relevant pages.

18.
Liu Hui, Huang Kuanna, Yu Jianqiao. Computer Engineering, 2012, 38(11): 284-286
The Deep Web contains rich, high-quality information resources, but because there are no static links pointing directly to Deep Web pages, most current search engines cannot discover them; these pages can only be obtained by filling in and submitting query forms. This paper therefore proposes a crawling strategy for a Deep Web crawler. The layered output of a page classifier guides a link extractor in selecting promising links; crawl depth is limited to three levels, links are extracted starting from those closest to the query form, and only links within these three levels are followed, which reduces crawling time and improves crawler accuracy. Constraints for the focused crawling algorithm are also designed. Experimental results show that the strategy downloads Deep Web pages effectively and improves crawling efficiency.
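A minimal sketch of the depth-limited idea described above: breadth-first expansion from the page holding the query form, following only promising links and never going deeper than three levels. The link-extraction and classification steps are stubbed out, and the helper names are hypothetical.

```python
from collections import deque

def crawl_near_form(form_page_url, get_links, is_promising, max_depth=3):
    """Breadth-first crawl starting from the page with the query form,
    following only promising links and never going deeper than max_depth.

    get_links(url) -> list of outgoing URLs (stub supplied by the caller)
    is_promising(url) -> bool (stand-in for the link classifier)
    """
    seen = {form_page_url}
    queue = deque([(form_page_url, 0)])
    collected = []
    while queue:
        url, depth = queue.popleft()
        collected.append(url)
        if depth == max_depth:
            continue   # limit the crawl to three levels from the form page
        for link in get_links(url):
            if link not in seen and is_promising(link):
                seen.add(link)
                queue.append((link, depth + 1))
    return collected

# Tiny in-memory "site" used in place of real fetching.
site = {"form": ["a", "b"], "a": ["c"], "b": [], "c": ["d"], "d": ["e"], "e": []}
pages = crawl_near_form("form", get_links=lambda u: site.get(u, []),
                        is_promising=lambda u: True)
print(pages)   # 'e' lies four hops from the form page, so it is never crawled
```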

19.
Computer Networks, 1999, 31(11-16): 1623-1640
The rapid growth of the World-Wide Web poses unprecedented scaling challenges for general-purpose crawlers and search engines. In this paper we describe a new hypertext resource discovery system called a Focused Crawler. The goal of a focused crawler is to selectively seek out pages that are relevant to a pre-defined set of topics. The topics are specified not using keywords, but using exemplary documents. Rather than collecting and indexing all accessible Web documents to be able to answer all possible ad-hoc queries, a focused crawler analyzes its crawl boundary to find the links that are likely to be most relevant for the crawl, and avoids irrelevant regions of the Web. This leads to significant savings in hardware and network resources, and helps keep the crawl more up-to-date. To achieve such goal-directed crawling, we designed two hypertext mining programs that guide our crawler: a classifier that evaluates the relevance of a hypertext document with respect to the focus topics, and a distiller that identifies hypertext nodes that are great access points to many relevant pages within a few links. We report on extensive focused-crawling experiments using several topics at different levels of specificity. Focused crawling acquires relevant pages steadily while standard crawling quickly loses its way, even though they are started from the same root set. Focused crawling is robust against large perturbations in the starting set of URLs. It discovers largely overlapping sets of resources in spite of these perturbations. It is also capable of exploring out and discovering valuable resources that are dozens of links away from the start set, while carefully pruning the millions of pages that may lie within this same radius. Our anecdotes suggest that focused crawling is very effective for building high-quality collections of Web documents on specific topics, using modest desktop hardware.

20.
A focused crawler is an efficient tool used to traverse the Web to gather documents on a specific topic. It can be used to build domain-specific Web search portals and online personalized search tools. Focused crawlers can only use information obtained from previously crawled pages to estimate the relevance of a newly seen URL. Therefore, good performance depends on powerful modeling of context as well as the quality of the current observations. To address this challenge, we propose capturing sequential patterns along paths leading to targets based on probabilistic models. We model the process of crawling by a walk along an underlying chain of hidden states, defined by hop distance from target pages, from which the actual topics of the documents are observed. When a new document is seen, prediction amounts to estimating the distance of this document from a target. Within this framework, we propose two probabilistic models for focused crawling, Maximum Entropy Markov Model (MEMM) and Linear-chain Conditional Random Field (CRF). With MEMM, we exploit multiple overlapping features, such as anchor text, to represent useful context and form a chain of local classifier models. With CRF, a form of undirected graphical models, we focus on obtaining global optimal solutions along the sequences by taking advantage not only of text content, but also of linkage relations. We conclude with an experimental validation and comparison with focused crawling based on Best-First Search (BFS), Hidden Markov Model (HMM), and Context-graph Search (CGS).
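The MEMM and CRF models proposed in the paper are not reproduced here. The sketch below only illustrates the shared underlying idea of a hidden hop-distance state inferred from observed page topics, using a plain HMM decoded with the Viterbi algorithm (one of the baselines the paper compares against); all probabilities are invented for the example.

```python
import math

def viterbi(observations, states, start_p, trans_p, emit_p):
    """Most likely sequence of hidden states (here: hop distance to a target page)
    for a sequence of observed page topics, under a plain HMM."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][observations[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(observations)):
        V.append({})
        back.append({})
        for s in states:
            best_prev, best_score = max(
                ((p, V[t - 1][p] + math.log(trans_p[p][s])) for p in states),
                key=lambda x: x[1])
            V[t][s] = best_score + math.log(emit_p[s][observations[t]])
            back[t][s] = best_prev
    # Trace back the best path from the most likely final state.
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        last = back[t][last]
        path.insert(0, last)
    return path

# Hypothetical model: hidden state = hops to the nearest target (0, 1, 2);
# observation = whether the crawled page looks on-topic.
states = [0, 1, 2]
start_p = {0: 0.1, 1: 0.3, 2: 0.6}
trans_p = {0: {0: 0.6, 1: 0.3, 2: 0.1},
           1: {0: 0.5, 1: 0.3, 2: 0.2},
           2: {0: 0.1, 1: 0.5, 2: 0.4}}
emit_p = {0: {"on-topic": 0.8, "off-topic": 0.2},
          1: {"on-topic": 0.5, "off-topic": 0.5},
          2: {"on-topic": 0.2, "off-topic": 0.8}}
print(viterbi(["off-topic", "on-topic", "on-topic"], states, start_p, trans_p, emit_p))
```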
