Similar Documents
20 similar documents found.
1.
Deep Web information is obtained by submitting query terms through a web page's search interface. General-purpose search engines crawl pages by following hyperlinks and therefore cannot index deep Web data. To address this problem, this paper introduces a deep Web crawler based on optimal queries: it generates optimal queries from clustered web pages, submits them automatically, and finally indexes the query results. Experiments show that the system crawls deep Web data across multiple domains automatically and efficiently.
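The abstract does not spell out how the optimal queries are generated; as a minimal sketch of the general idea (choosing the next form query from terms in already-acquired pages), the following Python picks the unused term with the highest document frequency. The form URL and the query field name "q" are hypothetical.

```python
from collections import Counter
import requests

def next_query(acquired_docs, used_terms):
    """Pick the unused term with the highest document frequency."""
    df = Counter()
    for doc in acquired_docs:
        df.update(set(doc.lower().split()))
    for term, _ in df.most_common():
        if term not in used_terms:
            return term
    return None

def crawl(form_url, max_queries=50):
    """Repeatedly submit queries to the site's search form (field name assumed)."""
    docs, used = [], set()
    query = "data"                    # seed query
    for _ in range(max_queries):
        used.add(query)
        resp = requests.get(form_url, params={"q": query}, timeout=10)
        docs.append(resp.text)
        query = next_query(docs, used)
        if query is None:
            break
    return docs
```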

2.
By analyzing the component structure of the open-source Heritrix crawler and the problems in the Heritrix project, this work designs dedicated crawling logic and classes for the directed fetching of pages containing particular content, and introduces the BKDRHash algorithm for URL hashing, achieving topic-oriented web page search, more efficient data collection, and multi-threaded page fetching. Finally, pages on a given topic are analyzed and their content crawled; the HTMLParser tool converts the crawled page data into a specific format, providing a data source for topic-oriented search systems and data mining and laying the groundwork for further research.
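BKDRHash itself is a simple multiplicative string hash (the seed is conventionally 131); a Python sketch of using it to deduplicate URLs before queuing might look like this:

```python
def bkdr_hash(url: str, seed: int = 131) -> int:
    """BKDR string hash: h = h * seed + ord(ch), truncated to 31 bits."""
    h = 0
    for ch in url:
        h = (h * seed + ord(ch)) & 0xFFFFFFFF
    return h & 0x7FFFFFFF

seen = set()

def should_fetch(url: str) -> bool:
    """Skip URLs whose hash has already been queued by the crawler."""
    h = bkdr_hash(url)
    if h in seen:
        return False
    seen.add(h)
    return True
```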

3.
A Genetic-Algorithm-Based Strategy for Topic-Specific Information Search   Cited by: 4 (self: 0, others: 4)
Topic-specific retrieval restricts information retrieval to a particular subject domain and provides retrieval services for information within that domain; it is one direction in which next-generation search engines are developing. Its key technology is the search for topic-relevant information. This paper proposes a genetic-algorithm-based search strategy for topic-specific information that raises the chance of reaching pages linked from pages whose content similarity is low, widening the search range for relevant pages. It also uses the hints carried by hyperlink metadata to predict the topical relevance of linked pages, speeding up the search. Comparative search experiments confirm the algorithm's good performance.
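The paper's exact genetic operators are not given here; the toy sketch below shows the general shape of such a strategy: a population of candidate page feature vectors evolved by selection and one-point crossover, with mutation injecting vectors derived from hyperlink metadata. The relevance fitness function is a placeholder.

```python
import random

def evolve(population, relevance, generations=20, mutation_pool=None):
    """Toy genetic search over candidate page feature vectors.

    population    -- list of equal-length feature vectors (lists of floats)
    relevance     -- fitness function mapping a vector to a topical score
    mutation_pool -- vectors derived from hyperlink metadata, used as mutations
    """
    size = len(population)
    for _ in range(generations):
        parents = sorted(population, key=relevance, reverse=True)[: size // 2]
        children = []
        while len(children) < size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))          # one-point crossover
            child = a[:cut] + b[cut:]
            if mutation_pool and random.random() < 0.1:
                child = random.choice(mutation_pool)   # metadata-guided mutation
            children.append(child)
        population = parents + children
    return max(population, key=relevance)
```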

4.
Databases deepen the Web   Cited by: 2 (self: 0, others: 2)
Ghanem, T.M.; Aref, W.G. Computer, 2004, 37(1):116-117
The Web has become the preferred medium for many database applications, such as e-commerce and digital libraries. These applications store information in huge databases that users access, query, and update through the Web. Database-driven Web sites have their own interfaces and access forms for creating HTML pages on the fly. Web database technologies define the way that these forms can connect to and retrieve data from database servers. The number of database-driven Web sites is increasing exponentially, and each site creates pages dynamically; such pages are hard for traditional search engines to reach. Those search engines crawl and index static HTML pages; they do not send queries to Web databases. The information hidden inside Web databases is called the "deep Web" in contrast to the "surface Web" that traditional search engines access easily. We expect deep Web search engines and technologies to improve rapidly and to dramatically affect how the Web is used by providing easy access to many more information resources.

5.
Computer Networks, 1999, 31(11-16):1623-1640
The rapid growth of the World-Wide Web poses unprecedented scaling challenges for general-purpose crawlers and search engines. In this paper we describe a new hypertext resource discovery system called a Focused Crawler. The goal of a focused crawler is to selectively seek out pages that are relevant to a pre-defined set of topics. The topics are specified not using keywords, but using exemplary documents. Rather than collecting and indexing all accessible Web documents to be able to answer all possible ad-hoc queries, a focused crawler analyzes its crawl boundary to find the links that are likely to be most relevant for the crawl, and avoids irrelevant regions of the Web. This leads to significant savings in hardware and network resources, and helps keep the crawl more up-to-date.

To achieve such goal-directed crawling, we designed two hypertext mining programs that guide our crawler: a classifier that evaluates the relevance of a hypertext document with respect to the focus topics, and a distiller that identifies hypertext nodes that are great access points to many relevant pages within a few links. We report on extensive focused-crawling experiments using several topics at different levels of specificity. Focused crawling acquires relevant pages steadily while standard crawling quickly loses its way, even though both are started from the same root set. Focused crawling is robust against large perturbations in the starting set of URLs, discovering largely overlapping sets of resources in spite of these perturbations. It is also capable of exploring and discovering valuable resources that are dozens of links away from the start set, while carefully pruning the millions of pages that may lie within this same radius. Our anecdotes suggest that focused crawling is very effective for building high-quality collections of Web documents on specific topics, using modest desktop hardware.
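A focused crawler of this kind is essentially a best-first search over the link graph. The sketch below is a simplification under assumed interfaces: fetch, extract_links, and the relevance classifier are placeholders for the paper's components.

```python
import heapq

def focused_crawl(seeds, fetch, extract_links, relevance,
                  budget=1000, threshold=0.5):
    """Best-first crawl: expand the frontier in order of predicted relevance."""
    frontier = [(-1.0, url) for url in seeds]   # max-heap via negated scores
    heapq.heapify(frontier)
    visited, collected = set(), []
    while frontier and len(collected) < budget:
        _, url = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        page = fetch(url)
        score = relevance(page)
        if score >= threshold:                  # keep relevant pages only
            collected.append((url, score))
            for link in extract_links(page):    # expand only from relevant pages
                if link not in visited:
                    heapq.heappush(frontier, (-score, link))
    return collected
```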

6.
To avoid returning irrelevant web pages in search engine results, technologies that match user queries to web pages have been widely developed. In this study, web pages in search engine results are classified as low-adjacence (each web page includes all query keywords) or high-adjacence (each web page includes some of the query keywords) sets. To match user queries with web pages using formal concept analysis (FCA), a concept lattice of the low-adjacence set is defined and the non-redundancy association rules defined by Zaki are extended to this lattice. OR- and AND-RULEs between non-query and query keywords are proposed, along with an algorithm for mining these rules on the concept lattice. The time complexity of the algorithm is polynomial. An example illustrates the basic steps of the algorithm, and experimental and real-application results demonstrate that it is effective.
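At the core of FCA is the closure operator, which maps a set of keywords to all keywords shared by every page containing them. A minimal Python sketch over a toy document-keyword context:

```python
def closure(terms, docs):
    """FCA closure: all keywords shared by every page containing `terms`.

    docs -- mapping page_id -> set of keywords found on that page
    """
    extent = [d for d, kw in docs.items() if terms <= kw]   # pages with all terms
    if not extent:
        return set()
    return set.intersection(*(docs[d] for d in extent))     # shared keywords

# Toy context: result pages and the keywords they contain.
docs = {
    1: {"crawler", "deep", "web"},
    2: {"crawler", "web", "index"},
    3: {"deep", "web", "index"},
}
print(closure({"crawler"}, docs))   # -> {'crawler', 'web'}
```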

7.
Research on Distributed Parallel Incremental Crawling Based on Hadoop   Cited by: 1 (self: 0, others: 1)
Facing the explosive growth of online video in multimedia social networks, a single-machine crawler extracts new video pages inefficiently. This paper therefore proposes a Map/Reduce-based parallel algorithm that greatly improves crawler efficiency. To further reduce data redundancy and avoid updating stale pages, it refines a precision-aware incremental update algorithm: monitoring technology tracks page changes, page update patterns are analyzed, freshness estimation and dimensionality reduction are added, and a mixed-integer quadratic programming method computes an optimal refresh policy for the pages that changed. Experiments show that, compared with the periodic, frequent refresh policy of the single-machine mode, the parallel incremental method attains 79% information precision at 36.7% of the original refresh cost, and crawler efficiency improves by a factor of 167.
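The mixed-integer quadratic programming refresh policy is beyond a short example, but the map/reduce split for detecting changed pages can be sketched with Python's multiprocessing; the fetcher and the MD5-based change test below are illustrative assumptions:

```python
import hashlib
import urllib.request
from multiprocessing import Pool

def fetch(url: str) -> bytes:
    """Placeholder fetcher; a real crawler would add retries and politeness."""
    with urllib.request.urlopen(url) as r:
        return r.read()

def map_check(job):
    """Map step: refetch one page and report whether its content changed."""
    url, old_digest = job
    digest = hashlib.md5(fetch(url)).hexdigest()
    return url, digest != old_digest

def changed_pages(known):
    """Reduce step: gather the URLs whose content hash changed.

    known -- list of (url, old_digest) pairs from the previous crawl round
    """
    with Pool(8) as pool:
        return [u for u, changed in pool.map(map_check, known) if changed]
```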

8.
We present a new next-generation domain search engine called MedicoPort. MedicoPort is a medical search engine designed for users with no medical expertise. It is enhanced with domain knowledge obtained from the Unified Medical Language System (UMLS) to increase the effectiveness of searches. The power of the system is based on its ability to understand the semantics of web pages and user queries. MedicoPort transforms a keyword search into a conceptual search. Through our system we present a topical web crawling technique and indexing techniques empowered by semantic information. MedicoPort aims to generate maximum output with semantic value using minimum input from the user. Since MedicoPort is designed to help people seeking health information on the web, our target users are not medical specialists, who can effectively use the special jargon of medicine and access medical databases. Medical experts have the advantage of shrinking the answer set by expressing several terms using medical terminology; MedicoPort provides the same advantage to its users through automated use of medical domain knowledge in the background. The results of our experiments indicate that expanding queries with domain knowledge, such as synonyms and partially or contextually relevant terms from UMLS, dramatically increases the relevance of the answer set produced by MedicoPort and the number of retrieved web pages relevant to the user request.
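Access to UMLS requires a licensed API, so the sketch below substitutes a toy synonym table to illustrate the query-expansion step that turns a keyword search into a conceptual one:

```python
# Toy synonym table standing in for UMLS concept lookups.
SYNONYMS = {
    "heart attack": ["myocardial infarction", "MI"],
    "flu": ["influenza"],
}

def expand_query(query: str) -> str:
    """Turn a keyword query into a conceptual OR-query using synonyms."""
    terms = [query] + SYNONYMS.get(query.lower(), [])
    return " OR ".join(f'"{t}"' for t in terms)

print(expand_query("heart attack"))
# "heart attack" OR "myocardial infarction" OR "MI"
```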

9.
With the rapid growth of online information and the wide use of deep web structures, users place ever higher demands on search engines for broad coverage and high retrieval efficiency. Accordingly, this paper proposes a two-stage search engine design. In the first stage, a web crawler collects relevant online information and builds a term corpus; in the second stage, the corpus is searched with the TF-IDF algorithm to find the entries closest to the query. The engine uses the Flask framework to build a local web interface, providing a clean display, fast data transfer, and easy maintenance. Experimental results show that the corpus built by the crawler has broad coverage and that the TF-IDF algorithm is fast and matches with high precision.
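A minimal version of the second stage, assuming scikit-learn for TF-IDF and a stand-in corpus, might look like the following Flask sketch:

```python
from flask import Flask, request, jsonify
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = ["web crawler design", "tf-idf ranking", "flask web framework"]  # stand-in
vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(corpus)

app = Flask(__name__)

@app.route("/search")
def search():
    """Return the corpus entries closest to the query by cosine similarity."""
    q = vectorizer.transform([request.args.get("q", "")])
    scores = cosine_similarity(q, doc_matrix).ravel()
    best = scores.argsort()[::-1][:3]
    return jsonify([{"doc": corpus[i], "score": float(scores[i])} for i in best])

# Run with: flask --app <this module> run, then GET /search?q=crawler
```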

10.
11.
A neural network-based intelligent metasearch engine   Cited by: 12 (self: 0, others: 12)
Determining the relevancy of web pages to a query term is basic to the working of any search engine. In this paper we present a neural network-based algorithm to classify the relevancy of search results on a metasearch engine. The fast-learning neural network technology we use enables the metasearch engine to handle a query term in a reasonably short time and return search results with high accuracy.
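The paper's fast-learning network is not specified here; as a schematic stand-in, the sketch below trains a small scikit-learn MLP on toy per-result features to classify merged metasearch results:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy features per result: [title match, snippet match, source-engine rank].
X = np.array([[1.0, 0.8, 0.1], [0.0, 0.1, 0.9],
              [0.9, 0.7, 0.2], [0.1, 0.0, 0.8]])
y = np.array([1, 0, 1, 0])                    # 1 = relevant to the query term

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict([[0.8, 0.9, 0.3]]))         # classify a new merged result
```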

12.
To meet users' needs for precise and personalized information access, this paper analyzes the characteristics of Deep Web information and proposes a crawler framework that can search Deep Web information on different topics. For the framework's two difficult problems, discovering Deep Web databases and choosing the Deep Web crawler's crawling strategy, it proposes using a general-purpose search engine to speed up the discovery of Deep Web databases across topics, and using common words to download as much Deep Web information as possible. Experimental results show that the techniques adopted by the framework are feasible.
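In contrast to the adaptive query selection sketched for item 1, the common-word tactic can be as simple as iterating a fixed word-frequency list against each discovered search form. In this hypothetical sketch the form URL, query field name, and word list are all assumptions:

```python
import requests

COMMON_WORDS = ["the", "of", "and", "a", "in"]   # stand-in frequency list

def drain_deep_web(form_url, field="q"):
    """Submit common words to a discovered search form to maximize coverage."""
    pages = {}
    for word in COMMON_WORDS:
        resp = requests.get(form_url, params={field: word}, timeout=10)
        pages[word] = resp.text
    return pages
```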

13.
RL_Spider: An Autonomous Web Crawler for Vertical Search Engines   Cited by: 1 (self: 0, others: 1)
Building on an analysis of related spider technology, this paper proposes applying reinforcement learning to a controllable web crawler for vertical search engines. The method acquires control experience through reinforcement learning and uses this information to predict distant rewards, searching along a given topic so as to maximize the cumulative reward. The retrieved pages are stored and indexed, and users obtain the best results through the search engine's query interface. Topic-focused crawls over several websites show that the method considerably improves both recall and precision.
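A tabular Q-learning rule captures the idea of propagating distant rewards back to earlier link choices; the sketch below is a generic simplification, not RL_Spider's actual formulation:

```python
from collections import defaultdict

Q = defaultdict(float)          # Q[(state, link)]: learned value of following a link
ALPHA, GAMMA = 0.5, 0.8         # learning rate, discount for future rewards

def update(state, link, reward, next_state, next_links):
    """One Q-learning step: propagate distant rewards back to earlier links."""
    best_next = max((Q[(next_state, l)] for l in next_links), default=0.0)
    Q[(state, link)] += ALPHA * (reward + GAMMA * best_next - Q[(state, link)])

def pick_link(state, links):
    """Greedy policy: follow the link with the highest learned value."""
    return max(links, key=lambda l: Q[(state, l)])
```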

14.
Hundreds of millions of users each day submit queries to Web search engines. User queries are typically very short, which makes query understanding a challenging problem. In this paper, we propose a novel approach to query representation and classification. By submitting the query to a web search engine, the query can be represented as a set of terms found on the web pages returned by the search engine. In this way, each query can be considered a point in a high-dimensional space, and standard classification algorithms such as regression can be applied. However, traditional regression is too flexible in situations with large numbers of highly correlated predictor variables and may suffer from overfitting. By using search click information, the semantic relationship between queries can be incorporated into the learning system as a regularizer. Specifically, from all the functions that minimize the empirical loss on the labeled queries, we select the one that best preserves the semantic relationship between queries. We present experimental evidence suggesting that the regularized regression algorithm is able to use search click information effectively for query classification.
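The click-graph regularizer is the paper's contribution and is not reproduced here; the sketch below shows only the enrichment-plus-classification skeleton, with a plain L2 penalty standing in for the semantic regularizer and a placeholder search function:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def enrich(query, search):
    """Represent a short query by the text of its top search results."""
    return " ".join(search(query))

# Toy labeled queries and a stand-in search function returning snippets.
queries = ["jaguar price", "jaguar habitat", "python tutorial", "python snake"]
labels = [0, 1, 0, 1]                       # hypothetical class labels
search = lambda q: [q + " example result snippet"]

vec = TfidfVectorizer()
X = vec.fit_transform([enrich(q, search) for q in queries])
clf = LogisticRegression()                  # plain L2 penalty; the paper instead
clf.fit(X, labels)                          # regularizes with click-graph structure
print(clf.predict(vec.transform([enrich("jaguar cost", search)])))
```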

15.
Research on Speech Recognition for Resource-Scarce Mongolian   Cited by: 1 (self: 1, others: 0)
张爱英, 倪崇嘉. 《计算机科学》, 2017, 44(10):318-322
With the development of speech recognition technology, research on recognition systems for resource-scarce languages has attracted wider attention. Taking Mongolian as the target language, this paper studies how to use information from other languages to improve recognition when resources are scarce (e.g., only 10 hours of annotated speech). Cross-lingual transfer learning based on multilingual deep neural networks, together with features extracted by multilingual deep bottleneck networks, yields more discriminative acoustic models. Large amounts of web page data obtained via search engines and targeted crawling help acquire text data that strengthens the language model. Fusing multiple recognition results further improves accuracy: compared with the baseline system, system fusion reduces the absolute error rate by 12%.
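A multilingual bottleneck network has shared hidden layers, a narrow bottleneck whose activations serve as features for the low-resource language, and one output head per source language. A schematic PyTorch sketch, with layer sizes as assumptions:

```python
import torch
import torch.nn as nn

class MultilingualBottleneck(nn.Module):
    """Shared layers + narrow bottleneck; one softmax head per source language."""
    def __init__(self, feat_dim=40, hidden=1024, bottleneck=39,
                 senones=(3000, 2500)):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Sigmoid(),
            nn.Linear(hidden, hidden), nn.Sigmoid(),
            nn.Linear(hidden, bottleneck),            # bottleneck layer
        )
        self.heads = nn.ModuleList(nn.Linear(bottleneck, n) for n in senones)

    def forward(self, x, lang):
        z = self.shared(x)             # bottleneck features, reusable for Mongolian
        return self.heads[lang](z)     # language-specific senone posteriors

feats = torch.randn(8, 40)             # a batch of acoustic feature frames
net = MultilingualBottleneck()
bn_features = net.shared(feats)         # extract features for the target language
```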

16.
The sheer number of web pages and their rapid growth make it hard for general-purpose search engines to return satisfactory results for topic- or domain-oriented queries. The topic crawler studied in this paper focuses on collecting topic-relevant information, greatly reducing the number of pages to process: it evaluates each page's topical relevance and preferentially crawls pages with higher relevance. Using a subspace-based semantic analysis technique combined with Bayesian and support vector machine classifiers, we design and implement an efficient topic crawler. Experiments show that the algorithm is both accurate and efficient.
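Truncated SVD (latent semantic analysis) is one common way to realize a semantic subspace; the sketch below, under that assumption, chains it with an SVM to score topical relevance:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

# Toy training pages labeled relevant (1) / irrelevant (0) to the topic.
pages = ["deep web crawler design", "focused crawling topics",
         "cooking pasta recipes", "holiday travel tips"]
labels = [1, 1, 0, 0]

model = make_pipeline(
    TfidfVectorizer(),
    TruncatedSVD(n_components=2),   # project pages into a semantic subspace
    SVC(),                          # classify topical relevance in that subspace
)
model.fit(pages, labels)
print(model.predict(["vertical search crawler"]))   # relevance of a new page
```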

17.
Web crawlers are a joint product of today's real-time web updates and search engine technology. This paper explores how to apply web crawler technology to real-time data updating and search and, based on an in-depth analysis of crawler technology, presents a solution that uses a web crawler to implement network communication between servers and clients within a local area network.

18.
The enormous amount of information available on the Internet requires the use of search engines in order to find specific information. As far as web accessibility is concerned, search engines contain two kinds of barriers: on the one hand, the interfaces for making queries and accessing results are not always accessible; on the other hand, web accessibility is not taken into account in information retrieval (IR) processes. Consequently, in addition to interface problems, accessing the items in the list of results tends to be an unsatisfactory experience for people with disabilities. Some groups of users cannot take advantage of the services provided by search engines, as the results are not useful due to their accessibility restrictions. The goal of this paper is to propose the integration of web accessibility measurement into information retrieval processes. Firstly, quantitative accessibility metrics are defined in order to accurately measure the accessibility level of web pages. Secondly, a model to integrate these metrics within IR processes is proposed. Finally, a prototype search engine which re-ranks results according to their accessibility level based on the proposed model is described.
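The proposed integration can be summarized as blending the engine's relevance score with a quantitative accessibility score at ranking time; a sketch with a stub metric:

```python
def rerank(results, accessibility, weight=0.3):
    """Re-rank search results by blending relevance with accessibility.

    results       -- list of (url, relevance) from the underlying engine
    accessibility -- function url -> score in [0, 1] (the paper's metrics)
    weight        -- how strongly accessibility influences the final order
    """
    blended = [(url, (1 - weight) * rel + weight * accessibility(url))
               for url, rel in results]
    return sorted(blended, key=lambda x: x[1], reverse=True)

# Example with a stub metric that rewards pages passing more WCAG checks.
checks = {"http://a.example": 0.9, "http://b.example": 0.4}
print(rerank([("http://a.example", 0.6), ("http://b.example", 0.8)],
             accessibility=lambda u: checks[u]))
```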

19.
Deep web or hidden web refers to the hidden part of the Web (usually residing in structured databases) that remains unavailable to standard Web crawlers. Obtaining deep web content is challenging and has been acknowledged as a significant gap in the coverage of search engines. The paper proposes a novel deep web crawling framework based on reinforcement learning, in which the crawler is regarded as an agent and the deep web database as the environment. The agent perceives its current state and selects an action (query) to submit to the environment (the deep web database) according to Q-value. While existing methods rely on the assumption that all deep web databases possess full-text search interfaces and solely utilize statistics (TF or DF) of acquired data records to generate the next query, the reinforcement learning framework not only enables crawlers to learn a promising crawling strategy from their own experience, but also allows diverse features of query keywords to be utilized. Experimental results show that the method outperforms state-of-the-art methods in terms of crawling capability and relaxes the full-text search assumption implied by existing methods.
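Seen at its simplest, query selection here is a bandit-style loop: issue the keyword with the highest learned value and reward it by the number of new records harvested. The sketch below is that simplification, not the paper's full agent formulation:

```python
from collections import defaultdict

Q = defaultdict(float)                 # learned value of issuing each keyword
ALPHA = 0.6                            # learning rate

def step(db_query, candidates, seen):
    """Pick the keyword with the highest Q, issue it, reward new records.

    db_query   -- placeholder: submits a keyword to the deep web form,
                  returning the records it retrieves
    candidates -- set of keywords not yet issued
    seen       -- set of records already harvested
    """
    kw = max(candidates, key=lambda k: Q[k])
    records = db_query(kw)
    reward = len(set(records) - seen)
    seen.update(records)
    Q[kw] += ALPHA * (reward - Q[kw])  # incremental value update
    candidates.discard(kw)
    return reward
```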

20.
Web search is increasingly entity-centric; as a large fraction of common queries target specific entities, search results get progressively augmented with semi-structured and multimedia information about those entities. However, search over personal web browsing history still mostly revolves around keyword search. In this paper, we present a novel approach to answering queries over web browsing logs that takes into account entities appearing in the web pages, user activities, and temporal information. Our system, B-hist, aims at providing web users with an effective tool for searching and accessing information they previously looked up on the web by supporting multiple ways of filtering results using clustering and entity-centric search. In the following, we present our system and motivate our User Interface (UI) design choices by detailing the results of a survey on web browsing and history search. In addition, we present an empirical evaluation of our entity-based approach used to cluster web pages.
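Entity-centric clustering of history pages can be sketched as greedy grouping by entity overlap; a real system would plug in an entity extractor where the toy page-to-entities mapping appears:

```python
def jaccard(a, b):
    """Set overlap in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_by_entities(pages, threshold=0.3):
    """Greedy clustering of history pages by shared entities.

    pages -- mapping url -> set of entities detected on that page
    """
    clusters = []
    for url, ents in pages.items():
        for cluster in clusters:
            if jaccard(ents, cluster["entities"]) >= threshold:
                cluster["urls"].append(url)
                cluster["entities"] |= ents
                break
        else:
            clusters.append({"urls": [url], "entities": set(ents)})
    return clusters
```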
