Similar Literature
20 similar articles found.
1.
Chemical databases on the Internet are a valuable chemical information resource, and using their data effectively is the problem the chemical deep Web seeks to solve. This paper summarizes the characteristics of the chemical deep Web and uses XML technology to extract data from the semi-structured HTML pages returned by database searches, turning them into data that programs can call directly for further computation. In the extraction process, JTidy is first used to normalize the HTML into a well-formed XHTML document with its content intact; an XSLT transformation template containing XPath expressions then performs the data conversion and extraction. The quality of these XPath expressions determines whether the XSLT template can keep extracting chemical data over the long term, so the paper focuses on how to write robust XPath expressions: they should locate data in the source tree by content and attribute features, the coupling between expressions should be kept as low as possible, and likely changes to chemical Web sites should be anticipated and handled in the XSLT template so that the expressions stay valid as long as possible. This provides methodological guidance for building XSLT data extraction templates for the chemical deep Web.
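
The following minimal sketch (not the paper's code) illustrates the robust-XPath idea described above, using Python's lxml as a stand-in for JTidy plus XSLT: the HTML returned by a database search is parsed leniently, and the data item is located by the content of the label next to it rather than by a brittle positional path. The page fragment and field label are hypothetical.

```python
# Sketch of content/attribute-anchored XPath extraction; not taken from the paper.
from lxml import html

def extract_cas_number(page_source):
    """Return the CAS registry number from a result page, if present."""
    tree = html.fromstring(page_source)  # lxml tolerates malformed HTML, much as JTidy does
    # Robust XPath: find the value cell by the label text in the neighbouring cell,
    # not by its position in the table, so layout changes are less likely to break it.
    hits = tree.xpath("//td[normalize-space(preceding-sibling::td[1]) = 'CAS No.']/text()")
    return hits[0].strip() if hits else None

example = "<table><tr><td>CAS No.</td><td> 64-17-5 </td></tr></table>"
print(extract_cas_number(example))  # -> 64-17-5
```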

2.
The amount of information contained in databases available on the Web has grown explosively in recent years. This information, known as the Deep Web, is heterogeneous and dynamically generated by querying back-end (relational) databases through Web Query Interfaces (WQIs), a special type of HTML form. Accessing the information of the Deep Web is a great challenge because it is usually not indexed by general-purpose search engines. Therefore, it is necessary to create efficient mechanisms to access, extract and integrate information contained in the Deep Web. Since WQIs are the only means of access to the Deep Web, their automatic identification plays an important role: it helps traditional search engines increase their coverage of, and access to, interesting information not available on the indexable Web. The accurate identification of Deep Web data sources is a key issue in the information retrieval process. In this paper we propose a new strategy for automatic discovery of WQIs. The proposal makes an adequate selection of HTML elements extracted from HTML forms, which are used in a set of heuristic rules that help to identify WQIs. The strategy uses machine learning algorithms to classify searchable (WQI) and non-searchable (non-WQI) HTML forms, together with a prototype-selection algorithm that removes irrelevant or redundant data from the training set. The internal content of Web Query Interfaces was analyzed to identify only those HTML elements that appear frequently and provide relevant information for WQI identification. For testing, we use three groups of datasets: two available at the UIUC repository and a new dataset, created with a generic crawler supported by human experts, that includes advanced and simple query interfaces. The experimental results show that the proposed strategy outperforms previously reported approaches.
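
As an illustration of the strategy described above, the sketch below classifies HTML forms as searchable (WQI) or non-searchable from a few structural features; the feature set, training forms and classifier choice are assumptions for illustration, not the authors' configuration.

```python
# Minimal sketch: form-feature classification into WQI vs. non-WQI (illustrative data).
from dataclasses import dataclass
from sklearn.tree import DecisionTreeClassifier

@dataclass
class FormFeatures:
    text_inputs: int      # number of free-text <input> fields
    selects: int          # number of <select> drop-downs
    password_inputs: int  # password fields usually mean login, not search
    textareas: int        # large free-text areas are rare in query interfaces

    def vector(self):
        return [self.text_inputs, self.selects, self.password_inputs, self.textareas]

# Hypothetical labelled forms: 1 = searchable (WQI), 0 = non-searchable.
train = [
    (FormFeatures(1, 3, 0, 0), 1),   # keyword box plus attribute drop-downs
    (FormFeatures(2, 5, 0, 0), 1),   # advanced search form
    (FormFeatures(1, 0, 1, 0), 0),   # login form
    (FormFeatures(4, 0, 0, 1), 0),   # registration / feedback form
]
X = [f.vector() for f, label in train]
y = [label for f, label in train]

clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([FormFeatures(1, 2, 0, 0).vector()]))  # likely [1]: a query interface
```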

3.
Towards Deeper Understanding of the Search Interfaces of the Deep Web
Many databases have become Web-accessible through form-based search interfaces (i.e., HTML forms) that allow users to specify complex and precise queries to access the underlying databases. In general, such a Web search interface can be considered as containing an interface schema with multiple attributes and rich semantic/meta-information; however, the schema is not formally defined in HTML. Many Web applications, such as Web database integration and deep Web crawling, require the construction of the schemas. In this paper, we first propose a schema model for representing complex search interfaces, and then present a layout-expression based approach to automatically extract the logical attributes from search interfaces. We also rephrase the identification of different types of semantic information as a classification problem, and design several Bayesian classifiers to help derive semantic information from extracted attributes. A system, WISE-iExtractor, has been implemented to automatically construct the schema from any Web search interfaces. Our experimental results on real search interfaces indicate that this system is highly effective.
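
The paper's idea of casting semantic-information identification as a classification problem can be sketched as follows; the attribute labels, semantic types, and the particular naive Bayes pipeline below are illustrative assumptions, not WISE-iExtractor itself.

```python
# Minimal sketch: guess the semantic type of an extracted interface attribute from its label.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

labels = ["departure date", "return date", "number of passengers",
          "title keywords", "author name", "price range"]
types = ["datetime", "datetime", "numeric",
         "text", "text", "numeric"]          # hypothetical semantic types

model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(labels, types)
print(model.predict(["publication date", "maximum price"]))  # likely ['datetime' 'numeric']
```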

4.
Classification of Deep Web Databases Based on Web Page Context
马军, 宋玲, 韩晓晖, 闫泼. Journal of Software (软件学报), 2008, 19(2): 267-274
This paper discusses several new techniques for improving the accuracy of Deep Web database classification, including using the content text of the HTML page as context for understanding the database content, and normalizing the label terms of the attributes of database tables. The algorithm for discovering a page's content text is based on several statistical features of the page's text blocks, and label normalization replaces synonymous label terms with a representative term. Hierarchical fuzzy sets are used to represent the domain and language knowledge discovered from given training examples, and a label-normalization algorithm based on this knowledge is given. On top of this preprocessing, a K-NN (k nearest neighbors) classification algorithm for Deep Web databases is presented, in which the semantic distance between databases combines the semantic distance between their tables with that between the content texts of the pages containing those tables. Classification experiments compare the algorithm's precision, recall, and overall F1 measure on pages with and without this preprocessing.
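
A minimal sketch of the K-NN step described above is given below: the distance between two Deep Web databases combines the similarity of their attribute label terms with the similarity of the content text of the hosting pages. The example databases, categories and the weight alpha are hypothetical, and TF-IDF cosine similarity stands in for the paper's semantic distance.

```python
# Minimal sketch: K-NN over a combined label-text / page-text distance (illustrative data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Each database: (attribute label terms, content text of the hosting page) -> category.
training = [
    (("title author isbn publisher", "online bookstore search new releases"), "books"),
    (("title author subject year", "library catalogue of books and journals"), "books"),
    (("departure arrival date passengers", "book cheap flights and airline tickets"), "flights"),
    (("from to depart return class", "flight search and airfare comparison"), "flights"),
]
vec_labels = TfidfVectorizer().fit([db[0] for db, _ in training])
vec_text = TfidfVectorizer().fit([db[1] for db, _ in training])

def distance(a, b, alpha=0.6):  # alpha: assumed weight between the two similarities
    sim_l = cosine_similarity(vec_labels.transform([a[0]]), vec_labels.transform([b[0]]))[0, 0]
    sim_t = cosine_similarity(vec_text.transform([a[1]]), vec_text.transform([b[1]]))[0, 0]
    return 1.0 - (alpha * sim_l + (1 - alpha) * sim_t)

def knn_classify(query_db, k=3):
    nearest = sorted((distance(query_db, db), cat) for db, cat in training)[:k]
    votes = [cat for _, cat in nearest]
    return max(set(votes), key=votes.count)

print(knn_classify(("keywords author year", "search millions of books online")))  # -> books
```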

5.
With the rapid growth of online databases, the number of accessible database resources has increased greatly, but their information cannot be obtained by traditional search engines: it is hidden behind Web sites and becomes an obstacle to fast and effective information access. To obtain the large amount of valuable hidden information in the Deep Web, heterogeneous online data sources need to be integrated so that a large amount of related information about an item can be compared within one domain. As more and more people now buy books online, this paper targets that popular consumption scenario and designs a Deep Web data integration system for the book-search domain; it provides an integrated query interface so that users can conveniently search and compare.
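
The query-mediation idea behind such an integrated interface can be sketched as below: one global book query is translated into the native form fields of each back-end bookstore. The source names and field mappings are hypothetical.

```python
# Minimal sketch: map a global query onto each source's native form parameters.
GLOBAL_TO_SOURCE = {
    "store_a": {"title": "q_title", "author": "q_author", "isbn": "isbn13"},
    "store_b": {"title": "book_name", "author": "writer", "isbn": "code"},
}

def translate(global_query, source):
    """Rewrite {'title': ..., 'author': ...} into the field names one source expects."""
    mapping = GLOBAL_TO_SOURCE[source]
    return {mapping[k]: v for k, v in global_query.items() if k in mapping}

print(translate({"title": "Deep Web", "author": "Bergman"}, "store_b"))
# -> {'book_name': 'Deep Web', 'writer': 'Bergman'}
```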

6.
Templates are pieces of HTML code common to a set of Web pages, usually adopted by content providers to enhance the uniformity of layout and navigation of their Web sites. They are usually generated using authoring/publishing tools or by programs that build HTML pages to publish content from a database. In spite of their usefulness, the content of templates can negatively affect the quality of results produced by systems that automatically process information available in Web sites, such as search engines, clustering and automatic categorization programs. Further, the information available in templates is redundant, and thus processing and storing such information just once for a set of pages may save computational resources. In this paper, we present and evaluate methods for detecting templates in a scenario where multiple templates can be found in a collection of Web pages. Most previous work has studied template detection algorithms in a scenario where the collection has just a single template. The scenario with multiple templates is more realistic and, as discussed here, raises important questions that may require extensions and adjustments in previously proposed template detection algorithms. We show how to apply and evaluate two template detection algorithms in this scenario, creating solutions for detecting multiple templates. The methods studied partition the input collection into clusters whose pages contain common HTML paths and share a high number of HTML nodes, and then apply a single-template detection procedure to each cluster. We also propose a new algorithm for single-template detection based on a restricted form of bottom-up tree-mapping that requires only a small set of pages to correctly identify a template and has worst-case linear complexity. Our experimental results over a representative set of Web pages show that our approach is efficient and scalable while obtaining accurate results.
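
The two-stage idea described above can be sketched as follows: pages are compared by the HTML tag paths they contain, and within a group of similar pages the paths present in every page are candidate template nodes. This shows only the flavour of the approach, not the paper's clustering or tree-mapping algorithms; the pages are hypothetical.

```python
# Minimal sketch: shared tag paths as a signal for template detection (illustrative pages).
from lxml import html

def tag_paths(page_source):
    """Collect root-to-node tag paths such as 'html/body/div/ul/li'."""
    tree = html.fromstring(page_source)
    paths = set()
    for el in tree.iter():
        if not isinstance(el.tag, str):        # skip comments / processing instructions
            continue
        parts = [a.tag for a in el.iterancestors()][::-1] + [el.tag]
        paths.add("/".join(parts))
    return paths

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

pages = [
    "<html><body><div id='nav'><ul><li>Home</li></ul></div><p>story one</p></body></html>",
    "<html><body><div id='nav'><ul><li>Home</li></ul></div><p>story two</p>"
    "<table><tr><td>x</td></tr></table></body></html>",
]
path_sets = [tag_paths(p) for p in pages]
print("page similarity:", round(jaccard(*path_sets), 2))          # high: likely same cluster
print("candidate template paths:", sorted(set.intersection(*path_sets)))
```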

7.
Deep Web information is obtained by submitting query terms to a page's search interface. General-purpose search engines crawl pages by following hyperlinks and therefore cannot index Deep Web data. To address this, a Deep Web crawler based on optimal queries is introduced: it generates optimal queries from clustered pages, submits them automatically, and finally indexes the query results. Experiments show that the system can crawl Deep Web data in multiple domains automatically and efficiently.
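
A minimal sketch of the crawling loop is shown below, with a simple frequency-based term picker standing in for the paper's optimal-query generation: candidate query terms come from the pages already retrieved, and the most promising unused term is submitted next. The sample pages are hypothetical.

```python
# Minimal sketch: generate the next query term from already-downloaded pages.
from collections import Counter

def next_query(downloaded_pages, used):
    """Pick the most frequent unused word across the pages fetched so far."""
    counts = Counter(w.lower() for page in downloaded_pages
                     for w in page.split() if w.isalpha() and len(w) > 3)
    for word, _ in counts.most_common():
        if word not in used:
            return word
    return None  # nothing left to try

pages = ["organic chemistry reaction database", "reaction kinetics of organic compounds"]
print(next_query(pages, used={"chemistry"}))  # e.g. 'organic' or 'reaction'
```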

8.
Since the Web encourages hypertext and hypermedia document authoring (e.g., HTML or XML), Web authors tend to create documents that are composed of multiple pages connected with hyperlinks. A Web document may be authored in multiple ways, such as: (1) all information in one physical page, or (2) a main page and the related information in separate linked pages. Existing Web search engines, however, return only physical pages containing keywords. We introduce the concept of information unit, which can be viewed as a logical Web document consisting of multiple physical pages as one atomic retrieval unit. We present an algorithm to efficiently retrieve information units. Our algorithm can perform progressive query processing. These functionalities are essential for information retrieval on the Web and large XML databases. We also present experimental results on synthetic graphs and real Web data.
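
A minimal sketch of the information-unit idea is given below: pages reachable from a start page are visited breadth-first, and those that contribute new query keywords are collected into one logical retrieval unit. The toy link graph is hypothetical, and this greedy traversal is only an illustration, not the paper's retrieval algorithm.

```python
# Minimal sketch: assemble a logical document from linked physical pages (illustrative graph).
from collections import deque

def find_information_unit(start, links, page_terms, query):
    """links: page -> linked pages; page_terms: page -> set of words it contains."""
    unit, missing = [], set(query)
    seen, queue = {start}, deque([start])
    while queue and missing:
        page = queue.popleft()
        if page_terms.get(page, set()) & missing:
            unit.append(page)
            missing -= page_terms[page]
        for nxt in links.get(page, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return unit if not missing else None   # None: the query is not covered from this start page

links = {"main": ["intro", "results"], "intro": [], "results": []}
page_terms = {"main": {"deep"}, "intro": {"web"}, "results": {"crawler"}}
print(find_information_unit("main", links, page_terms, {"deep", "web", "crawler"}))
# -> ['main', 'intro', 'results']: three physical pages returned as one information unit
```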

9.
With rapid socio-economic development and rising living standards and spending power, the consumer market for entertainment venues has grown rapidly. Because such venues host large and mixed crowds, their security management measures need to be strengthened. By deploying covert remote video surveillance in entertainment venues, public security organs can shift from passive response to active supervision. This not only greatly relieves the shortage of police manpower and deters illegal activities such as drug use and public disturbances, but the recorded footage, where necessary, can also serve as effective evidence for identifying and handling crimes.

10.
New technologies on the Internet develop at a dizzying pace. To meet the 21st century's demand for highly qualified, creative talent in socio-economic and technological development, modern educational technology provides the technical support needed for reforming teaching models. Combining ADO with ASP to access Web databases is an ideal solution for Web database access: with this technique we can build Web pages that serve database information and execute SQL commands within Web pages to query, insert, update and delete data. ADO can connect to many databases that support ODBC. This new technical approach is network-based teaching; the combination of Web technology and database technology, that is Web database technology, is profoundly changing the face of network applications.
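
As a rough illustration of the pattern described above (a Web script that runs SQL against a back-end database and renders the result into the page), the sketch below uses Python's sqlite3 as a stand-in for ASP with ADO/ODBC; the table and query are hypothetical.

```python
# Minimal sketch: a web request handler that queries a database and renders HTML.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE course (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO course (title) VALUES (?)", ("Web Database Technology",))

def handle_search(keyword):
    """Run a parameterized SELECT and return an HTML fragment for the response page."""
    rows = conn.execute("SELECT title FROM course WHERE title LIKE ?",
                        (f"%{keyword}%",)).fetchall()
    return "<ul>" + "".join(f"<li>{title}</li>" for (title,) in rows) + "</ul>"

print(handle_search("Database"))  # -> <ul><li>Web Database Technology</li></ul>
```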

11.
Tirri, H. Computer, 2003, 36(1): 115-116
Search - originally a simple keyword-lookup functionality for files - has become a fundamental Internet service. Yet its failings are well known. Easy Web access to billions of pages of information has a downside. The pages are titled mostly according to their authors' whims and use subtly different terminology that can fool a simple keyword search - sometimes intentionally, sometimes not. In addition, even the best general purpose search engines do not reach the "invisible Web" of back-end databases. Subject-specific search sites can help this situation but take time to maintain, rarely include a sophisticated interface, and seldom provide good coverage even in their topic area. Citeseer (http://clteseer.nj.nec.com/cs), a reference source for computer science research papers, is one successful exception. It is argued that the next generation of Internet search facilities must support more complex vehicles for interaction than keywords.

12.
Research on Acquiring the Hidden Web Based on Tag-Tree Object Extraction
Standard search engines today can retrieve only the small part of the World Wide Web known as the indexable Web. A large amount of Hidden Web information (estimated at 500 times the volume of the indexable Web) is invisible to these engines; it is hidden behind the search forms of Web pages and stored in large dynamic databases. This paper proposes a method for retrieving Hidden Web information, presents the system's framework, and discusses the key implementation techniques in detail. The system uses a new Tag-Tree-based Object Extraction method to automatically extract Hidden Web information from Web pages, and on this basis gives a structured query algorithm over the extracted Hidden Web information. The paper concludes with a discussion of the experimental results.
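
A minimal sketch of the tag-tree intuition is shown below: the child subtrees of each node are summarized by a tag signature, and a run of children sharing the same signature is taken as the list of data objects. This is only an approximation of the idea, not the paper's Tag-Tree-based Object Extraction method; the result page is hypothetical.

```python
# Minimal sketch: find repeated sibling subtrees in the tag tree and flatten them into records.
from collections import Counter
from lxml import html

def signature(el):
    """Summarize a subtree by its tag and the tags of its direct children."""
    return el.tag + "(" + ",".join(c.tag for c in el if isinstance(c.tag, str)) + ")"

def extract_objects(page_source, min_repeat=3):
    tree = html.fromstring(page_source)
    best = []
    for parent in tree.iter():
        if not isinstance(parent.tag, str):
            continue
        children = [c for c in parent if isinstance(c.tag, str)]
        sigs = Counter(signature(c) for c in children)
        if not sigs:
            continue
        sig, count = sigs.most_common(1)[0]
        if count >= min_repeat and count > len(best):
            best = [c for c in children if signature(c) == sig]
    # Flatten each repeated subtree into a whitespace-normalized text record.
    return [" ".join(" ".join(obj.itertext()).split()) for obj in best]

page = """<html><body><h1>Results</h1><table>
<tr><td>Aspirin</td><td>50-78-2</td></tr>
<tr><td>Caffeine</td><td>58-08-2</td></tr>
<tr><td>Ethanol</td><td>64-17-5</td></tr>
</table></body></html>"""
print(extract_objects(page))  # -> ['Aspirin 50-78-2', 'Caffeine 58-08-2', 'Ethanol 64-17-5']
```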

13.
The Web is a hypertext body of approximately 300 million pages that continues to grow at roughly a million pages per day. Page variation is more prodigious than the data's raw scale: taken as a whole, the set of Web pages lacks a unifying structure and shows far more authoring style and content variation than that seen in traditional text document collections. This level of complexity makes an "off-the-shelf" database management and information retrieval solution impossible. To date, index based search engines for the Web have been the primary tool by which users search for information. Such engines can build giant indices that let you quickly retrieve the set of all Web pages containing a given word or string. Experienced users can make effective use of such engines for tasks that can be solved by searching for tightly constrained key words and phrases. These search engines are, however, unsuited for a wide range of equally important tasks. In particular, a topic of any breadth will typically contain several thousand or million relevant Web pages. How then, from this sea of pages, should a search engine select the correct ones, those of most value to the user? Clever is a search engine that analyzes hyperlinks to uncover two types of pages: authorities, which provide the best source of information on a given topic; and hubs, which provide collections of links to authorities. We outline the thinking that went into Clever's design, report briefly on a study that compared Clever's performance to that of Yahoo and AltaVista, and examine how our system is being extended and updated.
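
The hub/authority computation that Clever builds on (Kleinberg's HITS algorithm) can be sketched as follows; the toy link graph is hypothetical.

```python
# Minimal sketch of HITS: authorities are pointed to by good hubs, hubs point to good authorities.
import math

def hits(links, iterations=50):
    pages = set(links) | {p for targets in links.values() for p in targets}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # authority(p) = sum of hub scores of the pages that link to p
        auth = {p: sum(hub[q] for q in links if p in links.get(q, [])) for p in pages}
        norm = math.sqrt(sum(v * v for v in auth.values())) or 1.0
        auth = {p: v / norm for p, v in auth.items()}
        # hub(p) = sum of authority scores of the pages p links to
        hub = {p: sum(auth[q] for q in links.get(p, [])) for p in pages}
        norm = math.sqrt(sum(v * v for v in hub.values())) or 1.0
        hub = {p: v / norm for p, v in hub.items()}
    return hub, auth

links = {"hub1": ["siteA", "siteB"], "hub2": ["siteA", "siteC"],
         "siteA": [], "siteB": [], "siteC": []}
hub, auth = hits(links)
print(max(auth, key=auth.get))  # -> 'siteA', the page endorsed by both hubs
```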

14.
Traditional search engines can index only surface Web pages, yet a large amount of high-quality information lies hidden deeper in the Web; for technical reasons, traditional engines cannot index these pages, known as the Deep Web. Since query interfaces are the only entry point to the Deep Web, obtaining Deep Web information requires determining which Web forms are Deep Web query interfaces. This paper introduces a method that uses a naive Bayes classification algorithm to automatically decide whether a Web form is a Deep Web query interface, and verifies its effectiveness experimentally.

15.
Design and Implementation of a Deep Web Crawler
With the rapid development of the World Wide Web, the Deep Web contains more and more accessible information. This information is obtained through forms on Web pages and is dynamically generated by the Deep Web's back-end databases. Traditional Web crawlers can only retrieve ordinary surface Web pages by following hyperlinks; since there are no static links pointing directly to Deep Web pages, most current search engines cannot discover or index them. Compared with the surface Web, however, the information contained in the Deep Web is of higher quality and greater value to us. This paper presents a method for designing a Deep Web crawler using the HtmlUnit framework. It can integrate sites from multiple domains and retrieve relevant information from back-end databases by analyzing query forms. Experimental results show that the method is effective.
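
Since HtmlUnit is a Java framework, the sketch below uses Python's requests library as a stand-in to show the core step such a crawler automates: filling in a query form's fields and submitting them to obtain a dynamically generated result page. The URL and field names are hypothetical.

```python
# Minimal sketch: submit a Deep Web query form and fetch the generated result page.
import requests

def submit_query(form_action, field_values):
    """POST the form fields and return the HTML of the result page."""
    response = requests.post(form_action, data=field_values, timeout=10)
    response.raise_for_status()
    return response.text

# A real crawler would first parse the form to learn its action URL, method and fields.
# html_page = submit_query("https://example.org/search", {"keyword": "deep web"})
```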

16.
Most Web pages contain location information, which is usually neglected by traditional search engines. Queries combining location and textual terms are called spatial textual Web queries. Based on the fact that traditional search engines pay little attention to the location information in Web pages, in this paper we study a framework to utilize location information for Web search. The proposed framework consists of an offline stage that extracts focused locations for crawled Web pages, and an online ranking stage that performs location-aware ranking of search results. The focused locations of a Web page refer to the most appropriate locations associated with the Web page. In the offline stage, we extract the focused locations and keywords from Web pages and map each keyword to specific focused locations, which forms a set of <keyword, location> pairs. In the online query processing stage, we extract keywords from the query and compute the ranking scores based on location relevance and the location-constrained scores for each query keyword. Experiments on various real datasets crawled from nj.gov, the BBC and the New York Times show that our focused-location extraction outperforms previous methods and that the proposed ranking algorithm performs best across different spatial textual queries.
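
A minimal sketch of the online ranking stage is shown below: each page carries <keyword, location> information extracted offline, and a query is scored by combining keyword relevance with how well the page's focused locations match the query location. The weighting scheme and data are hypothetical, not the paper's ranking function.

```python
# Minimal sketch: location-aware ranking combining text relevance and location match.
def score(page, query_terms, query_location, alpha=0.5):  # alpha: assumed weight
    text = len(page["keywords"] & query_terms) / max(len(query_terms), 1)
    loc = 1.0 if query_location in page["locations"] else 0.0
    return alpha * text + (1 - alpha) * loc

pages = {
    "p1": {"keywords": {"pizza", "restaurant"}, "locations": {"newark"}},
    "p2": {"keywords": {"pizza"}, "locations": {"trenton"}},
}
query_terms, query_location = {"pizza", "restaurant"}, "newark"
ranked = sorted(pages, key=lambda p: score(pages[p], query_terms, query_location), reverse=True)
print(ranked)  # -> ['p1', 'p2']
```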

17.
A Focused Crawler for Deep Web Data Sources
A large number of pages on the Internet are generated dynamically by back-end databases; these pages cannot be accessed through traditional search engines and are known as the Deep Web. Data source discovery is a key step in large-scale Deep Web data source integration. This paper proposes a focused crawling algorithm for Deep Web data sources. When evaluating the importance of a link, it jointly considers the relevance of the page to the topic and the information associated with the link itself. Experiments show that the method is effective.
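
A minimal sketch of the link-scoring idea is given below: a link's crawl priority combines the topical relevance of the page containing it with relevance cues in the link's own anchor text. The topic vocabulary and weight are hypothetical, not the paper's scoring function.

```python
# Minimal sketch: prioritize links by page relevance plus anchor-text relevance.
TOPIC_TERMS = {"book", "books", "isbn", "author", "search"}

def relevance(text):
    words = set(text.lower().split())
    return len(words & TOPIC_TERMS) / len(TOPIC_TERMS)

def link_priority(page_text, anchor_text, beta=0.4):  # beta: assumed weight
    return beta * relevance(page_text) + (1 - beta) * relevance(anchor_text)

print(link_priority("online bookstore search by author or isbn", "advanced book search"))
# -> 0.48 with these toy inputs; higher-priority links are crawled first
```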

18.
Our current understanding of Web structure is based on large graphs created by centralized crawlers and indexers. They obtain data almost exclusively from the so-called surface Web, which consists, loosely speaking, of interlinked HTML pages. The deep Web, by contrast, is information that is reachable over the Web, but that resides in databases; it is dynamically available in response to queries, not placed on static pages ahead of time. Recent estimates indicate that the deep Web has hundreds of times more data than the surface Web. The deep Web gives us reason to rethink much of the current doctrine of broad-based link analysis. Instead of looking up pages and finding links on them, Web crawlers would have to produce queries to generate relevant pages. Creating appropriate queries ahead of time is nontrivial without understanding the content of the queried sites. The deep Web's scale would also make it much harder to cache results than to merely index static pages. Whereas a static page presents its links for all to see, a deep Web site can decide whose queries to process and how well. It can, for example, authenticate the querying party before giving it any truly valuable information and links. It can build an understanding of the querying party's context in order to give proper responses, and it can engage in dialogues and negotiate for the information it reveals. The Web site can thus prevent its information from being used by unknown parties. What's more, the querying party can ensure that the information is meant for it.

19.
Computer Networks, 1999, 31(11-16): 1331-1345
This paper discusses how to augment the World Wide Web with an open hypermedia service (Webvise) that provides structures such as contexts, links, annotations, and guided tours stored in hypermedia databases external to the Web pages. This includes the ability for users collaboratively to create links from parts of HTML Web pages they do not own and support for creating links to parts of Web pages without writing HTML target tags. The method for locating parts of Web pages can locate parts of pages across frame hierarchies and it also supports certain repairs of links that break due to modified Web pages. Support for providing links to/from parts of non-HTML data, such as sound and movie, will be possible via interfaces to plug-ins and Java-based media players. The hypermedia structures are stored in a hypermedia database, developed from the Devise Hypermedia framework, and the service is available on the Web via an ordinary URL. The best user interface for creating and manipulating the structures is currently provided for the Microsoft Internet Explorer 4.x browser through COM integration that utilizes the Explorer's DOM representation of Web pages, but the structures can also be manipulated and used via special Java applets, and a pure proxy server solution is provided for users who only need to browse the structures. A user can create and use the external structures as 'transparency' layers on top of arbitrary Web pages, and can switch between viewing pages with one or more layers (contexts) of structures or without any external structures imposed on them.

20.
Hawking, D. Computer, 2006, 39(6): 86-88
In this article, we go behind the scenes and explain how this data processing "miracle" is possible. We focus on whole-of-Web search but note that enterprise search tools and portal search interfaces use many of the same data structures and algorithms. Search engines cannot and should not index every page on the Web. After all, thanks to dynamic Web page generators such as automatic calendars, the number of pages is infinite. To provide a useful and cost-effective service, search engines must reject as much low-value automated content as possible. In addition, they can ignore huge volumes of Web-accessible data, such as ocean temperatures and astrophysical observations, without harm to search effectiveness. Finally, Web search engines have no access to restricted content, such as pages on corporate intranets. What follows is not an inside view of any particular commercial engine - whose precise details are jealously guarded secrets - but a characterization of the problems that whole-of-Web search services face and an explanation of the techniques available to solve these problems.
