Similar Documents
20 similar documents found (search time: 31 ms)
1.
《Computer Networks》1999,31(11-16):1467-1479
When using traditional search engines, users have to formulate queries to describe their information need. This paper discusses a different approach to Web searching where the input to the search process is not a set of query terms but the URL of a page, and the output is a set of related Web pages. A related Web page is one that addresses the same topic as the original page. For example, www.washingtonpost.com is a page related to www.nytimes.com, since both are online newspapers. We describe two algorithms to identify related Web pages. These algorithms use only the connectivity information in the Web (i.e., the links between pages), not the content of pages or usage information. We have implemented both algorithms and measured their runtime performance. To evaluate the effectiveness of our algorithms, we performed a user study comparing them with Netscape's "What's Related" service (http://home.netscape.com/escapes/related/). Our study showed that the precision at 10 of our two algorithms is 73% and 51% better, respectively, than that of Netscape, even though Netscape uses content and usage-pattern information in addition to connectivity information.
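Connectivity-only algorithms of this kind typically build on cocitation: pages that are often linked from the same parent pages tend to address the same topic. Below is a minimal Python sketch of that signal, assuming a hypothetical in-memory link graph `LINKS`; it illustrates the cocitation idea only, not the paper's exact algorithms.

```python
from collections import Counter

# Hypothetical in-memory link graph: maps each URL to the set of URLs it links to.
LINKS = {
    "portal.example/a": {"www.nytimes.com", "www.washingtonpost.com"},
    "portal.example/b": {"www.nytimes.com", "www.washingtonpost.com", "news.example"},
    "portal.example/c": {"www.nytimes.com", "news.example"},
}

def related_pages(target: str, top_k: int = 10) -> list[tuple[str, int]]:
    """Rank pages by how often they are co-cited with `target`,
    i.e. linked from the same parent pages (connectivity only)."""
    parents = [url for url, outlinks in LINKS.items() if target in outlinks]
    cocited = Counter()
    for parent in parents:
        for sibling in LINKS[parent]:
            if sibling != target:
                cocited[sibling] += 1
    return cocited.most_common(top_k)

print(related_pages("www.nytimes.com"))
# e.g. [('www.washingtonpost.com', 2), ('news.example', 2)]
```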

2.
Given a user keyword query, current Web search engines return a list of individual Web pages ranked by their "goodness" with respect to the query. Thus, the basic unit for search and retrieval is an individual page, even though information on a topic is often spread across multiple pages. This degrades the quality of search results, especially for long or uncorrelated (multitopic) queries (in which individual keywords rarely occur together in the same document), where a single page is unlikely to satisfy the user's information need. We propose a technique that, given a keyword query, on the fly generates new pages, called composed pages, which contain all query keywords. The composed pages are generated by extracting and stitching together relevant pieces from hyperlinked Web pages and retaining links to the original Web pages. To rank the composed pages, we consider both the hyperlink structure of the original pages and the associations between the keywords within each page. Furthermore, we present and experimentally evaluate heuristic algorithms to efficiently generate the top composed pages. The quality of our method is compared to current approaches by using user surveys. Finally, we also show how our techniques can be used to perform query-specific summarization of Web pages.
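A minimal sketch of the keyword-coverage step behind composed pages: greedily stitch together fragments until every query keyword is covered, keeping a link back to each source page. The `fragments` shape and the greedy strategy are assumptions for illustration; the paper's actual ranking also uses the hyperlink structure and keyword associations.

```python
def compose_page(query_terms: set[str], fragments: list[dict]) -> list[dict]:
    """Greedily pick hyperlinked page fragments until every query keyword
    is covered, keeping a link back to each fragment's source page.
    `fragments` items look like {"url": ..., "text": ...} (assumed shape)."""
    remaining = {t.lower() for t in query_terms}
    composed = []
    while remaining:
        # Pick the fragment covering the most still-uncovered keywords.
        best = max(fragments,
                   key=lambda f: len(remaining & set(f["text"].lower().split())),
                   default=None)
        if best is None:
            break
        covered = remaining & set(best["text"].lower().split())
        if not covered:
            break  # no fragment covers any remaining keyword
        remaining -= covered
        composed.append({"source": best["url"], "snippet": best["text"]})
    return composed

frags = [{"url": "a.com/1", "text": "jaguar speed record"},
         {"url": "b.com/2", "text": "jaguar habitat rainforest"}]
print(compose_page({"jaguar", "habitat", "speed"}, frags))
```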

3.
Adapting Web pages for small-screen devices
We propose a page-adaptation technique that splits existing Web pages into smaller, logically related units. To do this, we must first solve two technical problems: how to detect an existing Web page's semantic structure, and how to split a Web page into smaller blocks based on that structure. To date, we've implemented our technique in Web browsers for mobile devices, in a proxy server for adapting Web pages on the fly, and as an authoring tool plug-in for converting existing Web pages. The Web page can then be adapted to form a two-level hierarchy with a thumbnail representation at the top level for providing a global view and an index to a set of subpages at the bottom level for detailed information.

4.
Hawking  D. 《Computer》2006,39(6):86-88
In this article, we go behind the scenes and explain how this data processing "miracle" is possible. We focus on whole-of-Web search but note that enterprise search tools and portal search interfaces use many of the same data structures and algorithms. Search engines cannot and should not index every page on the Web. After all, thanks to dynamic Web page generators such as automatic calendars, the number of pages is infinite. To provide a useful and cost-effective service, search engines must reject as much low-value automated content as possible. In addition, they can ignore huge volumes of Web-accessible data, such as ocean temperatures and astrophysical observations, without harm to search effectiveness. Finally, Web search engines have no access to restricted content, such as pages on corporate intranets. What follows is not an inside view of any particular commercial engine - whose precise details are jealously guarded secrets - but a characterization of the problems that whole-of-Web search services face and an explanation of the techniques available to solve these problems.

5.
Effectively finding relevant Web pages from linkage information
This paper presents two hyperlink analysis-based algorithms to find relevant pages for a given Web page (URL). The first algorithm comes from the extended cocitation analysis of the Web pages. It is intuitive and easy to implement. The second one takes advantage of linear algebra theories to reveal deeper relationships among the Web pages and to identify relevant pages more precisely and effectively. The experimental results show the feasibility and effectiveness of the algorithms. These algorithms could be used for various Web applications, such as enhancing Web search. The ideas and techniques in this work would be helpful to other Web-related research.
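For the second, linear-algebra-based algorithm, one standard way to expose deeper link relationships is a low-rank SVD of the link matrix, so pages that are co-linked only transitively still end up close in the latent space. The sketch below, using numpy and a made-up three-parent link matrix, shows this general technique rather than the paper's precise formulation.

```python
import numpy as np

# Hypothetical parent-page x candidate-page link matrix:
# A[i, j] = 1 if parent page i links to candidate page j.
pages = ["nytimes", "washingtonpost", "news-blog", "cooking-site"]
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)

# A low-rank SVD projects pages into a latent link space where
# deeper (transitive) co-linking relationships become visible.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
page_vecs = Vt[:k].T * S[:k]  # one latent vector per candidate page

def latent_similarity(i: int, j: int) -> float:
    u, v = page_vecs[i], page_vecs[j]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(latent_similarity(0, 1))  # nytimes vs washingtonpost: high
print(latent_similarity(0, 3))  # nytimes vs cooking-site: low
```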

6.
Web Data Cleansing for Information Retrieval
The uneven quality, low credibility, and heavy redundancy of Web data waste enormous storage and computing resources in Web information retrieval systems and directly limit retrieval performance. Existing Web data cleansing methods were not designed specifically for the needs of Web information retrieval and therefore fall short. Based on an analysis of users' query behavior, this paper proposes a cleansing algorithm that uses query-independent feature analysis and learned prior knowledge to estimate the probability that a page will appear as a retrieval result. Experiments on the standard TREC (Text REtrieval Conference) test collection show that the algorithm can remove more than 45% of the corpus as low-quality pages while retaining nearly 95% of the retrieval result pages, which suggests that higher retrieval performance can be achieved with less storage and computation.
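A minimal sketch of the scoring idea: combine query-independent page features through a learned model into a probability of ever being a result page, and drop pages below a threshold before indexing. The feature set and weights below are illustrative stand-ins, not values from the paper.

```python
import math

def result_page_probability(page: dict) -> float:
    """Estimate the probability that a page will ever appear as a search
    result, from query-independent features only. Feature names and
    weights are illustrative stand-ins for ones learned from labeled
    training data, not the paper's actual model."""
    features = {
        "log_body_length": math.log1p(page["body_chars"]),
        "anchor_ratio": page["anchor_chars"] / max(page["body_chars"], 1),
        "in_degree": math.log1p(page["in_links"]),
        "depth": page["url_depth"],
    }
    weights = {"log_body_length": 0.6, "anchor_ratio": -2.5,
               "in_degree": 0.9, "depth": -0.4}
    bias = -3.0
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))  # logistic link: score -> probability

# Pages below a threshold can be dropped from the index before retrieval.
page = {"body_chars": 4200, "anchor_chars": 300, "in_links": 12, "url_depth": 2}
print(round(result_page_probability(page), 3))
```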

7.
熊忠阳, 蔺显强, 张玉芳, 牙漫 《计算机工程》(Computer Engineering) 2013, (12): 200-203, 210
Web pages contain body text as well as content unrelated to the body, and this unrelated content harms Web page classification, storage, and retrieval. To reduce its impact, this paper proposes a body-text extraction method that combines the structural features and text features of a page. Regular expressions first remove irrelevant elements as an initial filtering pass. The page is then segmented linearly into blocks according to its structural features, and each block is labeled as a link block or a text block by its text features. Runs of consecutive noise blocks are then used to locate the body region and extract the page's body text. Experimental results show that the method extracts page body content quickly and accurately.
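A rough Python sketch of this pipeline under simplifying assumptions: regular expressions strip scripts and styles, tag boundaries split the page linearly into blocks, the anchor-text ratio separates link blocks from text blocks, and the longest run of text blocks is taken as the body. The 0.5 threshold is an assumed value.

```python
import re

def extract_body(html: str, threshold: float = 0.5) -> str:
    """Sketch of structure-plus-text body extraction: strip irrelevant
    elements with regular expressions, split the page linearly into blocks,
    label each block as a link block or a text block by its anchor-text
    ratio, and keep the longest run of text blocks as the body."""
    # First pass: remove elements that never contain body text.
    html = re.sub(r"(?is)<(script|style|!--).*?(\1>|-->)", " ", html)
    blocks = re.split(r"(?i)</?(?:div|table|tr|p)\b[^>]*>", html)
    best, current = [], []
    for block in blocks:
        anchor_text = "".join(re.findall(r"(?is)<a[^>]*>(.*?)</a>", block))
        plain = re.sub(r"<[^>]+>", "", block).strip()
        if not plain:
            continue
        # Link blocks (navigation, related-article lists) have a high
        # ratio of anchor text; text blocks belong to the body.
        if len(anchor_text) / len(plain) < threshold:
            current.append(plain)
        else:
            best, current = max(best, current, key=len), []
    return "\n".join(max(best, current, key=len))
```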

8.
The Web has evolved into a dominant digital medium for conducting many types of online transactions such as shopping, paying bills, making travel plans, etc. Such transactions typically involve a number of steps spanning several Web pages. For sighted users these steps are relatively straightforward to do with graphical Web browsers. But they pose tremendous challenges for visually impaired individuals. This is because screen readers, the dominant assistive technology used by visually impaired users, function by speaking out the screen’s content serially. Consequently, using them for conducting transactions can cause considerable information overload. But usually one needs to browse only a small fragment of a Web page to do a step of a transaction (e.g., choosing an item from a search results list). Based on this observation this paper presents a model-directed transaction framework to identify, extract and aurally render only the “relevant” page fragments in each step of a transaction. The framework uses a process model to encode the state of the transaction and a concept model to identify the page fragments relevant for the transaction in that state. We also present algorithms to mine such models from click stream data generated by transactions and experimental evidence of the practical effectiveness of our models in improving user experience when conducting online transactions with non-visual modalities.

9.
This paper presents the QA-Pagelet as a fundamental data preparation technique for large-scale data analysis of the deep Web. To support QA-Pagelet extraction, we present the Thor framework for sampling, locating, and partitioning the QA-Pagelets from the deep Web. Two unique features of the Thor framework are 1) the novel page clustering for grouping pages from a deep Web source into distinct clusters of control-flow dependent pages and 2) the novel subtree filtering algorithm that exploits the structural and content similarity at subtree level to identify the QA-Pagelets within highly ranked page clusters. We evaluate the effectiveness of the Thor framework through experiments using both simulation and real data sets. We show that Thor performs well over millions of deep Web pages and over a wide range of sources, including e-commerce sites, general and specialized search engines, corporate Web sites, medical and legal resources, and several others. Our experiments also show that the proposed page clustering algorithm achieves low-entropy clusters, and the subtree filtering algorithm identifies QA-Pagelets with excellent precision and recall.

10.
Cellular phones are widely used to access the Web. However, most available Web pages are designed for desktop PCs, and it is inconvenient to browse these large Web pages on a cellular phone with a small screen and poor interfaces. Users who browse a Web page on a cellular phone have to scroll through the whole page to find the desired content, and must then search and scroll within that content in detail to get useful information. This paper describes the design and implementation of a novel Web browsing system for cellular phones. This system includes a Web page overview to reduce scrolling operations when finding objective content within the page. Furthermore, it adaptively presents content according to its characteristics to reduce burdensome operations when searching within content.

11.
To increase the commercial value and accessibility of pages, most content sites tend to publish their pages with intrasite redundant information, such as navigation panels, advertisements, and copyright announcements. Such redundant information increases the index size of general search engines and causes page topics to drift. In this paper, we study the problem of mining intrapage informative structure in news Web sites in order to find and eliminate redundant information. Note that intrapage informative structure is a subset of the original Web page and is composed of a set of fine-grained and informative blocks. The intrapage informative structures of pages in a news Web site contain only anchors linking to news pages or bodies of news articles. We propose an intrapage informative structure mining system called WISDOM (Web intrapage informative structure mining based on the document object model), which applies Information Theory to DOM tree knowledge in order to build the structure. WISDOM splits a DOM tree into many small subtrees and applies a top-down informative block searching algorithm to select a set of candidate informative blocks. The structure is built by expanding the set using proposed merging methods. Experiments on several real news Web sites show high precision and recall rates, validating WISDOM's practical applicability.
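The information-theoretic intuition can be shown compactly: terms that occur on nearly every page of a site (navigation labels, copyright lines) carry almost no information, while body terms carry a lot. The scoring below is an illustrative average self-information measure over a hypothetical document-frequency table, not WISDOM's exact formula.

```python
import math
from collections import Counter

def block_information(block_terms: list[str], df: Counter, n_pages: int) -> float:
    """Score a DOM block by the information content of its terms, in the
    spirit of information-theoretic informative-structure mining. Terms
    appearing on almost every page of a site carry little information;
    body terms carry more."""
    if not block_terms:
        return 0.0
    total = 0.0
    for term in block_terms:
        p = df[term] / n_pages             # fraction of site pages containing the term
        total += -math.log2(max(p, 1e-9))  # self-information of seeing the term
    return total / len(block_terms)        # average bits per term

# df maps term -> number of pages on the site containing it (hypothetical values).
df = Counter({"copyright": 98, "home": 100, "earthquake": 3, "relief": 5})
print(block_information(["home", "copyright"], df, n_pages=100))     # ~0.015 bits
print(block_information(["earthquake", "relief"], df, n_pages=100))  # ~4.7 bits
```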

12.
A rapidly increasing number of Web databases have become accessible via their HTML form-based query interfaces. Query result pages are dynamically generated in response to user queries; they encode structured data and are displayed for human use. Query result pages usually contain other types of information in addition to query results, e.g., advertisements, navigation bars, etc. The problem of extracting structured data from query result pages is critical for Web data integration applications, such as comparison shopping and meta-search engines, and has been intensively studied, with a number of approaches proposed. As the structures of Web pages become more and more complex, the existing approaches start to fail, and most of them do not remove irrelevant content that may affect the accuracy of data record extraction. We propose an automated approach for Web data extraction. First, it uses visual features and query terms to identify data sections and extracts the data records in those sections; it also derives several content and visual features of the visual blocks in a data section and uses them to filter out noise blocks. Second, it measures the similarity between data items in different data records based on their visual and content features and aligns them into groups so that the data in the same group have the same semantics. Experiments with a large set of Web query result pages in different domains show that the proposed approach is highly effective.

13.
A New Intelligent Search Method for Internet Information
This paper proposes a new intelligent search method for Internet information. The method can accurately and effectively locate target pages hidden inside sites of the same type, whose organization and content descriptions are similar. To this end, it adopts a new representation of search knowledge that organically combines inter-page link features with content-description features. Based on this representation and the knowledge it expresses, the method not only performs depth-first intelligent search over a site's pages, but also learns from its own search process and results to acquire more and better search knowledge. Preliminary experimental results show that the method has strong deep-page search capability when searching for target pages in sites of the same type.
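A minimal sketch of the depth-first, knowledge-guided part of such a method: follow only anchors that the learned search knowledge scores as promising, backtracking when a branch is exhausted. `fetch`, `score_link`, and `is_target` are assumed callables standing in for the paper's knowledge representation and self-learning components.

```python
def guided_dfs(start_url, fetch, score_link, is_target, max_depth=6, seen=None):
    """Depth-first site search that follows only promising links.
    `fetch(url)` -> (page_text, [(anchor_text, url), ...]) and
    `score_link(anchor_text, url)` -> float are assumed to be supplied;
    the score stands in for the learned link/content search knowledge."""
    seen = set() if seen is None else seen
    seen.add(start_url)
    text, links = fetch(start_url)
    if is_target(text):
        return start_url
    if max_depth == 0:
        return None
    # Follow the most promising anchors first, depth-first.
    for anchor, url in sorted(links, key=lambda l: score_link(*l), reverse=True):
        if url not in seen and score_link(anchor, url) > 0:
            found = guided_dfs(url, fetch, score_link, is_target,
                               max_depth - 1, seen)
            if found:
                return found
    return None
```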

14.
Despite difficulties in using the Web, older adults are motivated to use it. This paper reports on work underway to ease Web access for this population. Although Web accessibility standards provide specifications that Web content providers must incorporate if their pages are to be accessible, these standards do not guarantee a good experience for all Web users. This paper discusses user controls that make a number of dynamic adaptations to page presentation and input and that can greatly increase the usability of Web pages for older users. The paper discusses the authors' original work on the topic, lessons learned, and usage patterns. Current extensions to that work are also discussed.

15.
With the growth of Web technology and the ever-increasing variety of information on the Web, returning high-quality, relevant results has become a great challenge for Web search engines. PageRank and HITS are the two most important link-based ranking algorithms and are used in commercial search engines. In PageRank, however, each page's PR value is distributed evenly among the pages it links to, completely ignoring the quality differences between them, which makes the algorithm easy prey for today's Web spam. Based on this observation, this paper proposes an improvement to PageRank called Page Quality Based PageRank (QPR). QPR dynamically evaluates the quality of each page and distributes each page's PR value fairly, according to the quality of the target pages. Comprehensive experiments on several data sets with different characteristics show that QPR substantially improves result ranking and effectively reduces the influence of spam pages on query results.
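The quality-proportional distribution can be written as a small variant of the standard PageRank iteration: instead of giving each out-neighbor an even 1/|out| share, each target receives a share proportional to its quality score. The sketch below assumes quality scores are supplied externally; how QPR actually estimates page quality is not reproduced here.

```python
def quality_pagerank(links, quality, damping=0.85, iters=50):
    """PageRank variant in the spirit of the QPR idea: instead of splitting
    a page's rank evenly among its out-links, distribute it in proportion
    to an externally estimated quality score of each target page.
    `links[u]` is the list of pages u links to; `quality[v]` > 0."""
    pages = list(links)
    pr = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        nxt = {p: (1 - damping) / len(pages) for p in pages}
        for u, outs in links.items():
            if not outs:
                continue
            z = sum(quality[v] for v in outs)  # normalizer over u's targets
            for v in outs:
                # Quality-proportional share replaces the even 1/|outs| split.
                nxt[v] += damping * pr[u] * quality[v] / z
        pr = nxt
    return pr

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
quality = {"a": 1.0, "b": 0.2, "c": 0.9}   # e.g. spam pages get low quality
print(quality_pagerank(links, quality))
```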

16.
Mapping the semantics of Web text and links
Search engines use content and links to search, rank, cluster, and classify Web pages. These information discovery applications use similarity measures derived from this data to estimate relatedness between pages. However, little research exists on the relationships between similarity measures or between such measures and semantic similarity. The author analyzes and visualizes similarity relationships in massive Web data sets to identify how to integrate content and link analysis for approximating relevance. He uses human-generated metadata from Web directories to estimate semantic similarity and semantic maps to visualize relationships between content and link cues and what these cues suggest about page meaning. Highly heterogeneous topical maps point to a critical dependence on search context.

17.
To address the problem of mining massive Web page data, this paper proposes a vector-space algorithm for computing the similarity of Web page content, together with a software system framework. A search engine is used to collect the URLs of Chinese-encoded pages from the mass of Web pages; the Chinese characters of each page are then extracted and analyzed into content words, and a vector space model is built to compute the similarity between page contents. The system narrows the range of documents whose similarity must be computed, saving considerable time and space, and lays a good foundation for classifying, querying, and intelligently processing Web information.
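The core similarity computation is standard cosine similarity in a term vector space. A minimal sketch, assuming the pages' Chinese text has already been segmented into content words (e.g., with a segmenter such as jieba):

```python
import math
from collections import Counter

def cosine_similarity(terms_a: list[str], terms_b: list[str]) -> float:
    """Vector-space similarity of two pages represented as bags of
    content words (assumed already segmented, since Chinese text
    needs a word segmenter before this step)."""
    a, b = Counter(terms_a), Counter(terms_b)
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

page1 = ["搜索", "引擎", "网页", "相似", "计算"]
page2 = ["网页", "内容", "相似", "计算", "模型"]
print(round(cosine_similarity(page1, page2), 3))  # 0.6
```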

18.
Automatic Segmentation of Information Blocks in Web Pages
With the growth of the Internet and the rapid increase in the number of Web pages, obtaining information quickly and effectively is becoming ever more important. A class of Web pages contains multiple information units that are laid out compactly, share a similar presentation style, and follow similar HTML patterns, for example the many posts on a BBS page; each such unit is called an information block. Applications such as information extraction and filtering must first segment the original page into suitable information blocks for later processing. This paper proposes a method for automatically segmenting a Web page into information blocks: it first builds a structured HTML parse tree of the page, then identifies the subtrees that contain information blocks according to the amount of effective text they contain, and finally splits them using the 2-rank PAT algorithm based on subtree depth. Experiments on extracting information blocks from BBS pages demonstrate the method's effectiveness.
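A compact sketch of the first two steps, parse-tree construction and effective-text-based subtree selection, using Python's built-in html.parser; the thresholds are assumed values and the final 2-rank PAT split is omitted.

```python
from html.parser import HTMLParser

VOID = {"br", "img", "hr", "meta", "link", "input"}

class TreeBuilder(HTMLParser):
    """Build a simple parse tree: each node records tag, children, text volume."""
    def __init__(self):
        super().__init__()
        self.root = {"tag": "root", "children": [], "text": 0}
        self.stack = [self.root]
    def handle_starttag(self, tag, attrs):
        if tag in VOID:
            return  # void elements never get an end tag; skip them
        node = {"tag": tag, "children": [], "text": 0}
        self.stack[-1]["children"].append(node)
        self.stack.append(node)
    def handle_endtag(self, tag):
        if tag in VOID or len(self.stack) == 1:
            return
        done = self.stack.pop()
        self.stack[-1]["text"] += done["text"]  # propagate text volume upward
    def handle_data(self, data):
        self.stack[-1]["text"] += len(data.strip())

def find_block_container(node, min_blocks=3, min_text=20):
    """Return the deepest node whose children look like repeated information
    blocks: several similarly tagged subtrees, each with effective text.
    (Thresholds are illustrative; the paper splits further with 2-rank PAT.)"""
    rich = [c for c in node["children"] if c["text"] >= min_text]
    if len(rich) >= min_blocks and len({c["tag"] for c in rich}) == 1:
        deeper = [find_block_container(c, min_blocks, min_text) for c in rich]
        return next((d for d in deeper if d), None) or node
    for child in node["children"]:
        hit = find_block_container(child, min_blocks, min_text)
        if hit:
            return hit
    return None

html_source = ("<html><body><div>"
               "<table><tr><td>first post body text here....</td></tr></table>"
               "<table><tr><td>second post body text here...</td></tr></table>"
               "<table><tr><td>third post body text here....</td></tr></table>"
               "</div></body></html>")
builder = TreeBuilder()
builder.feed(html_source)
container = find_block_container(builder.root)
blocks = container["children"] if container else []
print(len(blocks))  # 3 information blocks (the three table subtrees)
```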

19.
Many different segmentation algorithms can divide a Web page into blocks. The purpose of studying page segmentation is to serve further research in related areas, for example block-based search that weighs the importance of each block's content, locating a page's important topics or content, extracting a page's main content or topics, and block-based Web archiving. This paper first defines and classifies the Web page segmentation problem and then analyzes the principles of several typical segmentation algorithms, providing useful references for further research on Web page segmentation.

20.
The World Wide Web is a continuously growing giant, and within the next few years Web content will surely increase tremendously. Hence, there is a great need for algorithms that can accurately classify Web pages. Automatic Web page classification differs significantly from traditional text classification because of the additional information provided by the HTML structure. Recently, several techniques have arisen from combinations of artificial intelligence and statistical approaches. However, it is not a simple matter to find an optimal classification technique for Web pages. This paper introduces a novel strategy for vertical Web page classification, called Classification using Multi-layered Domain Ontology (CMDO). It employs several Web mining techniques and depends mainly on a proposed multi-layered domain ontology. To promote classification accuracy, CMDO includes a distiller that rejects pages related to other domains. CMDO also employs a novel classification technique called Graph Based Classification (GBC), which has pioneering features that other techniques do not have, such as outlier rejection and pruning. Experimental results show that CMDO outperforms recent techniques, with better precision, recall, and classification accuracy.


