Similar Documents
20 similar documents found.
1.
Mining Navigation Patterns Using a Sequence Alignment Method   (Total citations: 2; self: 0; others: 2)
In this article, a new method is illustrated for mining navigation patterns on a web site. Instead of clustering patterns by means of a Euclidean distance measure, in this approach users are partitioned into clusters using a non-Euclidean distance measure called the Sequence Alignment Method (SAM). This method partitions navigation patterns according to the order in which web pages are requested and handles the problem of clustering sequences of different lengths. The performance of the algorithm is compared with the results of a method based on Euclidean distance measures. SAM is validated by means of user-traffic data of two different web sites. Empirical results show that SAM identifies sequences with similar behavioral patterns not only with regard to content, but also considering the order of pages visited in a sequence.
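The abstract does not spell out the alignment scoring; as a minimal sketch (not the authors' SAM implementation), an order-aware distance between two page-request sequences can be computed with a classic global-alignment/edit-distance recurrence and handed to any distance-based clusterer. The toy sessions and the gap/mismatch costs below are illustrative assumptions.

```python
from itertools import combinations
from scipy.cluster.hierarchy import linkage, fcluster

def alignment_distance(a, b, gap=1, mismatch=1):
    """Global alignment (edit) distance between two page-ID sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * gap
    for j in range(1, n + 1):
        d[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else mismatch
            d[i][j] = min(d[i - 1][j] + gap,        # skip a page in a
                          d[i][j - 1] + gap,        # skip a page in b
                          d[i - 1][j - 1] + cost)   # match / substitute
    return d[m][n]

# Hypothetical sessions: each is the ordered list of pages a user requested.
sessions = [["home", "products", "cart"],
            ["home", "products", "cart", "checkout"],
            ["home", "about", "contact"],
            ["about", "contact"]]

# Condensed pairwise distance vector, then average-link clustering.
condensed = [alignment_distance(a, b) for a, b in combinations(sessions, 2)]
labels = fcluster(linkage(condensed, method="average"), t=2, criterion="maxclust")
print(labels)   # e.g. [1 1 2 2]: order-aware grouping of navigation patterns
```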

2.
Correlation-Based Web Document Clustering for Adaptive Web Interface Design   (Total citations: 2; self: 2; others: 2)
A great challenge for web site designers is how to ensure users' easy and efficient access to important web pages. In this paper we present a clustering-based approach to address this problem. Our approach is to perform efficient and effective correlation analysis based on web logs and to construct clusters of web pages that reflect the co-visit behavior of web site users. We present a novel approach for adapting previous clustering algorithms, designed for databases, to the problem domain of web page clustering, and show that our new methods can generate high-quality clusters for very large web logs where previous methods fail. Based on the high-quality clustering results, we then apply the mined clustering knowledge to the problem of adapting web interfaces to improve users' performance. We develop an automatic method for web interface adaptation by introducing index pages that minimize overall user browsing costs. The index pages provide shortcuts that ensure users reach their target web pages quickly, and we solve a previously open problem of how to determine an optimal number of index pages. We show empirically, through experiments on several realistic web log files, that our approach performs better than many previous algorithms. Received 25 November 2000 / Revised 15 March 2001 / Accepted in revised form 14 May 2001
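The paper's exact correlation measure is not reproduced in the abstract; purely as an illustrative sketch, a co-visit score between pages can be derived from session logs and then fed to a clustering step. The sessions and the Jaccard-style normalization below are assumptions.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical sessions reconstructed from a web log: one set of pages per visit.
sessions = [{"a", "b", "c"}, {"a", "b"}, {"c", "d"}, {"a", "b", "d"}]

visits = defaultdict(int)      # page -> number of sessions containing it
covisits = defaultdict(int)    # (page, page) -> number of sessions containing both
for s in sessions:
    for p in s:
        visits[p] += 1
    for p, q in combinations(sorted(s), 2):
        covisits[(p, q)] += 1

def covisit_correlation(p, q):
    """Jaccard-style co-visit score; the paper's own measure may differ."""
    both = covisits[tuple(sorted((p, q)))]
    return both / (visits[p] + visits[q] - both)

print(covisit_correlation("a", "b"))   # high: a and b are usually seen together
print(covisit_correlation("a", "c"))   # lower: rarely co-visited
```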

3.
A text mining approach for automatic construction of hypertexts   (Total citations: 1; self: 0; others: 1)
The research on automatic hypertext construction has emerged rapidly in the last decade because there is an urgent need to translate the gigantic amount of legacy documents into web pages. Unlike traditional ‘flat’ texts, a hypertext contains a number of navigational hyperlinks that point to related hypertexts or to locations within the same hypertext. Traditionally, these hyperlinks were constructed by the creators of the web pages, with or without the help of authoring tools. However, the gigantic amount of documents produced each day makes such manual construction impractical. Thus an automatic hypertext construction method is necessary for content providers to efficiently produce adequate information that can be used by web surfers. Although most web pages contain non-textual data such as images, sounds, and video clips, text data still contribute the major part of the information about the pages. Therefore, it is not surprising that most automatic hypertext construction methods inherit from traditional information retrieval research. In this work, we propose a new automatic hypertext construction method based on a text mining approach. Our method applies the self-organizing map algorithm to cluster the flat text documents in a training corpus and generates two maps. We then use these maps to identify the sources and destinations of some important hyperlinks within these training documents. The constructed hyperlinks are then inserted into the training documents to translate them into hypertext form. Such translated documents will form the new corpus. Incoming documents can also be translated into hypertext form and added to the corpus through the same approach. Our method has been tested on a set of flat text documents collected from a newswire site. Although we only use Chinese text documents, our approach can be applied to any documents that can be transformed into a set of index terms.

4.
Interval Set Clustering of Web Users with Rough K-Means   (Total citations: 1; self: 0; others: 1)
Data collection and analysis in web mining faces certain unique challenges. Due to a variety of reasons inherent in web browsing and web logging, the likelihood of bad or incomplete data is higher than conventional applications. The analytical techniques in web mining need to accommodate such data. Fuzzy and rough sets provide the ability to deal with incomplete and approximate information. Fuzzy set theory has been shown to be useful in three important aspects of web and data mining, namely clustering, association, and sequential analysis. There is increasing interest in research on clustering based on rough set theory. Clustering is an important part of web mining that involves finding natural groupings of web resources or web users. Researchers have pointed out some important differences between clustering in conventional applications and clustering in web mining. For example, the clusters and associations in web mining do not necessarily have crisp boundaries. As a result, researchers have studied the possibility of using fuzzy sets in web mining clustering applications. Recent attempts have used genetic algorithms based on rough set theory for clustering. However, the genetic algorithms based clustering may not be able to handle the large amount of data typical in a web mining application. This paper proposes a variation of the K-means clustering algorithm based on properties of rough sets. The proposed algorithm represents clusters as interval or rough sets. The paper also describes the design of an experiment including data collection and the clustering process. The experiment is used to create interval set representations of clusters of web visitors.
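As a hedged sketch of the idea (following the general Lingras-and-West style of rough K-means rather than this paper's exact formulation), each cluster is kept as an interval set: a lower approximation of objects that certainly belong to it and an upper approximation of objects that possibly do. The weights, ratio threshold, and toy visitor vectors are illustrative assumptions.

```python
import numpy as np

def rough_kmeans(X, k, w_lower=0.7, w_upper=0.3, ratio=1.25, n_iter=20, seed=0):
    """Minimal rough K-means: every cluster is an interval (rough) set."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        lower = [[] for _ in range(k)]          # indices certainly in the cluster
        boundary = [[] for _ in range(k)]       # indices possibly in the cluster
        for i, x in enumerate(X):
            d = np.linalg.norm(centers - x, axis=1)
            nearest = int(np.argmin(d))
            close = [j for j in range(k)
                     if j != nearest and d[j] <= ratio * d[nearest]]
            if close:                           # ambiguous: several upper approximations
                for j in [nearest] + close:
                    boundary[j].append(i)
            else:                               # unambiguous: one lower approximation
                lower[nearest].append(i)
        for j in range(k):                      # weighted centroid update
            lo, bd = X[lower[j]], X[boundary[j]]
            if len(lo) and len(bd):
                centers[j] = w_lower * lo.mean(axis=0) + w_upper * bd.mean(axis=0)
            elif len(lo):
                centers[j] = lo.mean(axis=0)
            elif len(bd):
                centers[j] = bd.mean(axis=0)
    return centers, lower, boundary

# Toy feature vectors describing web visitors (e.g. visit counts per page group).
X = np.array([[9, 1], [8, 2], [1, 9], [2, 8], [5, 5]], dtype=float)
centers, lower, boundary = rough_kmeans(X, k=2)
print(lower)      # visitors that clearly belong to one cluster
print(boundary)   # visitors shared by the upper approximations of both clusters
```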

5.
Improving a Clustering Algorithm Based on Web Usage Mining   (Total citations: 1; self: 0; others: 1)
Clustering algorithms in web usage mining group users and pages with similar characteristics so that useful and interesting information can be extracted from them. Through an in-depth analysis of the Hamming-distance-based clustering algorithm, we point out its unreasonable and inefficient aspects, then introduce a weighted bipartite graph to represent the whole data set, modify the Hamming distance formula so that the similarity between two objects is described more accurately, and improve the algorithm accordingly. Experimental results show that the improved algorithm is both accurate and efficient.
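The weighted bipartite-graph formulation itself is not given in the abstract; the sketch below only illustrates the general idea of a weighted Hamming-style similarity between binary user-page visit vectors, with hypothetical page weights.

```python
# Binary visit vectors: users[u][p] = 1 if user u requested page p.
users = {
    "u1": [1, 1, 0, 1, 0],
    "u2": [1, 1, 0, 0, 0],
    "u3": [0, 0, 1, 0, 1],
}
# Hypothetical page weights (e.g. inverse popularity): rare pages count more.
weights = [0.2, 0.3, 0.9, 0.7, 0.8]

def weighted_hamming_similarity(a, b):
    """1 minus the weighted, normalized Hamming distance of two visit vectors."""
    mismatch = sum(w for w, x, y in zip(weights, a, b) if x != y)
    return 1 - mismatch / sum(weights)

print(weighted_hamming_similarity(users["u1"], users["u2"]))  # high: similar users
print(weighted_hamming_similarity(users["u1"], users["u3"]))  # low: dissimilar users
```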

6.
Clustering web pages by structural similarity is an important technique in web data mining. Traditional web page clustering does not define a representation for the center of a page cluster, so computing point-to-cluster and cluster-to-cluster similarities requires computing similarities for many point pairs; such algorithms are generally slower than centroid-based ones and cannot meet the demand for large-scale, fast incremental clustering. To address this problem, this paper proposes FPC (Fast Page Clustering), a fast incremental web page clustering method. It first introduces a new way of computing page similarity that is 500 times faster than the simple tree matching algorithm; it then defines a representation for cluster centers and, on that basis, clusters with MKmeans (Merge-Kmeans), a variant of the K-means algorithm, which improves efficiency at the clustering level; finally, it uses locality-sensitive hashing to quickly find the most similar cluster among a huge number of page clusters, which improves efficiency at the incremental-merge level.
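The abstract does not describe FPC's hashing scheme in detail; as an illustrative sketch of the locality-sensitive-hashing step only, MinHash signatures with banding can retrieve likely-similar clusters without scanning them all. The signature size, band count, and structural feature strings below are assumptions.

```python
import random
from collections import defaultdict

random.seed(0)
N_HASH, BANDS = 20, 5          # 20 MinHash functions grouped into 5 bands of 4 rows
ROWS = N_HASH // BANDS
PRIME = 2_147_483_647
HASHES = [(random.randrange(1, PRIME), random.randrange(0, PRIME))
          for _ in range(N_HASH)]

def minhash(features):
    """MinHash signature of a set of structural features of a page or cluster."""
    return [min((a * hash(f) + b) % PRIME for f in features) for a, b in HASHES]

# LSH index: band -> bucket key -> cluster ids that fell into that bucket.
index = defaultdict(lambda: defaultdict(list))

def add(cluster_id, features):
    sig = minhash(features)
    for band in range(BANDS):
        key = tuple(sig[band * ROWS:(band + 1) * ROWS])
        index[band][key].append(cluster_id)

def candidates(features):
    """Clusters likely to be similar to the query, without a full scan."""
    sig = minhash(features)
    out = set()
    for band in range(BANDS):
        key = tuple(sig[band * ROWS:(band + 1) * ROWS])
        out.update(index[band][key])
    return out

add("news-list", {"div.list", "ul>li>a", "span.date"})
add("product",   {"div.item", "img", "span.price"})
print(candidates({"div.list", "ul>li>a", "span.date", "div.ad"}))  # likely {'news-list'}
```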

7.
A Page Clustering and Mining Algorithm Based on Page Content and Site Structure   (Total citations: 16; self: 0; others: 16)
An improved page clustering algorithm that combines site topology and web page content is proposed. The improved algorithm introduces the content-to-link ratio of a page and the intra-group link degree of a page group, and modifies the formula for computing the support of frequently visited page groups, in order to improve the interestingness of the mining results. Comparison on experimental data shows that the improved algorithm converges better than the ordinary algorithm and discovers frequently visited page groups of higher interestingness.

8.
A Personalized Recommendation System for E-Commerce Web Sites Based on Matrix Clustering   (Total citations: 7; self: 0; others: 7)
A personalized recommendation system for e-commerce web sites based on "matrix clustering" is proposed. By analyzing the page-visit sequences recorded in web server log files, a matrix model of the behavior of high-volume buyers is built; a new "matrix clustering" algorithm is then used to mine the features that potential buyers share with high-volume buyers, helping customers find the product information they want to buy and thereby increasing actual purchases. The technique is particularly suitable for today's large e-commerce web sites, and experimental data show that the system is efficient and widely applicable.

9.
Clustering and association rule mining are two important methods in data mining. We first use clustering to group similar data into the same cluster, and then mine association rules separately within each cluster, whose data volume has already been reduced. This combines the advantages of the two methods and compensates for the shortcomings of using either one alone: more information can be obtained, and the data become easier and more effective to analyze.
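As a minimal sketch of the cluster-then-mine pipeline described above (the transactions, cluster labels, and support threshold are hypothetical), association patterns are mined separately inside each cluster, where the transaction set is much smaller.

```python
from collections import Counter
from itertools import combinations

# Hypothetical transactions already tagged with a cluster id from a prior
# clustering step (e.g. K-means over customer features).
clustered = {
    1: [{"milk", "bread"}, {"milk", "bread", "eggs"}, {"bread", "eggs"}],
    2: [{"phone", "case"}, {"phone", "case", "charger"}, {"case", "charger"}],
}

MIN_SUPPORT = 2   # absolute support within one cluster

def frequent_pairs(transactions):
    """Count item pairs per cluster and keep those meeting the support threshold."""
    counts = Counter()
    for t in transactions:
        for pair in combinations(sorted(t), 2):
            counts[pair] += 1
    return {p: c for p, c in counts.items() if c >= MIN_SUPPORT}

for cid, transactions in clustered.items():
    # Rules are mined per cluster, on a much smaller transaction set.
    print(cid, frequent_pairs(transactions))
```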

10.
A Web Recommendation Service Combining Usage Mining and Content Mining   (Total citations: 10; self: 1; others: 9)
As the infrastructure of the Internet keeps expanding and the information it carries keeps growing, Internet users increasingly feel "lost" in WWW services. Techniques for improving users' access efficiency include page prefetching, dynamic site restructuring, and personalized web recommendation. Most existing personalized web recommendation techniques are data mining methods based only on usage records and take little or no account of page content, which is what users are really interested in. This paper proposes a web recommendation service that combines usage mining and content mining. Based on frequent maximal forward access paths, it introduces the concept of a frequent access path graph containing both navigation pages and content pages, and provides personalized recommendations according to the content relevance between the pages the user has visited most recently within a sliding window and the pages in the candidate recommendation set. An analysis of recommendation quality shows that the method optimizes recommendations well.
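The approach builds on frequent maximal forward access paths. A common way to extract maximal forward references from one click sequence (a sketch of the standard idea, not this paper's full method) is to cut the current path whenever the user moves backward:

```python
def maximal_forward_paths(session):
    """Split a click sequence into maximal forward references: a backward move
    (revisiting an earlier page) closes the current forward path."""
    paths, current = [], []
    for page in session:
        if page in current:                 # backward reference: record and truncate
            paths.append(list(current))
            current = current[:current.index(page) + 1]
        else:
            current.append(page)
    if current:
        paths.append(current)
    return paths

# A, B, C, back to B, then D: two forward paths A-B-C and A-B-D.
print(maximal_forward_paths(["A", "B", "C", "B", "D"]))
# [['A', 'B', 'C'], ['A', 'B', 'D']]
```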

11.
Application of Self-Organizing Maps in Web Structure Mining   (Total citations: 1; self: 0; others: 1)
This paper discusses basic methods of web structure mining with self-organizing maps. A SOM can represent the similarity of data intuitively and perform classification, conveniently supports cluster analysis of the data, and can help find useful information such as authoritative pages in web mining.
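As a hedged illustration of the SOM idea (not this paper's experimental setup), a minimal self-organizing map can be trained on toy document vectors so that similar pages land on nearby grid nodes; the grid size, learning-rate schedule, and vectors are assumptions.

```python
import numpy as np

def train_som(X, rows=3, cols=3, n_iter=500, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal self-organizing map: each grid node holds a weight vector that is
    pulled toward the input, most strongly near the best-matching unit."""
    rng = np.random.default_rng(seed)
    w = rng.random((rows, cols, X.shape[1]))
    grid = np.array([[r, c] for r in range(rows) for c in range(cols)]).reshape(rows, cols, 2)
    for t in range(n_iter):
        x = X[rng.integers(len(X))]
        lr = lr0 * (1 - t / n_iter)
        sigma = sigma0 * (1 - t / n_iter) + 0.1
        # Best-matching unit: grid node with the closest weight vector.
        d = np.linalg.norm(w - x, axis=2)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        # Gaussian neighborhood on the grid pulls nearby nodes toward x.
        g = np.exp(-np.linalg.norm(grid - np.array(bmu), axis=2) ** 2 / (2 * sigma ** 2))
        w += lr * g[..., None] * (x - w)
    return w

# Toy document vectors (e.g. tf-idf of pages); similar pages map to nearby nodes.
X = np.array([[1, 0, 0], [0.9, 0.1, 0], [0, 1, 0], [0, 0.9, 0.1], [0, 0, 1]])
w = train_som(X)
for x in X:
    print(np.unravel_index(np.argmin(np.linalg.norm(w - x, axis=2)), (3, 3)))
```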

12.
A CAFM clustering algorithm is proposed for clustering web pages and users. The algorithm introduces the concept of fuzzy multisets into fuzzy clustering: factors that reflect users' browsing behavior, such as page click counts, dwell time, and user preferences, are combined by fuzzy multisets to characterize users' degree of interest in a site, and a fuzzy multi-similarity matrix is then built from these values and clustered directly. A worked example illustrates the concrete computation and the feasibility of the algorithm.

13.
With the rapid development of the Internet, data on the Web are growing at a tremendous pace. Faced with such massive data, how to find valuable information and apply it to business decision making has become a topic of concern to more and more people. This paper introduces the concept and classification of web data mining, discusses the process and methods of web mining in e-commerce, and shows the broad application prospects of data mining in e-commerce. A data mining system oriented to multiple e-commerce platforms is implemented: facing multiple platforms, it provides a unified data collection and preprocessing process, analyzes users' access logs from the perspectives of web sites, product categories, and products, and further mines users' access data to discover latent regularities, track user behavior, and help enterprises make business decisions, making e-commerce more personalized and better targeted.

14.
Mining linguistic browsing patterns in the world wide web   (Total citations: 2; self: 0; others: 2)
World Wide Web applications have grown very rapidly and have made a significant impact on computer systems. Among them, browsing the web for useful information may be the most common. Due to its tremendous amount of use, efficient and effective web retrieval has become a very important research topic in this field. Data mining is the process of extracting desirable knowledge or interesting patterns from existing databases for a certain purpose. In this paper, we use data mining techniques to discover relevant browsing behavior from log data in web servers, and thus help make rules for the retrieval of web pages. The browsing time of a customer on each web page is used to analyze the retrieval behavior. Since the data collected are numeric, fuzzy concepts are used to process them and to form linguistic terms. A sophisticated web-mining algorithm is thus proposed to find relevant browsing behavior from the linguistic data. Each page uses only the linguistic term with the maximum cardinality in later mining processes, thus making the number of fuzzy regions to be processed the same as the number of pages. Computational time can thus be greatly reduced. The mined patterns exhibit the browsing behavior and can be used to provide appropriate suggestions to web-server managers.
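As a sketch of the fuzzy step described above (the triangular membership functions, linguistic terms, and browsing times are hypothetical), each page keeps only the linguistic term with the maximum cardinality for later mining:

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic terms over browsing time in seconds.
TERMS = {"short": (0, 0, 40), "middle": (20, 60, 100), "long": (80, 140, 10**6)}

def memberships(seconds):
    return {t: tri(seconds, *p) for t, p in TERMS.items()}

# Browsing times of several customers on one page.
times = [15, 35, 70, 90, 120]
# Cardinality of each term = sum of memberships over all customers.
card = {t: sum(memberships(s)[t] for s in times) for t in TERMS}
best = max(card, key=card.get)
print(card)   # fuzzy counts per linguistic term
print(best)   # only this term represents the page in later mining steps
```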

15.
Given the trend toward massive information volumes and complex structures on Internet sites, recommender systems are used to help Internet users find the information they need quickly and conveniently and to foster user loyalty. Web mining techniques have inherent advantages in handling massive and sparse data, so they are being studied and applied more and more widely in recommender systems. The main techniques used in web-mining-based recommender systems include clustering, association rules, and sequential patterns. However, these techniques often cannot achieve both recommendation accuracy and coverage at the same time. Combining these techniques, keeping their strengths and discarding their weaknesses, this paper proposes a new algorithm (the AIR algorithm). A thorough experimental evaluation on real usage data shows that the algorithm clearly improves the overall performance of the recommender system in terms of both accuracy and coverage.

16.
Web Data Mining Technology for Short Texts and Its Application   (Total citations: 4; self: 2; others: 4)
Existing search engine technology returns far too much cluttered information to the user. To address this, a web text mining technique for short texts based on an approximate web page clustering algorithm is proposed. The technique builds a vocabulary according to the user's degree of interest, obtains groups of word-segmentation dictionaries with a fuzzy clustering method, removes duplicate pages with the MD5 algorithm, clusters the remaining pages with the approximate web page clustering algorithm, and ranks the clustering results with a Markov web sequence mining algorithm, so that the user is presented with a sequence of page clusters of interest and can quickly find interesting pages. Experiments show that the algorithm greatly improves search efficiency while maintaining recall and precision. Because the mining targets short texts, the time and space complexity of the studied algorithms is low, so the technique promises to become a practical and effective information retrieval technique.
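As a small illustration of the MD5 de-duplication step (the normalization rule and sample pages are assumptions, not the paper's), exact duplicates can be dropped by fingerprinting the page text:

```python
import hashlib

def md5_fingerprint(html: str) -> str:
    """MD5 digest of the (normalized) page text, used as a duplicate key."""
    normalized = " ".join(html.split()).lower()
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

def deduplicate(pages):
    """Keep only the first page for each fingerprint; exact duplicates are dropped."""
    seen, unique = set(), []
    for url, html in pages:
        fp = md5_fingerprint(html)
        if fp not in seen:
            seen.add(fp)
            unique.append((url, html))
    return unique

pages = [("a.html", "<p>Hello  world</p>"),
         ("b.html", "<p>hello world</p>"),     # same after normalization: dropped
         ("c.html", "<p>Something else</p>")]
print([u for u, _ in deduplicate(pages)])       # ['a.html', 'c.html']
```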

17.
刘先熙 《数字社区&智能家居》2009,5(7):5086-5087,5095
With the rapid spread and development of Internet/Web technology, all kinds of information can be obtained over the network at very low cost. How to find the content users really need among all this information has become a focus of attention for experts and scholars in data organization and related Web fields. Web data mining aims to discover potentially useful knowledge hidden in web data and to provide decision support, and it has become an emerging research hotspot in the data mining field. This paper surveys the basics of web data mining from three aspects: web content mining, web structure mining, and web usage mining.

18.
Web mining is a concept that gathers all techniques, methods and algorithms used to extract information and knowledge from data originating on the web (web data). Part of this field aims to analyze the behavior of users in order to continuously improve both the structure and content of visited web sites. Behind this quite altruistic belief – namely, to help the user feel comfortable when they visit a site through a personalization process – there lies a series of processing methodologies that are at least arguable from the point of view of users' privacy. Thus, an important question arises: to what extent may the desire to improve the services offered through a web site infringe upon the privacy of those who visit it? The use of powerful processing tools such as those provided by web mining may threaten users' privacy. Current legal scholarship on privacy issues suggests a flexible approach that enables the determination, within each particular context, of those behaviors that can threaten individual privacy. However, it has been observed that ICT professionals, with the purpose of formulating practical rules on this matter, hold a very narrow concept of privacy, primarily centered on the dichotomy between personally identifiable information (PII) and anonymous data. The aim of this paper is to adopt an integrative approach based on the distinctive attributes of web mining in order to determine which techniques and uses are harmful.

19.
An Intelligent Information Recommendation Algorithm Based on User Pattern Clustering   (Total citations: 1; self: 0; others: 1)
何波  杨武  张建勋  王越 《计算机工程与设计》2006,27(13):2360-2361,2374
Intelligent information recommendation based on data mining is increasingly becoming an important research topic. To address the shortcomings of existing intelligent recommendation algorithms, an intelligent information recommendation algorithm based on user pattern clustering (IRUMC) is proposed. The algorithm clusters similar user patterns together to generate clustered user patterns, matches the user's access operations against these clustered patterns, and finally forms the recommendation set. It is well suited to new users, users who have visited few sites, and users who need novel information. Experimental results show that the algorithm is effective.

20.
夏斌  徐彬 《电脑开发与应用》2007,20(5):16-17,20
To address the problem that current search engines return so many candidate results that users cannot accurately locate the results relevant to their topic, a method is proposed for clustering search engine results based on hyperlink information: by mining the anchor documents of a page's hyperlinks together with the content of the page itself, web pages are grouped into different subcategories. While clustering according to page content, this method makes full use of web structure and hyperlink information, so it captures the content characteristics of a site's documents better than traditional structure mining methods and thus improves clustering accuracy.
