Found 17 similar documents. Search took 140 ms.
1.
《计算机应用与软件》2016,(11)
To address the data sparsity, inaccurate classification, and low efficiency of current naive Bayes text classification algorithms, a MapReduce-based Dirichlet naive Bayes text classification algorithm is proposed. The algorithm first adjusts feature-word weights according to semantic factors and within-class distribution, and uses this to revise the corresponding computation formula; it then introduces the Dirichlet data smoothing method from statistical language modeling to reduce the impact of data sparsity on classification performance, and parallelizes the algorithm with the MapReduce programming model on the Hadoop cloud computing platform. Comparative experiments show that the algorithm significantly improves the precision and recall of the traditional naive Bayes text classifier, and offers good scalability and data-processing capacity. Similar documents
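The Dirichlet smoothing step borrowed from statistical language modeling can be sketched as follows. This is a minimal illustration, not the paper's implementation; the pseudo-count `mu` and the count structures are assumptions:

```python
from collections import Counter

def dirichlet_prob(word, class_counts, collection_counts, mu=100.0):
    """P(word | class) with Dirichlet smoothing: interpolate the class's
    own word counts with the collection-wide word distribution, so that
    words unseen in the class still get a nonzero probability."""
    total_class = sum(class_counts.values())
    total_coll = sum(collection_counts.values())
    p_coll = collection_counts.get(word, 0) / total_coll
    return (class_counts.get(word, 0) + mu * p_coll) / (total_class + mu)
```

The smoothed probabilities still sum to one over the vocabulary, and unseen words fall back to their collection-wide frequency.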
2.
3.
As one of the core technologies of the open-source cloud computing platform, the MapReduce job-processing framework and its job-scheduling algorithm are crucial to overall system performance, and data locality is an important criterion for judging a scheduling algorithm. This paper first introduces and analyzes the basic principles of MapReduce, its job-processing and job-scheduling mechanisms, and their strengths and weaknesses with respect to data locality. Then, to address the native scheduler's incomplete handling of data locality, and drawing on the feasibility and advantages of data prefetching, it designs and implements a Hadoop MapReduce job-scheduling algorithm based on resource prefetching that makes job execution more efficient. Similar documents
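The locality-first-plus-prefetch idea can be sketched roughly as follows. This is a toy sketch of the concept, not the paper's scheduler; the `split` and `replica_nodes` field names are illustrative assumptions:

```python
def schedule_next(tasks, node, prefetch_queue):
    """Locality-first scheduling sketch: run a map task whose input split
    already has a replica on this node; otherwise take the head-of-line
    task and enqueue a prefetch of its split so the data arrives early."""
    for task in tasks:
        if node in task["replica_nodes"]:
            return task, False  # local read, no prefetch needed
    task = tasks[0]
    prefetch_queue.append((task["split"], node))  # fetch data ahead of execution
    return task, True
```

A real implementation would hook this into the TaskTracker heartbeat and overlap the prefetch with currently running tasks.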
4.
To reduce the time overhead and improve the accuracy of multi-classifier fusion, this paper proposes PMCF-SE, a parallel multi-classifier fusion method based on selective ensembles that combines an improved Bagging method with MapReduce. The method runs on the MapReduce parallel computing architecture: in the Map phase, base classifiers with good classification performance are selected; in the Reduce phase, the most diverse of these base classifiers are chosen and then fused with Dempster-Shafer (D-S) evidence theory. Experimental results show that the method runs faster in a cluster environment than on a single machine, and that PMCF-SE achieves higher classification accuracy than Bagging across different numbers of base classifiers. Similar documents
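The two selection phases can be sketched in miniature. This is an illustrative single-machine sketch under assumed thresholds; pairwise disagreement stands in for the paper's diversity measure, and D-S evidence fusion of the survivors is omitted:

```python
def map_select(classifiers, X, y, acc_threshold=0.7):
    # Map-phase sketch: keep base classifiers whose accuracy passes a threshold.
    kept = []
    for clf in classifiers:
        acc = sum(clf(x) == t for x, t in zip(X, y)) / len(y)
        if acc >= acc_threshold:
            kept.append(clf)
    return kept

def reduce_diverse(classifiers, X, min_disagreement=0.1):
    # Reduce-phase sketch: greedily keep classifiers that disagree enough
    # with those already chosen (near-duplicates add nothing to the fusion).
    chosen = []
    for clf in classifiers:
        def disagreement(other):
            return sum(clf(x) != other(x) for x in X) / len(X)
        if all(disagreement(c) >= min_disagreement for c in chosen):
            chosen.append(clf)
    return chosen
```

Selecting first on accuracy and then on diversity is what keeps the fused ensemble both small and effective.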
5.
With the growth of the Internet, data of all kinds, including text, has exploded, and classification algorithms face new challenges on massive datasets. To overcome the shortcomings of the traditional naive Bayes classification algorithm in the face of these challenges, keywords are weighted to improve classification accuracy, and the naive Bayes algorithm is then implemented on the Hadoop platform via the MapReduce programming model. Experiments show that the parallelized naive Bayes classification algorithm performs well on a Hadoop cluster and exhibits reliable scalability. Similar documents
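The core of training naive Bayes under MapReduce is a word-count-style job. A minimal single-process sketch of the two phases, with the shuffle simulated by a flat list of key-value pairs:

```python
from collections import defaultdict

def map_doc(label, text):
    # Map sketch: emit ((class, word), 1) for every token in a labeled document.
    for word in text.split():
        yield (label, word), 1

def reduce_counts(pairs):
    # Reduce sketch: sum the counts per (class, word) key; these counts
    # feed the naive Bayes likelihood estimates.
    counts = defaultdict(int)
    for key, n in pairs:
        counts[key] += n
    return dict(counts)
```

On a real cluster the framework groups pairs by key between the two phases; here the list comprehension plays that role.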
6.
For high-dimensional, hierarchically structured big datasets, a parallel shell-fragment cube construction algorithm based on the Map/Reduce framework is proposed. The algorithm uses Map/Reduce to parallelize both construction and querying of shell-fragment cubes: in the Map phase, the construction algorithm computes all possible data cells or hierarchical-dimension encoding prefixes for each data block; in the Reduce phase, aggregation produces the final shell fragments and measure index tables. Experiments show that the parallel shell-fragment cube algorithm combines the parallelism and scalability of Map/Reduce with the compression strategy and inverted-index mechanism of shell-fragment cubes, effectively avoiding the explosive growth in data volume when materializing high-dimensional data while supporting fast construction and query operations. Similar documents
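The Map/Reduce split described above can be sketched for the simplest case. This toy assumes flat dimensions (no hierarchy encoding) and size-1 fragments; the cell and index shapes are illustrative, not the paper's exact layout:

```python
from collections import defaultdict
from itertools import combinations

def map_fragments(tid, row, frag_size=1):
    # Map sketch: emit every size-`frag_size` dimension-value cell of this tuple,
    # keyed so the reducer can group identical cells across blocks.
    for dims in combinations(range(len(row)), frag_size):
        yield tuple((d, row[d]) for d in dims), tid

def reduce_fragments(pairs):
    # Reduce sketch: aggregate cells into an inverted index mapping each
    # fragment cell to the list of tuple ids that fall in it.
    index = defaultdict(list)
    for cell, tid in pairs:
        index[cell].append(tid)
    return dict(index)
```

Queries over the shell then intersect the tid-lists of the requested cells instead of scanning the raw data.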
7.
《计算机应用与软件》2015,(7)
Based on a study of text sentiment classification and the characteristics of microblog text (short length, rich emoticons, and abundant Internet slang), this paper proposes a distributed naive Bayes algorithm based on Map/Reduce suited to Chinese microblog sentiment classification. The algorithm extracts sentiment feature attributes using a sentiment lexicon built specifically for microblog text, aiming for good classification performance. Experimental results show that the method is well suited to microblog sentiment classification, achieves good classification results, and meets the feasibility and efficiency requirements of processing massive microblog text data. Similar documents
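Lexicon-based feature extraction of the kind described can be sketched simply; the tiny lexicons below (including emoticons) are toy assumptions, and a real microblog lexicon would cover slang terms as well:

```python
def lexicon_features(text, pos_lex, neg_lex):
    # Count positive/negative lexicon hits (emoticons included) as the
    # feature attributes fed to a naive Bayes sentiment classifier.
    tokens = text.split()
    return {"pos": sum(t in pos_lex for t in tokens),
            "neg": sum(t in neg_lex for t in tokens)}
```

Reducing a short post to a few lexicon-hit counts is what makes the approach robust to the brevity of microblog text.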
8.
Text classification is an important foundation of information retrieval and text mining, and naive Bayes is a simple yet efficient classification algorithm applicable to it. However, its assumptions that attributes are independent and equally important do not hold in practice, which hurts its classification performance; overcoming these assumptions to further improve accuracy is a difficult problem for naive Bayes text classification. Based on the characteristics of text classification and the theory of mutual information, this paper proposes a mutual-information-based feature-weighted naive Bayes text classification method, which uses mutual information to weight feature terms separately within each class, partly eliminating the impact of the assumptions on classification performance. Simulation experiments on the UCI KDD dataset verify the method's effectiveness. Similar documents
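A standard way to score a term's association with a class is pointwise mutual information over document counts. This is one illustrative form of such a weight; the paper's exact weighting formula may differ:

```python
import math

def mi_weight(n_tc, n_t, n_c, n):
    """Pointwise mutual information of term t with class c from document
    counts: n_tc docs of class c containing t, n_t docs containing t,
    n_c docs of class c, n docs total. Positive when t and c co-occur
    more often than independence would predict."""
    if n_tc == 0:
        return 0.0
    return math.log((n_tc / n) / ((n_t / n) * (n_c / n)))
```

Weighting each term per class this way lets informative terms count for more than the equal-importance assumption would allow.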
9.
Bagging is a popular ensemble learning algorithm. This work adopts Attribute Bagging, an improved variant of Bagging, as the classification algorithm: multiple training sets are obtained by resampling attributes, and a Chinese text classifier is designed with kNN as the weak classifier. Experimental results show that Attribute Bagging achieves better classification accuracy than Bagging. Similar documents
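The Attribute Bagging scheme (resample attribute subsets, vote over kNN members) can be sketched as follows; the subset size, round count, and 1-NN voting are illustrative choices, not the paper's settings:

```python
import random
from collections import Counter

def knn_predict(train, x, feats, k=1):
    # kNN on a chosen attribute subset (squared Euclidean distance).
    ranked = sorted(train, key=lambda s: sum((s[0][f] - x[f]) ** 2 for f in feats))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

def attribute_bagging(train, x, n_attrs, subset_size, rounds=5, seed=0):
    # Attribute Bagging sketch: draw a random attribute subset per round,
    # classify with kNN on that subset, and majority-vote the predictions.
    rng = random.Random(seed)
    preds = [knn_predict(train, x, rng.sample(range(n_attrs), subset_size))
             for _ in range(rounds)]
    return Counter(preds).most_common(1)[0][0]
```

Resampling attributes rather than instances is what distinguishes this from ordinary Bagging, and it suits high-dimensional text features well.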
10.
11.
12.
Automatic keyword extraction is an important research direction in text mining, natural language processing and information retrieval. Keyword extraction enables us to represent text documents in a condensed way, and this compact representation can be helpful in several applications, such as automatic indexing, automatic summarization, automatic classification, clustering and filtering. For instance, text classification is a domain with a high-dimensional feature space challenge; hence, extracting the most important/relevant words about the content of a document and using these keywords as the features can be extremely useful. In this regard, this study examines the predictive performance of five statistical keyword extraction methods (most-frequent-measure-based keyword extraction, term frequency-inverse sentence frequency based keyword extraction, co-occurrence statistical information based keyword extraction, eccentricity-based keyword extraction and the TextRank algorithm) on classification algorithms and ensemble methods for scientific text document classification (categorization). A comprehensive comparison of base learning algorithms (Naïve Bayes, support vector machines, logistic regression and Random Forest) with five widely utilized ensemble methods (AdaBoost, Bagging, Dagging, Random Subspace and Majority Voting) is conducted. To the best of our knowledge, this is the first empirical analysis that evaluates the effectiveness of statistical keyword extraction methods in conjunction with ensemble learning algorithms. The classification schemes are compared in terms of classification accuracy, F-measure and area-under-curve values, and a two-way ANOVA test is employed to validate the empirical analysis. The experimental analysis indicates that a Bagging ensemble of Random Forest with the most-frequent-based keyword extraction method yields promising results for text classification.
For the ACM document collection, the highest average predictive performance (93.80%) is obtained with the most-frequent-based keyword extraction method and a Bagging ensemble of the Random Forest algorithm. In general, Bagging and Random Subspace ensembles of Random Forest yield promising results. The empirical analysis indicates that keyword-based representation of text documents in conjunction with ensemble learning can enhance the predictive performance and scalability of text classification schemes, which is of practical importance in the application fields of text classification. Similar documents
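The best-performing extraction method here, ranking terms by raw frequency, is also the simplest to sketch; the stopword list and cleanup rules below are toy assumptions:

```python
from collections import Counter

def most_frequent_keywords(text, k=5, stopwords=frozenset({"the", "a", "of", "and"})):
    # Rank terms by raw frequency after lowercasing and light punctuation
    # cleanup; the top-k terms become the document's feature set.
    words = (w.lower().strip(".,;:") for w in text.split())
    counts = Counter(w for w in words if w and w not in stopwords)
    return [w for w, _ in counts.most_common(k)]
```

Feeding only these top-k terms to the ensemble classifier is what tames the high-dimensional feature space the abstract mentions.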
13.
R. Krishnaswamy Kamalraj Subramaniam V. Nandini K. Vijayalakshmi Seifedine Kadry Yunyoung Nam 《计算机系统科学与工程》2023,44(1):391-406
Recently, a massive quantity of data is being produced from a distinct number of sources, and the volume of data created daily on the Internet has crossed two exabytes. At the same time, clustering is one of the efficient techniques for mining big data to extract the useful and hidden patterns that exist in it, and density-based clustering techniques have gained significant attention because they help to effectively recognize complex patterns in spatial datasets. Big data clustering is a non-trivial process owing to the increasing quantity of data, which can be addressed by the use of the MapReduce tool. With this motivation, this paper presents an efficient MapReduce-based hybrid density-based clustering and classification algorithm for big data analytics (MR-HDBCC). The proposed MR-HDBCC technique is executed on the MapReduce tool for handling big data and involves three distinct processes, namely pre-processing, clustering, and classification. The proposed model utilizes the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) technique, which is capable of detecting arbitrary shapes and diverse clusters in noisy data. For improving the performance of the DBSCAN technique, a hybrid model using the cockroach swarm optimization (CSO) algorithm is developed to explore the search space and determine the optimal parameters for density-based clustering. Finally, a bidirectional gated recurrent neural network (BGRNN) is employed for the classification of big data. The experimental validation of the proposed MR-HDBCC technique takes place using benchmark datasets, and the simulation outcomes demonstrate the promising performance of the proposed model in terms of different measures. Similar documents
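The DBSCAN core underlying MR-HDBCC can be sketched on a single machine; the CSO parameter search and the MapReduce/BGRNN stages are omitted, and `eps`/`min_pts` are chosen by hand here rather than optimized:

```python
def dbscan(points, eps, min_pts):
    # Minimal DBSCAN sketch: label -1 marks noise, cluster ids start at 0.
    def neighbors(i):
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps ** 2]
    labels = [None] * len(points)
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        if len(neighbors(i)) < min_pts:
            labels[i] = -1  # provisionally noise; may later become a border point
            continue
        labels[i] = cluster
        seeds = neighbors(i)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise reached from a core point: border
            if labels[j] is None:
                labels[j] = cluster
                if len(neighbors(j)) >= min_pts:
                    seeds.extend(neighbors(j))  # expand only through core points
        cluster += 1
    return labels
```

Because cluster membership is decided by density reachability rather than centroids, the algorithm finds the arbitrarily shaped clusters the abstract refers to.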
14.
MapReduce is a widely used parallel computing framework and an important component of the Hadoop platform, consisting mainly of a Map function and a Reduce function. The Map function outputs key-value pairs as input to Reduce; because this input is dynamic, the amount of input handled by the Reduce tasks on different hosts is unbalanced, and balancing the Reduce load is thus an important direction in MapReduce optimization. This paper first samples the overall data, analyzing a modest sample to obtain a reliable key distribution at low cost; it then proposes a greedy algorithm to partition the data in place of the Hadoop platform's default hash algorithm, achieving Reduce load balance. The main idea of the greedy algorithm is to compute, from the sampled data, the sum of all key frequencies averaged over the number of Reduce nodes, and then assign each Reduce task a load close to that average, achieving overall balance. Simulation experiments show that, compared with the default hash partitioning algorithm, the proposed algorithm saves 10.6% in running time and achieves better load balance. Similar documents
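A greedy partitioner in this spirit can be sketched as follows. This heaviest-key-to-lightest-reducer variant is a stand-in that keeps every reducer near the average load; the paper's exact assignment order may differ:

```python
def greedy_partition(key_freqs, n_reducers):
    # Greedy sketch: place each sampled key (heaviest first) on the
    # currently lightest reducer, replacing the default hash partitioner.
    loads = [0] * n_reducers
    assignment = {}
    for key, freq in sorted(key_freqs.items(), key=lambda kv: -kv[1]):
        r = loads.index(min(loads))
        assignment[key] = r
        loads[r] += freq
    return assignment, loads
```

Because the frequencies come from a sample, the partition table is computed once up front and then consulted by every mapper, just as a custom Hadoop `Partitioner` would be.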
15.
张家瑞 《网络安全技术与应用》2014,(11):49-49
With continuing economic development, China has entered the information age, which has greatly facilitated people's daily lives. To keep pace with the times, MapReduce data technology emerged. MapReduce combines Map (mapping) and Reduce (reduction), originally simple constructs of functional programming languages that were later adopted in modern programming models; it also has certain characteristics of vector programming languages, and these programming models spare developers from having to hand-write distributed programs. This paper analyzes the drawbacks of the MapReduce data mining platform, studies its characteristics, and finally explores the MapReduce data mining platform comprehensively. Similar documents
16.