Similar Documents
16 similar documents found.
1.
The naive Bayes classification algorithm is simple and efficient, but its assumption of strong independence among attributes limits its range of application. To address this problem, an improved attribute-selection-based weighted naive Bayes classification algorithm (ASWNBC) is proposed. The algorithm combines correlation-based feature selection (CFS) with weighted naive Bayes classification (WNBC): CFS is first used to obtain an attribute subset so that the reduced attribute set satisfies conditional independence as far as possible; at the same time, new weights are designed as the algorithm's weighting coefficients according to how different attribute values affect the classification result; finally, ASWNBC is used for classification. Experimental results show that the algorithm reduces classification time while improving classification accuracy, effectively enhancing the performance of naive Bayes classification.
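As an illustration of the attribute-selection step described above, the following is a minimal Python sketch of greedy forward selection with the classic CFS merit, using normalized mutual information between discrete attributes as the correlation measure. All function names are my own and the procedure is a simplification, not the exact algorithm of the cited paper; discrete (categorical) attributes are assumed.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def cfs_forward_selection(X, y):
    """Greedy forward selection with the CFS merit:
    merit(S) = k * mean(feature-class MI) / sqrt(k + k*(k-1)*mean(feature-feature MI)).
    Sketch only; assumes discrete attribute values in X."""
    n_feat = X.shape[1]
    fc = np.array([mutual_info_score(X[:, j], y) for j in range(n_feat)])
    ff = np.array([[mutual_info_score(X[:, i], X[:, j]) for j in range(n_feat)]
                   for i in range(n_feat)])
    selected, remaining, best_merit = [], list(range(n_feat)), 0.0
    while remaining:
        merits = []
        for j in remaining:
            S = selected + [j]
            k = len(S)
            r_cf = fc[S].mean()
            r_ff = ff[np.ix_(S, S)][np.triu_indices(k, 1)].mean() if k > 1 else 0.0
            merits.append(k * r_cf / np.sqrt(k + k * (k - 1) * r_ff))
        if max(merits) <= best_merit:          # stop when the merit no longer improves
            break
        best_merit = max(merits)
        best_j = remaining[int(np.argmax(merits))]
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```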

2.
An Attribute-Weighted Naive Bayes Classification Algorithm
Naive Bayes classification is a simple and efficient method, but its attribute independence assumption harms its classification performance. Relaxing the naive Bayes assumption can improve classification, but usually at a large increase in computational cost. An attribute-weighted naive Bayes algorithm is proposed, which improves classifier performance by weighting attributes; the weighting parameters are learned directly from the training data. A weight can be interpreted as the degree to which a given attribute value influences the posterior probability of a class. Experimental results show that the algorithm is feasible and effective.
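To show how such attribute weights enter the decision rule, here is a minimal Python sketch of a categorical naive Bayes classifier scoring score(c | x) = log P(c) + sum_i w_i * log P(x_i | c). The weight-learning step of the cited paper is not shown; the weights are assumed to be supplied, and the class and method names are illustrative only.

```python
import numpy as np

class WeightedNaiveBayes:
    """Categorical NB whose conditional log-probabilities are scaled by
    per-attribute weights. Sketch only; weights are given, not learned here."""

    def fit(self, X, y, weights, alpha=1.0):
        self.classes_ = np.unique(y)
        self.weights_ = np.asarray(weights, dtype=float)
        self.log_prior_ = np.log(np.array([(y == c).mean() for c in self.classes_]))
        self.tables_ = []                      # Laplace-smoothed P(x_j = v | c)
        for j in range(X.shape[1]):
            values = np.unique(X[:, j])
            table = {}
            for c in self.classes_:
                col = X[y == c, j]
                table[c] = {v: (np.sum(col == v) + alpha) / (len(col) + alpha * len(values))
                            for v in values}
            self.tables_.append(table)
        return self

    def predict(self, X):
        preds = []
        for x in X:
            scores = self.log_prior_.copy()
            for ci, c in enumerate(self.classes_):
                for j, v in enumerate(x):
                    p = self.tables_[j][c].get(v, 1e-9)   # unseen value -> tiny prob
                    scores[ci] += self.weights_[j] * np.log(p)
            preds.append(self.classes_[int(np.argmax(scores))])
        return np.array(preds)
```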

3.
To address three constraints and limitations of the naive Bayes algorithm, an optimized Bayesian algorithm for data with missing values is proposed. The algorithm computes the grey relational degree between every pair of attributes and uses it to merge correlated attributes, delete redundant attributes, and weight attributes. An improved EM algorithm, guided by the grey relational degrees, is then run to impute the missing data, and the processed data set is finally classified with the naive Bayes algorithm. Experimental results verify the effectiveness of the optimized algorithm.

4.
To classify electrical accidents quickly, accurately and dynamically, a naive Bayes method for electrical accident classification that combines instance weighting and attribute weighting (AIWNB) is proposed. The prior and conditional probabilities in naive Bayes are improved with two kinds of instance weights: eager instance weights are determined from the frequency statistics of the attribute values, while lazy instance weights are determined by computing, instance by instance, the correlation between each training instance and the test instance. Attribute weights are defined, based on mutual information, as the residual between attribute-attribute correlation and attribute-class correlation. The proposed AIWNB method integrates attribute weighting and instance weighting within a unified naive Bayes framework and is validated on measured electrical data from high- and low-voltage customers. Experimental results show that, compared with naive Bayes, the weighted method is more competitive, improving accuracy by 3.09% and F1 score by 9.39%, which demonstrates the practicality and effectiveness of AIWNB for electrical accident classification; the method can also be extended to other classification settings.

5.
It is difficult to obtain large labeled training sets for the naive Bayes classifier, and traditional Bayesian classification methods must relearn previously learned samples whenever new training samples arrive, which is time-consuming. To this end, an incremental learning method is introduced, and on this basis an attribute-weighted naive Bayes algorithm is proposed, which improves classifier performance through attribute weighting; the weighting parameters are learned directly from the training data. Experimental results on UCI data sets recommended by Weka show that the algorithm is feasible and effective.

6.
A Rough-Set-Based Weighted Naive Bayes Classification Algorithm
Naive Bayes is a simple and efficient classification algorithm, but its conditional independence assumption does not hold in practice, which to some extent harms its classification performance. Weighted naive Bayes is an extension of it. Based on the rough set theory of attribute significance, a rough-set-based weighted naive Bayes classification method is proposed, and attribute weights are derived from the algebraic view, the information view, and a combination of the two. Simulation experiments on UCI data sets verify the effectiveness of the method.
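The following Python sketch illustrates the algebraic-view attribute significance mentioned above: the weight of an attribute is the drop in the rough-set dependency degree when that attribute is removed. It is a simplified illustration under the assumption of discrete attributes; the information-view variant is analogous but not shown.

```python
import numpy as np
from collections import defaultdict

def dependency_degree(X, y, attrs):
    """gamma_C(D) = |POS_C(D)| / |U|: fraction of objects whose equivalence class
    under the attributes `attrs` maps to a single decision value."""
    groups = defaultdict(list)
    for idx, row in enumerate(X):
        groups[tuple(row[list(attrs)])].append(idx)
    pos = sum(len(members) for members in groups.values()
              if len({y[m] for m in members}) == 1)
    return pos / len(X)

def rough_set_attribute_weights(X, y):
    """Algebraic-view significance sig(a) = gamma_C(D) - gamma_{C-{a}}(D),
    rescaled so the weights average to 1. Sketch only."""
    all_attrs = list(range(X.shape[1]))
    gamma_all = dependency_degree(X, y, all_attrs)
    sig = np.array([gamma_all - dependency_degree(X, y, [a for a in all_attrs if a != j])
                    for j in all_attrs])
    sig = sig + 1e-6                       # keep strictly positive weights
    return sig * len(all_attrs) / sig.sum()
```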

7.
Naive Bayes classification is a simple and efficient method, but its attribute independence assumption harms its classification performance. To address this problem, this paper proposes an attribute-weighted naive Bayes classification algorithm. By analyzing the correlations among attributes, the correlation coefficient between each condition attribute and the decision attribute is computed and combined with the concept of mutual information from information theory to obtain new weights, so that different condition attributes receive different weights. This effectively improves the classification performance of naive Bayes while preserving its simplicity. Experimental results show that the method is feasible and effective.
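As a small illustration of the mutual-information part of such a weighting scheme, the sketch below weights each discrete condition attribute by its mutual information with the decision attribute, normalized to average 1. This is a simplification of my own (the cited paper also uses a correlation coefficient); the resulting weights could be plugged into the weighted decision rule sketched under entry 2.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mutual_info_weights(X, y):
    """Weight each (discrete) condition attribute by its mutual information
    with the decision attribute; weights are normalized to average 1."""
    mi = np.array([mutual_info_score(X[:, j], y) for j in range(X.shape[1])])
    mi = mi + 1e-9                 # avoid an all-zero weight vector
    return mi * len(mi) / mi.sum()
```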

8.
Classification algorithms have long been a research focus in data mining, and naive Bayes is one of many excellent classification algorithms, but its requirement that condition attributes be independent limits its applicability. To improve the algorithm from another angle and raise its classification performance, a locally weighted naive Bayes classification algorithm based on the K-nearest-neighbor method is proposed. The K-nearest-neighbor method is used to weight attributes and find the most suitable weights, and the weighted naive Bayes classifier is then used for classification. Experiments show that the algorithm improves the reliability and accuracy of classification.
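To make the local-learning idea concrete, here is a Python sketch that fits a naive Bayes model only on the K nearest training neighbors of each test instance. Note this illustrates the general locally weighted approach rather than the cited paper's specific scheme, which uses the neighbors to choose attribute weights; numeric attributes and the function name are my own assumptions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import NearestNeighbors

def knn_local_nb_predict(X_train, y_train, X_test, k=30):
    """For every test instance, fit Gaussian NB on its k nearest training
    neighbors only. Sketch of local learning around each query point."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    preds = []
    for x in X_test:
        _, idx = nn.kneighbors(x.reshape(1, -1))
        local = idx[0]
        if len(np.unique(y_train[local])) == 1:   # neighborhood is pure
            preds.append(y_train[local][0])
            continue
        model = GaussianNB().fit(X_train[local], y_train[local])
        preds.append(model.predict(x.reshape(1, -1))[0])
    return np.array(preds)
```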

9.
An Attribute-Weighted Naive Bayes Ensemble Classifier
To improve the classification accuracy and generalization ability of the naive Bayes classifier, a weighted Bayes ensemble method based on attribute relevance (WEBNC) is proposed. Each condition attribute is assigned a weight according to its relevance to the decision attribute, and AdaBoost is then used to train the attribute-weighted BNC. The method was tested on 16 UCI benchmark data sets and compared with BNC, Bayesian networks, and a BNC trained with AdaBoost. Experimental results show that the classifier achieves higher classification accuracy and better generalization ability.
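The boosting half of such an ensemble can be reproduced with scikit-learn, as in the short sketch below: AdaBoost over naive Bayes base learners. The attribute-weighting step of WEBNC is omitted, and the data here are random placeholders.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# AdaBoost with naive Bayes base learners (the ensemble half of WEBNC only).
# Placeholder data; `estimator=` is `base_estimator=` in scikit-learn < 1.2.
X, y = np.random.rand(200, 5), np.random.randint(0, 2, 200)
ensemble = AdaBoostClassifier(estimator=GaussianNB(), n_estimators=50)
print(cross_val_score(ensemble, X, y, cv=5).mean())
```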

10.
The naive Bayes classifier is a widely used, simple and effective classification algorithm, but its conditional independence assumption (the "naive Bayes assumption") differs from reality and limits its classification accuracy. To weaken this assumption, an improved bat algorithm is used to optimize the naive Bayes classifier. The improved bat algorithm introduces a tabu-search mechanism and a random perturbation operator to avoid local optima and speed up convergence. It automatically searches for a weight for each attribute; by assigning different weights to different attributes, the class independence assumption is weakened and the accuracy of the naive Bayes classifier is improved without a large increase in computational cost. Experimental results show that the algorithm classifies more accurately than traditional naive Bayes and the newly weighted Bayes classification algorithm of reference [6].

11.
Due to being fast, easy to implement and relatively effective, some state-of-the-art naive Bayes text classifiers with the strong assumption of conditional independence among attributes, such as multinomial naive Bayes, complement naive Bayes and the one-versus-all-but-one model, have received a great deal of attention from researchers in the domain of text classification. In this article, we revisit these naive Bayes text classifiers and empirically compare their classification performance on a large number of widely used text classification benchmark datasets. Then, we propose a locally weighted learning approach to these naive Bayes text classifiers. We call our new approach locally weighted naive Bayes text classifiers (LWNBTC). LWNBTC weakens the attribute conditional independence assumption made by these naive Bayes text classifiers by applying the locally weighted learning approach. The experimental results show that our locally weighted versions significantly outperform these state-of-the-art naive Bayes text classifiers in terms of classification accuracy.

12.
Due to its simplicity, efficiency and efficacy, naive Bayes (NB) continues to be one of the top 10 data mining algorithms. A mass of improved approaches to NB have been proposed to weaken its conditional independence assumption. However, there has been little work, up to the present, on instance weighting filter approaches to NB. In this paper, we propose a simple, efficient, and effective instance weighting filter approach to NB. We call it attribute (feature) value frequency-based instance weighting and denote the resulting improved model as attribute value frequency weighted naive Bayes (AVFWNB). In AVFWNB, the weight of each training instance is defined as the inner product of its attribute value frequency vector and the attribute value number vector. The experimental results on 36 widely used classification problems show that AVFWNB significantly outperforms NB, yet at the same time maintains the computational simplicity that characterizes NB.
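The instance weight described in this abstract can be sketched directly in Python: the frequency of each of the instance's attribute values, dotted with the number of distinct values per attribute. This is a literal reading of the abstract rather than the paper's exact formulation, and the normalization and helper name are my own; integer-coded categorical attributes are assumed for the usage note.

```python
import numpy as np
from sklearn.naive_bayes import CategoricalNB

def avf_instance_weights(X):
    """weight(x) = <attribute-value frequency vector of x, attribute value-count vector>,
    normalized to have mean 1. Sketch following the abstract's description."""
    n, d = X.shape
    value_counts = np.array([len(np.unique(X[:, j])) for j in range(d)], dtype=float)
    freqs = np.empty((n, d))
    for j in range(d):
        vals, counts = np.unique(X[:, j], return_counts=True)
        lookup = dict(zip(vals, counts))
        freqs[:, j] = [lookup[v] for v in X[:, j]]
    w = freqs @ value_counts
    return w * n / w.sum()

# Usage (assumes integer-coded categorical attributes):
# weights = avf_instance_weights(X_train)
# model = CategoricalNB().fit(X_train, y_train, sample_weight=weights)
```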

13.
Technical Note: Naive Bayes for Regression
Frank, Eibe; Trigg, Leonard; Holmes, Geoffrey; Witten, Ian H. Machine Learning, 2000, 41(1): 5-25
Despite its simplicity, the naive Bayes learning scheme performs well on most classification tasks, and is often significantly more accurate than more sophisticated methods. Although the probability estimates that it produces can be inaccurate, it often assigns maximum probability to the correct class. This suggests that its good performance might be restricted to situations where the output is categorical. It is therefore interesting to see how it performs in domains where the predicted value is numeric, because in this case, predictions are more sensitive to inaccurate probability estimates. This paper shows how to apply the naive Bayes methodology to numeric prediction (i.e., regression) tasks by modeling the probability distribution of the target value with kernel density estimators, and compares it to linear regression, locally weighted linear regression, and a method that produces model trees—decision trees with linear regression functions at the leaves. Although we exhibit an artificial dataset for which naive Bayes is the method of choice, on real-world datasets it is almost uniformly worse than locally weighted linear regression and model trees. The comparison with linear regression depends on the error measure: for one measure naive Bayes performs similarly, while for another it is worse. We also show that standard naive Bayes applied to regression problems by discretizing the target value performs similarly badly. We then present empirical evidence that isolates naive Bayes' independence assumption as the culprit for its poor performance in the regression setting. These results indicate that the simplistic statistical assumption that naive Bayes makes is indeed more restrictive for regression than for classification.
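The core idea of naive Bayes for regression with kernel density estimators can be sketched as follows in Python: estimate p(y) and p(x_i | y) with Gaussian kernels over the training data, form p(y | x) on a grid of target values, and predict the posterior mean. This is a minimal sketch under simplifying assumptions (a single shared bandwidth, no bandwidth selection), not the paper's implementation.

```python
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

def nb_regression_predict(X_train, y_train, x_query, bandwidth=1.0, grid_size=200):
    """Naive-Bayes-style regression: p(y | x) proportional to p(y) * prod_i p(x_i | y),
    with all densities estimated by Gaussian kernels. Returns the posterior mean of y."""
    y_grid = np.linspace(y_train.min(), y_train.max(), grid_size)
    # kernel weight of each training point at each grid value of y: shape (grid, n)
    ky = gaussian_kernel((y_grid[:, None] - y_train[None, :]) / bandwidth)
    posterior = ky.mean(axis=1)                                   # prior p(y) on the grid
    for i in range(X_train.shape[1]):
        kx = gaussian_kernel((x_query[i] - X_train[:, i]) / bandwidth)        # shape (n,)
        # conditional density p(x_i | y) via a Nadaraya-Watson-style estimate
        cond = (ky * kx[None, :]).sum(axis=1) / (ky.sum(axis=1) + 1e-12)
        posterior *= cond
    posterior /= posterior.sum() + 1e-12
    return float((y_grid * posterior).sum())
```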

14.
Decision Tree Classification Based on Naive Bayes and the ID3 Algorithm
Building on the naive Bayes and ID3 algorithms, an improved decision tree classification algorithm is proposed. An objective attribute importance parameter is introduced, a weakened form of the naive Bayes conditional independence assumption is given, and weighted independent information entropy is used as the criterion for selecting splitting attributes. Theoretical analysis and experimental results show that the improved algorithm alleviates, to a certain degree, ID3's bias toward multi-valued attributes, while offering high execution efficiency and classification accuracy.
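The general idea of an importance-weighted split criterion can be sketched as follows in Python: score each attribute by its information gain scaled by an externally supplied importance weight and split on the best one. This only illustrates the idea; the cited paper's "weighted independent information entropy" is more specific, and the `importance` vector is assumed to come from a naive-Bayes-based relevance measure.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def weighted_gain_split(X, y, importance):
    """Pick the split attribute maximizing importance[j] * InfoGain(j).
    Sketch of importance-weighted attribute selection for an ID3-style tree."""
    base = entropy(y)
    scores = []
    for j in range(X.shape[1]):
        cond = 0.0
        for v in np.unique(X[:, j]):
            mask = X[:, j] == v
            cond += mask.mean() * entropy(y[mask])
        scores.append(importance[j] * (base - cond))
    return int(np.argmax(scores))
```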

15.
In real-world data mining applications, it is often the case that unlabeled instances are abundant, while available labeled instances are very limited. Thus, semi-supervised learning, which attempts to benefit from large amount of unlabeled data together with labeled data, has attracted much attention from researchers. In this paper, we propose a very fast and yet highly effective semi-supervised learning algorithm. We call our proposed algorithm Instance Weighted Naive Bayes (simply IWNB). IWNB first trains a naive Bayes model using the labeled instances only, and the trained model is used to estimate the class membership probabilities of the unlabeled instances. The estimated class membership probabilities are then used to label and weight the unlabeled instances. Finally, a naive Bayes model is trained again using both the originally labeled data and the (newly labeled and weighted) unlabeled data. Our experimental results based on a large number of UCI data sets show that IWNB often improves the classification accuracy of original naive Bayes when available labeled data are very limited.
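The procedure described in this abstract translates almost directly into code. Below is a minimal Python sketch of the four steps using scikit-learn's Gaussian naive Bayes (a choice of mine; the paper does not prescribe this estimator).

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def iwnb_fit(X_labeled, y_labeled, X_unlabeled):
    """Instance Weighted Naive Bayes, sketched from the abstract:
    1) fit NB on labeled data; 2) estimate class-membership probabilities of the
    unlabeled data; 3) pseudo-label each unlabeled instance and weight it by its
    maximum predicted probability; 4) refit on the combined, weighted data."""
    nb = GaussianNB().fit(X_labeled, y_labeled)
    proba = nb.predict_proba(X_unlabeled)
    pseudo_labels = nb.classes_[proba.argmax(axis=1)]
    pseudo_weights = proba.max(axis=1)
    X_all = np.vstack([X_labeled, X_unlabeled])
    y_all = np.concatenate([y_labeled, pseudo_labels])
    w_all = np.concatenate([np.ones(len(y_labeled)), pseudo_weights])
    return GaussianNB().fit(X_all, y_all, sample_weight=w_all)
```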

16.
We propose a probabilistic framework for classifier combination, which gives rigorous optimality conditions (minimum classification error) for four combination methods: majority vote, weighted majority vote, recall combiner and the naive Bayes combiner. The framework is based on two assumptions: class-conditional independence of the classifier outputs and an assumption about the individual accuracies. The four combiners are derived subsequently from one another, by progressively relaxing and then eliminating the second assumption. In parallel, the number of the trainable parameters increases from one combiner to the next. Simulation studies reveal that if the parameter estimates are accurate and the first assumption is satisfied, the order of preference of the combiners is: naive Bayes, recall, weighted majority and majority. By inducing label noise, we expose a caveat coming from the stability-plasticity dilemma. Experimental results with 73 benchmark data sets reveal that there is no definitive best combiner among the four candidates, giving a slight preference to naive Bayes. This combiner was better for problems with a large number of fairly balanced classes while weighted majority vote was better for problems with a small number of unbalanced classes.  
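Two of the four combiners discussed above are easy to sketch in Python: plain majority vote and weighted majority vote with the classic weights w_i = log(p_i / (1 - p_i)), where p_i is classifier i's estimated accuracy. This is an illustration under the assumption of integer class labels 0..n_classes-1; the recall and naive Bayes combiners, which need per-class statistics, are not shown.

```python
import numpy as np

def majority_vote(votes):
    """votes: (n_classifiers, n_samples) array of integer predicted labels."""
    return np.array([np.bincount(col).argmax() for col in votes.T])

def weighted_majority_vote(votes, accuracies, n_classes):
    """Weighted majority vote with w_i = log(p_i / (1 - p_i)); p_i is the
    estimated accuracy of classifier i. Sketch for integer labels 0..n_classes-1."""
    p = np.clip(np.asarray(accuracies, dtype=float), 1e-6, 1 - 1e-6)
    w = np.log(p / (1 - p))
    scores = np.zeros((votes.shape[1], n_classes))
    for i, preds in enumerate(votes):
        for s, c in enumerate(preds):
            scores[s, c] += w[i]
    return scores.argmax(axis=1)
```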
