Similar Documents
17 similar documents found (search time: 125 ms)
1.
An Approximate Learning Algorithm for Bayesian Network Classifiers   Cited: 1 (self: 1, others: 0)
Bayesian networks are widely applied in many fields, and as classifiers they are an effective and commonly used classification method; however, their high computational complexity restricts many applications of Bayesian network classifiers. Approximating the Bayesian network classification algorithm can effectively reduce the computation while still yielding satisfactory classification accuracy. This paper analyzes an approximation that turns a discriminative algorithm into a generative one, describes the approximation procedure, and applies it to Bayesian network classification. Exploiting the stability of this approximate algorithm, the Bagging-aCLL ensemble classification algorithm is then proposed, which further improves the approximation's classification accuracy. Finally, experiments confirm that the algorithm indeed performs well in classification accuracy.
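The Bagging step described above can be sketched as a bootstrap-and-vote ensemble. This is a minimal illustration, assuming a plain discrete naive Bayes as a stand-in base learner (the paper's base is the approximate discriminative classifier aCLL, which is not reproduced here):

```python
import random
from collections import Counter, defaultdict

class DiscreteNB:
    """Stand-in base learner: discrete naive Bayes with Laplace smoothing."""
    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.prior = Counter(y)
        self.counts = defaultdict(Counter)  # (class, feature index) -> value counts
        for xs, c in zip(X, y):
            for j, v in enumerate(xs):
                self.counts[(c, j)][v] += 1
        self.n = len(y)
        return self

    def predict_one(self, xs):
        def score(c):
            s = self.prior[c] / self.n
            for j, v in enumerate(xs):
                cnt = self.counts[(c, j)]
                s *= (cnt[v] + 1) / (sum(cnt.values()) + len(cnt) + 1)  # smoothed
            return s
        return max(self.classes, key=score)

def bagging_predict(X_train, y_train, X_test, n_estimators=15, seed=0):
    rng = random.Random(seed)
    n = len(X_train)
    models = []
    for _ in range(n_estimators):
        idx = [rng.randrange(n) for _ in range(n)]          # bootstrap sample
        models.append(DiscreteNB().fit([X_train[i] for i in idx],
                                       [y_train[i] for i in idx]))
    preds = []
    for xs in X_test:
        votes = Counter(m.predict_one(xs) for m in models)  # majority vote
        preds.append(votes.most_common(1)[0][0])
    return preds
```

Swapping `DiscreteNB` for the approximate BNC would recover the Bagging-aCLL scheme; the bootstrap-and-vote wrapper is unchanged.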

2.
This paper first analyzes the properties of the sample-confidence measure used by the noise-filtering ensemble algorithm and explains why this function is unsuitable for multi-class problems. A more targeted confidence measure is then designed, and based on it an enhanced noise-filtering parameter-ensemble algorithm is proposed. The resulting discriminative Bayesian network parameter-learning algorithm not only effectively suppresses the influence of noise but also avoids classifier overfitting, further extending ensemble-based discriminative Bayesian network classifiers to multi-class problems. Finally, experimental results and their statistical hypothesis tests confirm that classifiers produced by this algorithm significantly outperform those obtained by existing ensemble-based Bayesian network parameter-learning methods.

3.
To overcome the drawback that the K2 algorithm requires a suitable node ordering to be specified in advance when learning the structure of a Bayesian Network Classifier (BNC), the GA-K2 algorithm is proposed: an integer-encoded genetic algorithm based on selective ensembles is introduced into K2, so that the best node ordering is found and the network structure converges to the global optimum. A Bayesian network classifier is then built for classification, and experimental results show that GA-K2 outperforms K2 with an arbitrarily specified node ordering.
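The greedy step that K2 applies once an ordering is fixed (and that GA-K2's genetic search repeatedly evaluates) can be sketched as follows. The `score(node, parents)` argument is a placeholder for a real Bayesian scoring metric such as the K2/CH score, which is not reproduced here:

```python
def k2_parents(order, score, max_parents=2):
    """Given a node ordering, greedily add the best-scoring predecessor
    as a parent of each node while the score keeps improving (K2 step)."""
    parents = {v: [] for v in order}
    for pos, v in enumerate(order):
        old = score(v, parents[v])
        while len(parents[v]) < max_parents:
            cands = [u for u in order[:pos] if u not in parents[v]]
            if not cands:
                break
            best = max(cands, key=lambda u: score(v, parents[v] + [u]))
            new = score(v, parents[v] + [best])
            if new <= old:       # no improvement: stop adding parents
                break
            parents[v].append(best)
            old = new
    return parents
```

GA-K2's contribution is to search over `order` with a genetic algorithm instead of requiring the user to supply it.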

4.
王中锋  王志海 《计算机学报》2012,35(2):2364-2374
Bayesian network classifiers trained with a discriminative learning strategy usually achieve high accuracy, but on network structures containing redundant edges the performance of discriminative parameter-learning algorithms is limited. To further improve the classification accuracy of Bayesian network classifiers in practice, this paper quantitatively characterizes the relationship between the network structure and the true variable distribution of the data, and proposes a forest-structured Bayesian network classifier without redundant edges, together with a corresponding FAN learning algorithm (Forest-Augmented Naive Bayes), which uses partial derivatives of the conditional log-likelihood to optimize structure learning. Experimental results show that commonly used restricted Bayesian network classifiers typically contain redundant edges, which often degrade discriminative parameter learning; that the forest-structured classifier removes these redundant edges and is better suited to discriminative parameter training; and that the FAN algorithm based on conditional log-likelihood derivatives improves classification accuracy on most experimental data sets.
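The idea of pruning redundant attribute-attribute edges into a forest can be illustrated with a simpler criterion than the paper's: keep only edges whose class-conditional mutual information clears a threshold, joined by Kruskal's algorithm (FAN itself uses conditional log-likelihood derivatives; CMI here is an illustrative stand-in):

```python
import math
from collections import Counter

def cond_mutual_info(xi, xj, y):
    """I(Xi; Xj | C) estimated from counts."""
    n = len(y)
    pxyc = Counter(zip(xi, xj, y))
    pxc, pyc, pc = Counter(zip(xi, y)), Counter(zip(xj, y)), Counter(y)
    mi = 0.0
    for (a, b, c), nabc in pxyc.items():
        mi += (nabc / n) * math.log((nabc * pc[c]) / (pxc[(a, c)] * pyc[(b, c)]))
    return mi

def forest_edges(X_cols, y, threshold=0.05):
    """Maximum spanning forest over attributes, dropping weak (redundant) edges."""
    d = len(X_cols)
    cands = [(cond_mutual_info(X_cols[i], X_cols[j], y), i, j)
             for i in range(d) for j in range(i + 1, d)]
    parent = list(range(d))
    def find(u):                       # union-find for Kruskal
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    edges = []
    for w, i, j in sorted(cands, reverse=True):   # strongest edges first
        if w < threshold:
            break                      # weak edges dropped -> forest, not tree
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            edges.append((i, j))
    return edges
```

Unlike TAN, which always spans all attributes, edges below the threshold are simply never added, so the result is a forest.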

5.
A Bayesian Network Classifier Based on Association Rules   Cited: 1 (self: 0, others: 1)
The association-rule classifier (CBA) builds a classification algorithm from association rules, but it ignores the uncertainty inherent in classification. This paper proposes a Bayesian network classification algorithm based on association rules: association-rule mining extracts an initial candidate set of network edges, and a greedy algorithm then learns the network structure, producing a better topology than the classic TAN Bayesian network classifier. Experiments on 15 UCI data sets show that the algorithm achieves better classification performance than both TAN and CBA.

6.
Generative and discriminative methods are two different frameworks for solving classification problems, each with its own strengths. To exploit both, this paper proposes a linear generative-discriminative mixture classification model, together with a genetic-algorithm-based learning method for it. The method treats learning the mixture weights as an optimization problem: using the posterior probabilities that the two base classifiers assign to each training sample as data, a genetic algorithm finds the optimal mixture weights. Experimental results show that on most data sets the linear generative-discriminative mixture classifier is better than, or close to, the better of its two base classifiers in accuracy.
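The linear mixture itself is a convex combination of the two base classifiers' posteriors. A minimal sketch, with a grid search over the mixing weight standing in for the paper's genetic algorithm, and hypothetical posterior arrays in place of real trained generative/discriminative models:

```python
def mix(p_gen, p_disc, alpha):
    """Linear mixture of two posterior vectors: alpha*gen + (1-alpha)*disc."""
    return [alpha * g + (1 - alpha) * d for g, d in zip(p_gen, p_disc)]

def best_alpha(P_gen, P_disc, y, grid=None):
    """Pick the mixing weight maximizing training accuracy (grid search
    here; the paper uses a genetic algorithm for this optimization)."""
    grid = grid or [i / 100 for i in range(101)]
    def acc(alpha):
        correct = 0
        for pg, pd, c in zip(P_gen, P_disc, y):
            scores = mix(pg, pd, alpha)
            correct += (scores.index(max(scores)) == c)
        return correct
    return max(grid, key=acc)
```

With a one-dimensional weight the grid is exhaustive; the GA becomes useful when each base classifier gets its own weight vector.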

7.
Naive Bayes classifiers are hard to supply with large labelled training sets, and traditional Bayesian classification must relearn all previously seen samples whenever new training samples arrive, which wastes considerable time. This paper therefore introduces incremental learning and, on that basis, proposes an attribute-weighted naive Bayes algorithm: attribute weighting improves the naive Bayes classifier's performance, and the weights are learned directly from the training data. Experiments on the UCI data sets recommended by Weka show the algorithm to be feasible and effective.
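Because naive Bayes is count-based, the incremental part is straightforward: keeping the raw counts lets a new labelled sample update the model in O(d) without revisiting old data. A minimal sketch with fixed, given attribute weights (the paper learns the weights from data, which is omitted here):

```python
import math
from collections import Counter, defaultdict

class IncrementalWNB:
    """Attribute-weighted naive Bayes with incremental count updates."""
    def __init__(self, n_features, weights=None):
        self.w = weights or [1.0] * n_features   # per-attribute weights
        self.class_counts = Counter()
        self.feat_counts = defaultdict(Counter)  # (class, j) -> value counts
        self.n = 0

    def update(self, xs, c):
        """Incremental learning step: just bump the counts."""
        self.class_counts[c] += 1
        self.n += 1
        for j, v in enumerate(xs):
            self.feat_counts[(c, j)][v] += 1

    def predict(self, xs):
        def log_post(c):
            lp = math.log(self.class_counts[c] / self.n)
            for j, v in enumerate(xs):
                cnt = self.feat_counts[(c, j)]
                p = (cnt[v] + 1) / (self.class_counts[c] + 2)  # Laplace
                lp += self.w[j] * math.log(p)   # weight scales the attribute
            return lp
        return max(self.class_counts, key=log_post)
```

Setting a weight to 0 removes that attribute's influence entirely; weights between 0 and 1 soften the naive independence assumption.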

8.
王影  王浩  俞奎  姚宏亮 《计算机科学》2012,39(1):185-189
Existing node-ordering-based Bayesian network classifiers ignore the information between the already selected variables in the ordering and the class label, which makes further accuracy gains difficult. To address this, a simple and efficient learning algorithm is proposed: the L1-regularized Bayesian network classifier (L1-BNC). By adjusting the constraint value in the Lasso method, it fully exploits the regression residuals and combines them with the information between selected variables and the class label, forming a good ordered variable topology (the L1 regularization path); based on this ordering, the K2 algorithm generates a high-quality Bayesian network classifier. Experiments show that L1-BNC outperforms existing Bayesian network classifiers in accuracy; compared with the SVM, KNN and J48 classification algorithms, L1-BNC is superior on most data sets.
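The residual-driven ordering can be illustrated with a greedy forward pass: repeatedly pick the feature with the strongest (absolute) covariance with the current residual of the class label, then regress it out. This mimics how an L1 regularization path activates variables one by one; the paper couples the actual Lasso path with K2, and this simplified stand-in only illustrates the ordering step:

```python
import numpy as np

def residual_ordering(X, y):
    """Order features by iteratively regressing each chosen one out of the
    class-label residual (a greedy stand-in for the L1 path's activation order)."""
    X = np.asarray(X, dtype=float)
    r = np.asarray(y, dtype=float)
    r = r - r.mean()
    remaining = list(range(X.shape[1]))
    order = []
    while remaining:
        cols = {j: X[:, j] - X[:, j].mean() for j in remaining}
        strengths = [abs(cols[j] @ r) for j in remaining]   # covariance with residual
        j = remaining.pop(int(np.argmax(strengths)))
        order.append(j)
        col = cols[j]
        denom = col @ col
        if denom > 0:
            r = r - ((col @ r) / denom) * col               # regress chosen feature out
    return order
```

The resulting ordering would then be handed to K2 as its required node sequence.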

9.
Existing data-stream classification models all require large numbers of labelled samples for training, but in practice labelling that many samples is relatively costly. To address this, SMEClass, a semi-supervised hybrid ensemble classification algorithm for data streams, is proposed. It organizes its base classifiers in a hybrid mode: K decision-tree classifiers label unlabelled data by majority vote, raising the confidence of the pseudo-labels and the accuracy of the ensemble, while a Bayesian classifier is added to effectively reduce the noisy data produced during labelling. Experimental results show that, compared with recent semi-supervised ensemble classification algorithms, SMEClass improves accuracy and has clear advantages in running time and noise resistance.
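The self-labelling step can be sketched as a committee vote with an agreement threshold: a pseudo-label is accepted only when enough of the K base predictions agree. This is a minimal stand-in for the paper's K decision trees plus Bayesian noise filter, operating on hypothetical per-sample vote lists:

```python
from collections import Counter

def self_label(votes_per_sample, min_agreement=0.8):
    """Accept a pseudo-label for each unlabelled sample only when the
    majority vote clears the agreement threshold (confidence filter)."""
    labelled = []
    for i, votes in enumerate(votes_per_sample):
        label, n = Counter(votes).most_common(1)[0]
        if n / len(votes) >= min_agreement:
            labelled.append((i, label))   # confident pseudo-label
    return labelled
```

Samples that fail the threshold stay unlabelled rather than risk injecting label noise into the ensemble.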

10.
邓丽  金立左  费敏锐 《计算机工程》2011,37(22):281-283
The small-sample problem limits the learning ability of Bayesian relevance-feedback algorithms. This paper therefore proposes a semi-supervised Bayesian relevance-feedback algorithm for video retrieval, in which one classifier estimates the probability that each shot in the video library is a target shot, while a second, semi-supervised classifier judges whether shots not labelled by the user are relevant to the target, thereby enlarging the Bayesian learner's training set and improving its classification ability. Experimental results show that the algorithm improves the retrieval performance of the Bayesian approach.

11.
Ensemble classification combines several weak classifiers according to some rule and can effectively improve classification performance. In the combination, the weak classifiers usually differ in how much they should contribute to the final result. The extreme learning machine (ELM) is a recently proposed algorithm for training single-hidden-layer feedforward neural networks. Taking ELMs as base classifiers, this paper proposes a weighted ELM ensemble method based on differential evolution, which optimizes the weight of each base classifier in the ensemble. Experimental results show that, compared with simple-voting and AdaBoost-based ensembles, the method achieves higher classification accuracy and better generalization.
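The weight optimization can be sketched with a standard DE/rand/1/bin loop over weight vectors, scored by weighted-vote accuracy on validation predictions. The prediction matrix here is hypothetical, standing in for trained ELMs; the DE parameters are generic defaults, not the paper's settings:

```python
import random

def weighted_vote(weights, preds_per_clf, classes=(0, 1)):
    """Combine per-classifier predictions by weighted voting."""
    out = []
    for s in range(len(preds_per_clf[0])):
        score = {c: 0.0 for c in classes}
        for w, preds in zip(weights, preds_per_clf):
            score[preds[s]] += w
        out.append(max(classes, key=lambda c: score[c]))
    return out

def de_weights(preds_per_clf, y, pop=12, gens=40, F=0.6, CR=0.9, seed=1):
    """Differential evolution over base-classifier weights in [0, 1]."""
    rng = random.Random(seed)
    k = len(preds_per_clf)
    def fitness(w):
        return sum(p == t for p, t in zip(weighted_vote(w, preds_per_clf), y))
    P = [[rng.random() for _ in range(k)] for _ in range(pop)]
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.sample([j for j in range(pop) if j != i], 3)
            trial = [P[i][d] if rng.random() > CR else
                     min(1.0, max(0.0, P[a][d] + F * (P[b][d] - P[c][d])))
                     for d in range(k)]                      # mutate + crossover
            if fitness(trial) >= fitness(P[i]):              # greedy selection
                P[i] = trial
    return max(P, key=fitness)
```

Against simple voting, the learned weights can let one reliable base classifier outvote several weak ones.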

12.
To solve the difficulty of tuning the parameters of the traditional kernel extreme learning machine (KELM) and to improve classification accuracy, an improved Bayesian-optimization KELM algorithm is proposed. A salp swarm is used to design the lower-confidence-bound strategy of the acquisition function in the Bayesian optimization framework, improving the algorithm's local search and optimization ability; this improved Bayesian optimization then tunes the KELM parameters, and the optimal parameters are used to build the KELM classifier. Simulations on real UCI data sets show that, compared with standard Bayesian optimization, the proposed algorithm raises KELM classification accuracy, and compared with other optimization algorithms it is feasible and effective.
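The lower-confidence-bound acquisition at the core of this framework is LCB(x) = mu(x) - kappa * sigma(x), minimized to pick the next evaluation point. A minimal sketch: `mu` and `sigma` are arbitrary surrogate outputs here, not a fitted Gaussian process, and the salp-swarm tuning of the strategy is not reproduced:

```python
def next_point(candidates, mu, sigma, kappa=2.0):
    """Pick the candidate minimizing the lower confidence bound
    mu - kappa*sigma; kappa trades exploitation against exploration."""
    lcb = [m - kappa * s for m, s in zip(mu, sigma)]
    return candidates[lcb.index(min(lcb))]
```

With kappa = 0 the rule is pure exploitation (lowest predicted loss); larger kappa favors uncertain regions.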

13.
Identifying discriminative features can effectively improve the performance of aerial scene classification. Deep convolutional neural networks (DCNNs) have been widely used in aerial scene classification for their ability to learn discriminative features. A DCNN feature can be made more discriminative by optimizing the training loss function and by using transfer learning methods. To enhance the discriminative power of a DCNN feature, the improved loss functions of pretrained models are combined with a softmax loss function and a centre loss function. To further improve performance, in this article we propose hybrid DCNN features for aerial scene classification. First, we use DCNN models with joint loss functions and transfer learning from pretrained deep DCNN models. Second, the dense DCNN features are extracted, and the discriminative hybrid features are created by linear concatenation. Finally, an ensemble extreme learning machine (EELM) classifier is adopted for classification owing to its general superiority and low computational cost. Experimental results on three public benchmark data sets demonstrate that the hybrid features obtained with the proposed approach and classified by the EELM classifier achieve remarkable performance.
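An extreme learning machine, the building block of the EELM classifier above, is cheap because only the output layer is trained: input weights are random and the output weights are a closed-form least-squares solve. A minimal single-ELM sketch on plain feature vectors (the article ensembles several of these over hybrid DCNN features):

```python
import numpy as np

def elm_train(X, Y, n_hidden=50, seed=0):
    """Random hidden layer + least-squares output weights (pseudo-inverse)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden activations
    beta = np.linalg.pinv(H) @ Y                  # closed-form readout
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
```

`Y` is one-hot class targets; the only real hyperparameter is the hidden-layer width, which is why ELMs are attractive as cheap ensemble members.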

14.
Boosted Bayesian network classifiers   Cited: 2 (self: 0, others: 2)
The use of Bayesian networks for classification problems has received a significant amount of recent attention. Although computationally efficient, the standard maximum likelihood learning method tends to be suboptimal due to the mismatch between its optimization criteria (data likelihood) and the actual goal of classification (label prediction accuracy). Recent approaches to optimizing classification performance during parameter or structure learning show promise, but lack the favorable computational properties of maximum likelihood learning. In this paper we present boosted Bayesian network classifiers, a framework to combine discriminative data-weighting with generative training of intermediate models. We show that boosted Bayesian network classifiers encompass the basic generative models in isolation, but improve their classification performance when the model structure is suboptimal. We also demonstrate that structure learning is beneficial in the construction of boosted Bayesian network classifiers. On a large suite of benchmark data sets, this approach outperforms generative graphical models such as naive Bayes and TAN in classification accuracy. Boosted Bayesian network classifiers have comparable or better performance in comparison to other discriminatively trained graphical models including ELR and BNC. Furthermore, boosted Bayesian networks require significantly less training time than the ELR and BNC algorithms.
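The discriminative data-weighting at the heart of this framework follows the standard discrete AdaBoost update: after each generatively trained component, misclassified samples are up-weighted so the next component focuses on them. A generic sketch of that reweighting step (not the paper's exact formulation):

```python
import math

def reweight(weights, correct):
    """One AdaBoost step: compute the component weight alpha from the
    weighted error, then up-weight misclassified samples and renormalize."""
    err = sum(w for w, ok in zip(weights, correct) if not ok) / sum(weights)
    alpha = 0.5 * math.log((1 - err) / err)          # component weight
    new = [w * math.exp(-alpha if ok else alpha)     # wrong -> heavier
           for w, ok in zip(weights, correct)]
    z = sum(new)
    return [w / z for w in new], alpha
```

After the update, the total weight on misclassified samples equals the total weight on correct ones (0.5 each), which is what forces the next generative component to attend to the hard cases.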

15.
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
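The multiclass margin such learners maximize is the gap between the true class's log-posterior and the best competing one: margin(x, c) = log p(c|x) - max over c' != c of log p(c'|x); positive means correctly classified, and larger means more confident. A minimal sketch of the quantity itself (the CG optimization is not reproduced):

```python
import math

def log_margin(log_posteriors, true_class):
    """Log-margin of a single sample: true-class log-posterior minus the
    best competing class's log-posterior."""
    others = [lp for c, lp in enumerate(log_posteriors) if c != true_class]
    return log_posteriors[true_class] - max(others)
```

Maximum margin training pushes this quantity above a target for every training sample, rather than just making it positive as conditional likelihood does on average.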

16.
In character string recognition integrating segmentation and classification, high classification accuracy and resistance to noncharacters are desired of the underlying classifier. In a previous evaluation study, the modified quadratic discriminant function (MQDF) proposed by Kimura et al. was shown to be superior in noncharacter resistance but inferior in classification accuracy to neural networks. This paper proposes a discriminative learning algorithm to optimize the parameters of MQDF with the aim of improving classification accuracy while preserving the superior noncharacter resistance. We refer to the resulting classifier as the discriminative learning QDF (DLQDF). The parameters of DLQDF adhere to the structure of MQDF under the Gaussian density assumption and are optimized under the minimum classification error (MCE) criterion. The promise of DLQDF is demonstrated in handwritten digit recognition and numeral string recognition, where its performance is comparable or superior to that of neural classifiers. The results are also competitive with the best reported in the literature.

17.
Research shows that extreme learning machines and discriminative dictionary-learning algorithms are highly efficient and accurate in image classification. Each, however, has its drawbacks: the extreme learning machine is not very robust to noise, while the discriminative dictionary-learning algorithm is time-consuming during classification. To unify their complementary strengths and improve classification performance, this paper proposes a discriminative analysis dictionary-learning model that incorporates an extreme learning machine. The model uses an iterative optimization algorithm to learn the optimal discriminative analysis dictionary and the extreme learning machine classifier. The proposed algorithm is validated by classification on face data sets, and the experimental results show that it performs better in classification than currently popular dictionary-learning algorithms and extreme learning machines.


Copyright©北京勤云科技发展有限公司    京ICP备09084417号-23
