Similar Documents
A total of 18 similar documents were found.
1.
Ordinal classification is a type of classification problem that arises widely in real life. The ordinal decision tree algorithm based on rank entropy is one of the important methods for handling ordinal classification; it uses rank mutual information as the heuristic for constructing the ordinal decision tree. Building on this work, the ordinal decision tree is extended by introducing fuzzy rank entropy and using fuzzy rank mutual information as the heuristic to construct a fuzzy ordinal decision tree. Each of the two algorithms has its own strengths and weaknesses in practical applications; they are compared in detail from four aspects, and their similarities, differences, advantages, and disadvantages are pointed out.
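A minimal sketch of the rank (ordinal) mutual information heuristic mentioned above, computed from dominance sets. The formula follows the rank-entropy formulation commonly used in this line of work, but it is an assumption about the exact definition; the function names are illustrative.

```python
import numpy as np

def dominating_counts(X):
    """For each sample, count how many samples dominate it (>= on every column)."""
    return np.array([np.sum(np.all(X >= X[i], axis=1)) for i in range(X.shape[0])])

def rank_mutual_information(X, y):
    """Assumed form: RMI(A;D) = -1/n * sum_i log(|[x_i]_A| * |[x_i]_D| / (n * |[x_i]_{A,D}|)),
    where [x_i] are dominance sets of the attributes A and the ordinal label D."""
    n = len(y)
    ca = dominating_counts(X)
    cd = dominating_counts(y.reshape(-1, 1))
    cad = np.array([np.sum(np.all(X >= X[i], axis=1) & (y >= y[i])) for i in range(n)])
    return -np.mean(np.log(ca * cd / (n * cad)))

X = np.array([[1.0], [2.0], [2.5], [3.0]])   # one monotone feature
y = np.array([1, 1, 2, 3])                   # ordinal labels
print(rank_mutual_information(X, y))
```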

2.
Traditional decision trees find decision boundaries by recursively partitioning the feature space, producing a "hard" partition of that space. For large-scale data and complex patterns, such crisp decision boundaries reduce the tree's generalization ability. To let the decision tree algorithm acquire imprecise knowledge automatically, fuzzy theory is introduced into the decision tree, and neural networks are introduced as leaf nodes during tree construction, giving an improved fuzzy decision tree algorithm based on neural networks. In this neural-network fuzzy decision tree, classifier learning has two stages: the first stage uses an uncertainty-reduction heuristic to partition the data, stopping the growth of the fuzzy decision tree once a node's splitting ability falls below a truth-degree threshold ε; the second stage uses neural networks at the leaf nodes to perform classification with good generalization. Experimental results show that, compared with traditional classification learning algorithms, the algorithm achieves higher accuracy and can determine the tree size adaptively for classification problems involving large data and complex patterns.
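The two-stage idea (partition first, then place a small classifier with better generalization at each impure leaf) can be sketched as below. This is an illustrative simplification under stated assumptions, not the paper's exact algorithm: a crisp shallow tree stands in for the fuzzy partition stage, epsilon plays the role of the truth-degree threshold, and sklearn's MLPClassifier stands in for the neural-network leaves.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

class NNLeafTree:
    def __init__(self, epsilon=0.9, max_depth=2):
        # shallow crisp tree as a stand-in for the fuzzy partition stage
        self.router = DecisionTreeClassifier(max_depth=max_depth)
        self.epsilon = epsilon
        self.leaves = {}

    def fit(self, X, y):
        self.router.fit(X, y)
        leaf_ids = self.router.apply(X)
        for leaf in np.unique(leaf_ids):
            mask = leaf_ids == leaf
            purity = np.bincount(y[mask]).max() / mask.sum()
            # impure leaves (purity below the threshold) get a neural-network classifier
            if purity < self.epsilon and len(np.unique(y[mask])) > 1:
                clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
                self.leaves[leaf] = clf.fit(X[mask], y[mask])
        return self

    def predict(self, X):
        leaf_ids = self.router.apply(X)
        out = self.router.predict(X)
        for leaf, clf in self.leaves.items():
            mask = leaf_ids == leaf
            if mask.any():
                out[mask] = clf.predict(X[mask])
        return out

X, y = make_classification(n_samples=300, n_informative=5, random_state=0)
print(NNLeafTree(epsilon=0.9).fit(X, y).predict(X[:5]))
```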

3.
Existing decision tree algorithms for interval-valued attributes are designed for the unordered case and do not consider the order relation between the condition attributes and the decision attribute. To address their shortcomings on ordinal classification, a monotonic decision tree algorithm for interval-valued attributes is proposed for monotonic classification problems with interval-valued attributes. The algorithm uses the possibility degree to determine the order relation of interval-valued attributes, measures their degree of monotonic consistency with rank mutual information, and selects the expanded attribute by maximizing rank mutual information. In addition, unbalanced cut points are applied during tree construction to reduce the number of rank mutual information computations and improve efficiency. Experiments show that the algorithm improves both efficiency and test accuracy.
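For illustration, the possibility degree that one interval is at least as large as another can be computed as follows; this uses a commonly cited definition and is an assumption about the exact formula adopted in the paper.

```python
def possibility_degree(a, b):
    """Possibility degree P(a >= b) for intervals a = (a_lo, a_hi), b = (b_lo, b_hi).
    Common definition (assumed here):
        P(a >= b) = min(max((a_hi - b_lo) / (len(a) + len(b)), 0), 1),
    which gives 0 or 1 for disjoint intervals and 0.5 for identical ones."""
    a_lo, a_hi = a
    b_lo, b_hi = b
    width = (a_hi - a_lo) + (b_hi - b_lo)
    if width == 0:                      # both intervals degenerate to point values
        return 1.0 if a_lo >= b_lo else 0.0
    return min(max((a_hi - b_lo) / width, 0.0), 1.0)

# Intervals can then be ranked: a is placed above b when P(a >= b) >= 0.5.
print(possibility_degree((2.0, 4.0), (1.0, 3.0)))   # 0.75 -> a ranked above b
```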

4.
To improve classification on imbalanced data, an improved fuzzy decision tree algorithm is proposed that combines hesitant fuzzy set theory with the decision tree algorithm. The imbalanced data are first oversampled with SMOTE; K-means clustering is used to obtain the cluster centers of each attribute, and two different membership functions are used to fuzzify the data set. On this basis, the hesitant fuzzy information gain of each attribute is derived from the membership functions and the information energy of the hesitant fuzzy set, and the maximum is selected as the attribute splitting criterion in place of the fuzzy information gain of Fuzzy ID3, yielding a hesitant fuzzy decision tree model for imbalanced data classification. Experimental results show that, in terms of AUC, the hesitant-fuzzy-decision-tree classifier improves on traditional classifiers such as C4.5, KNN, and random forests by 12.6% on average.
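A minimal sketch of the preprocessing pipeline described above: SMOTE oversampling followed by per-attribute K-means centers used for fuzzification. The triangular membership function is an illustrative assumption; the paper's two membership functions and the hesitant fuzzy information gain are not reproduced here.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification

def fuzzify_attribute(values, n_centers=3):
    """Cluster one attribute with K-means and return membership degrees to each center."""
    centers = np.sort(KMeans(n_clusters=n_centers, n_init=10)
                      .fit(values.reshape(-1, 1)).cluster_centers_.ravel())
    spread = np.ptp(values) / n_centers + 1e-12
    # triangular memberships centered at the K-means centers (illustrative choice)
    return np.clip(1.0 - np.abs(values[:, None] - centers[None, :]) / spread, 0.0, 1.0)

X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)      # oversample the minority class
memberships = [fuzzify_attribute(X_bal[:, j]) for j in range(X_bal.shape[1])]
print(memberships[0][:3])                                    # membership degrees of attribute 0
```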

5.
Application of a parallel decision tree algorithm based on granular computing
Because traditional decision tree classification algorithms cannot effectively handle the mining of massive data, a parallelized granular-computing ID3 decision tree classification method is studied in combination with the MapReduce parallel processing model. Binary information granule vectors of the attributes are built from the binary representation of information granules, giving a binary information granule incidence matrix representation of the data set; based on this matrix, a method for computing attribute information gain is proposed, and a MapReduce-based parallel granular-computing decision tree classification algorithm is designed. Tests on standard data sets and a real lightning data set from the meteorological domain verify the effectiveness of the algorithm.
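As an illustration of the counting idea, ID3 information gain can be computed from boolean "granule" vectors, with joint counts obtained by element-wise ANDs; this is a sketch of the principle rather than the paper's incidence-matrix formulation.

```python
import numpy as np

def granule_vectors(column):
    """One boolean (binary granule) vector per distinct value of the column."""
    return [column == v for v in np.unique(column)]

def entropy(probs):
    probs = probs[probs > 0]
    return -np.sum(probs * np.log2(probs))

def information_gain(attr_col, class_col):
    n = len(class_col)
    a_gran, c_gran = granule_vectors(attr_col), granule_vectors(class_col)
    h_class = entropy(np.array([g.sum() / n for g in c_gran]))
    h_cond = 0.0
    for ga in a_gran:
        na = ga.sum()
        # joint counts come from element-wise AND of granule vectors
        probs = np.array([np.logical_and(ga, gc).sum() / na for gc in c_gran])
        h_cond += (na / n) * entropy(probs)
    return h_class - h_cond

attr = np.array(["sunny", "sunny", "rain", "rain", "overcast"])
label = np.array(["no", "no", "yes", "yes", "yes"])
print(information_gain(attr, label))
```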

6.
A decision tree algorithm based on association rules
汪海锐, 李伟. 《计算机工程》, 2011, 37(9): 104-106, 109
By combining association rules with the decision tree algorithm, a decision tree algorithm based on association rules is formed. The algorithm handles heterogeneous data structures of the same transactions at different periods and yields an extensible multi-branch classification decision tree, giving the improved decision tree algorithm good extensibility. It solves the problem that traditional classification algorithms cannot continue the classification process when the dimensionality of the data set changes.

7.
Attribute reduction for ordered decision information systems based on mutual information
Dominance-based rough set theory is a meaningful generalization of rough set theory, and knowledge reduction of decision information systems is one of its core topics. By introducing the concepts of conditional entropy and mutual information into consistent ordered decision information systems, an attribute reduction algorithm for consistent ordered decision information systems based on conditional entropy and mutual information is given, and its effectiveness is verified on a student-evaluation decision information system, thereby extending attribute reduction for consistent ordered decision information systems. For inconsistent ordered decision information systems, the concepts of restricted conditional entropy and restricted mutual information are introduced, and an attribute reduction algorithm based on restricted mutual information is given, providing a feasible solution for applying attribute reduction to inconsistent ordered decision information systems.
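A minimal sketch of a greedy reduction loop driven by mutual information. For simplicity it uses classical Shannon mutual information (sklearn's mutual_info_score) rather than the dominance-based conditional entropy defined for ordered decision systems; the stopping tolerance and the toy data are illustrative.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def joint_code(X, cols):
    """Encode the selected discrete columns as a single discrete variable."""
    _, codes = np.unique(X[:, cols], axis=0, return_inverse=True)
    return codes.ravel()

def greedy_reduct(X, y, tol=1e-9):
    remaining, selected, current = list(range(X.shape[1])), [], 0.0
    while remaining:
        # gain of adding each remaining attribute to the current reduct candidate
        gains = [(mutual_info_score(joint_code(X, selected + [j]), y) - current, j)
                 for j in remaining]
        best_gain, best_j = max(gains)
        if best_gain <= tol:
            break
        selected.append(best_j)
        remaining.remove(best_j)
        current += best_gain
    return selected

X = np.array([[0, 1, 0], [1, 1, 0], [1, 0, 1], [0, 0, 1]])
y = np.array([0, 1, 1, 0])
print(greedy_reduct(X, y))   # e.g. [0]: the first attribute already determines y
```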

8.
Existing multivariate decision trees outperform univariate decision trees in classification accuracy and tree structural complexity, but their training time is higher, making them unsuitable for classification tasks that require fast response. To address this high training time, a multivariate decision tree based on information entropy and geometric contour similarity (IEMDT) is proposed. Using the one-to-one mapping property of the geometric contour similarity function, the algorithm projects sample points from n-dimensional space onto a one-dimensional axis, forming an ordered set of projected points; class boundaries and information gain are then used to compute an optimal set of split points that partitions the ordered projection set into several subsets, each of which is projected and split recursively, finally producing the decision tree. Experiments on eight data sets show that IEMDT has lower training time and higher classification accuracy.

9.
The decision tree classification algorithm is applied to data mining of life-insurance customer resources. After several kinds of preprocessing of the raw data, a training sample set is obtained; the ID3 algorithm is used to quantitatively compute the mutual information of each attribute over the training samples, quickly building a decision tree for evaluating customer quality and achieving correct classification of life-insurance customer groups.

10.
Decision trees are an important method in inductive learning and data mining, used mainly for classification and prediction. This paper introduces the concept of a generalized decision tree, unifying classification rule sets with the decision tree structure. It also proposes a novel method for constructing decision trees with a DNA-encoded genetic algorithm: the data set is first classified with the C4.5 algorithm to obtain an initial rule set, which is then optimized with the proposed algorithm and used to build the decision tree. Experiments show that the method effectively avoids the drawbacks of traditional decision tree construction and exhibits good parallelism.

11.
Knowledge inference systems are built to identify hidden and logical patterns in huge amounts of data. Decision trees play a vital role in knowledge discovery, but crisp decision tree algorithms suffer from sharp decision boundaries, which may not be appropriate for all knowledge inference systems. A fuzzy decision tree algorithm overcomes this drawback: fuzzy decision trees are obtained by fuzzifying the decision boundaries without disturbing the attribute values. Data reduction also plays a crucial role in many classification problems. This article presents an approach that combines principal component analysis with a fuzzy SLIQ decision tree algorithm based on a modified Gini index: PCA is used for dimensionality reduction, and the modified-Gini-index fuzzy SLIQ decision tree algorithm is used to construct decision rules. Finally, the method is validated on the PID data set in a simulation experiment in MATLAB.
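A minimal sketch of the two ingredients named in the abstract, PCA for dimensionality reduction and a Gini-index split criterion. The crisp Gini index is used here, not the paper's modified or fuzzified version, and synthetic data stands in for the PID data set.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.datasets import make_classification

def gini_index(labels):
    """Crisp Gini impurity of a label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(feature, labels):
    """Best threshold on one feature by weighted Gini impurity of the two children."""
    order = np.argsort(feature)
    f, y = feature[order], labels[order]
    best = (np.inf, None)
    for i in range(1, len(f)):
        if f[i] == f[i - 1]:
            continue
        score = (i * gini_index(y[:i]) + (len(y) - i) * gini_index(y[i:])) / len(y)
        best = min(best, (score, (f[i] + f[i - 1]) / 2))
    return best

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_red = PCA(n_components=5).fit_transform(X)          # dimensionality reduction
print([best_split(X_red[:, j], y) for j in range(X_red.shape[1])])
```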

12.
Regression problems try to estimate a continuous variable from a number of characteristics or predictors. Several proposals have been made for regression models based on fuzzy rules; however, all of them use rule models that do not take into account the irrelevance of input variables with respect to the variable to be approximated. Regression shares with ordinal classification the existence of an explicit order relation between the values of the variable to be predicted. In a recent paper, the authors proposed an ordinal classification algorithm that takes the detection of irrelevant input variables into account. This algorithm extracts a set of fuzzy rules from an example set, using a sequential covering strategy together with a genetic algorithm as the basic model. In this paper, a regression algorithm based on this ordinal classification algorithm is presented. The proposed model can be interpreted as a multi-classifier, multilevel system that learns at each stage using the knowledge gained in previous stages. Owing to the similarities between regression and ordinal problems, as well as the use of a set of ordinal algorithms, an error interval can be returned together with the regression output value. Experimental results show the good behavior of the proposal as well as of the returned error interval.

13.
Parallelization of a decision tree algorithm based on MapReduce
陆秋, 程小辉. 《计算机应用》, 2012, 32(9): 2463-2465
To address the inability of traditional decision tree algorithms to handle massive data mining and the multi-value bias of the ID3 algorithm, a parallel decision tree classification algorithm based on the MapReduce framework is designed and implemented. The algorithm uses attribute similarity as the criterion for selecting test attributes, avoiding ID3's multi-value bias, and uses the MapReduce model to handle massive data mining. Experiments on a Hadoop cluster built from ordinary PCs show that the MapReduce-based decision tree algorithm can handle large-scale classification problems, has good scalability, and achieves near-linear speedup while maintaining classification accuracy.
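The MapReduce counting step that any parallel decision tree needs can be sketched as below: mappers emit ((attribute, value, class), 1) pairs for their data shard and reducers sum them, after which a split criterion (information gain, gain ratio, or the paper's attribute similarity) can be evaluated from the aggregated counts. Shown here as a local simulation; on Hadoop the same map and reduce functions would run distributed.

```python
from collections import defaultdict
from itertools import chain

def map_phase(shard):
    """shard: list of (feature_dict, label) records held by one mapper."""
    for features, label in shard:
        for attr, value in features.items():
            yield (attr, value, label), 1

def reduce_phase(pairs):
    """Sum the counts for each (attribute, value, class) key."""
    counts = defaultdict(int)
    for key, c in pairs:
        counts[key] += c
    return counts

shards = [
    [({"outlook": "sunny", "windy": "yes"}, "no"),
     ({"outlook": "rain", "windy": "no"}, "yes")],
    [({"outlook": "sunny", "windy": "no"}, "yes")],
]
counts = reduce_phase(chain.from_iterable(map_phase(s) for s in shards))
print(counts[("outlook", "sunny", "no")])   # -> 1
```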

14.
A text classification method based on fuzzy soft set theory
To improve text classification accuracy, a text classification method based on fuzzy soft set theory is proposed. The method represents the text training set in fuzzy soft set tabular form and finds the class of a text to be classified through reduction and the construction of a soft set comparison table. For the drop in classification accuracy caused by similar features during text feature extraction, a feature selection algorithm based on regularized mutual information is given, which effectively solves this problem. Compared with traditional KNN and SVM classifiers, the fuzzy soft set method improves both the precision and the accuracy of text classification.
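A minimal sketch of mutual-information-based term selection for text, using sklearn's normalized_mutual_info_score as a stand-in for the paper's regularized mutual information; the toy documents and the top-k cutoff are illustrative.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import normalized_mutual_info_score

docs = ["cheap loans apply now", "meeting agenda attached",
        "win cheap prizes now", "project meeting tomorrow"]
labels = np.array([1, 0, 1, 0])                      # 1 = spam, 0 = ham

vec = CountVectorizer(binary=True)
X = vec.fit_transform(docs).toarray()
# score each binary term-occurrence feature against the class labels
scores = [normalized_mutual_info_score(labels, X[:, j]) for j in range(X.shape[1])]
top_k = np.argsort(scores)[::-1][:5]
print([vec.get_feature_names_out()[j] for j in top_k])
```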

15.
Induction of multiple fuzzy decision trees based on rough set technique
The integration of fuzzy sets and rough sets can lead to a hybrid soft-computing technique which has been applied successfully to many fields such as machine learning, pattern recognition and image processing. The key to this soft-computing technique is how to set up and make use of fuzzy attribute reducts in fuzzy rough set theory. Given a fuzzy information system, we may find many fuzzy attribute reducts, and each of them can make a different contribution to decision-making. If only one fuzzy attribute reduct, perhaps the most important one, is selected to induce decision rules, useful information hidden in the other reducts will unavoidably be lost. To make full use of the information provided by every individual fuzzy attribute reduct in a fuzzy information system, this paper presents a novel induction of multiple fuzzy decision trees based on rough set technique. The induction consists of three stages. First, several fuzzy attribute reducts are found by a similarity-based approach; then a fuzzy decision tree is generated for each fuzzy attribute reduct with the fuzzy ID3 algorithm. The fuzzy integral is finally used as a fusion tool that combines the outputs of the multiple fuzzy decision trees into the final decision result. An illustration is given to show the proposed fusion scheme. A numerical experiment on real data indicates that the proposed multiple-tree induction is superior to single-tree induction based on an individual reduct or on the entire feature set for learning problems with many attributes.
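A minimal sketch of fusing several trees' class confidences with a Sugeno fuzzy integral. The fuzzy measure is taken to be additive over per-tree importance weights, which is an assumption made purely for illustration; the paper's choice of fuzzy measure and integral variant is not reproduced here.

```python
import numpy as np

def sugeno_integral(h, weights):
    """h: per-tree confidences for one class; weights: per-tree importances (sum to 1)."""
    order = np.argsort(h)[::-1]                 # sort confidences in descending order
    h_sorted = h[order]
    g = np.cumsum(weights[order])               # additive measure of the top-i trees
    return np.max(np.minimum(h_sorted, g))

def fuse(class_confidences, weights):
    """class_confidences: (n_trees, n_classes); returns the winning class index."""
    scores = [sugeno_integral(class_confidences[:, c], weights)
              for c in range(class_confidences.shape[1])]
    return int(np.argmax(scores))

conf = np.array([[0.7, 0.3],     # tree built from reduct 1
                 [0.4, 0.6],     # tree built from reduct 2
                 [0.8, 0.2]])    # tree built from reduct 3
print(fuse(conf, np.array([0.5, 0.2, 0.3])))   # -> 0
```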

16.
Ordinal classification plays an important role in various decision making tasks. However, little attention is paid to this type of learning task compared with general classification learning. Shannon information entropy and the derived measure of mutual information play a fundamental role in a number of learning algorithms, including feature evaluation, selection and decision tree construction. These measures are not applicable to ordinal classification, for they cannot characterize the consistency of monotonic...

17.
In this paper, a new classification method (SDCC) for high-dimensional text data with multiple classes is proposed. In this method, a subspace decision cluster classification (SDCC) model consists of a set of disjoint subspace decision clusters, each labeled with a dominant class that determines the class of new objects falling in the cluster. A cluster tree is first generated from a training data set by recursively calling the subspace clustering algorithm Entropy Weighting k-Means; the SDCC model is then extracted from the subspace decision cluster tree. Various tests, including the Anderson–Darling test, are used to determine the stopping condition of the tree growing. A series of experiments on real text data sets has been conducted. The results show that the new classification method outperforms existing methods such as decision trees and SVM. SDCC is particularly suitable for large, high-dimensional sparse text data with many classes.

18.
PCBoost: a new learning algorithm for imbalanced data
Imbalanced data are widespread in the real world, and their classification is a hot topic in machine learning research. Most traditional classification algorithms assume balanced class distributions or equal misclassification costs, and they perform unsatisfactorily on imbalanced data. This paper proposes an imbalanced data classification algorithm, PCBoost. The algorithm builds decision trees with the information gain ratio as the splitting criterion and uses them as weak classifiers. At the start of each iteration, synthetic minority-class examples are added by a data synthesis method to balance the training information; after each sub-classifier is formed, the "perturbation" is corrected by deleting synthetic examples that were not correctly classified. The paper discusses the data synthesis method, gives a theoretical analysis of the training error bound, and analyzes the choice of the ensemble learning parameters. Experimental results show that PCBoost has an advantage in handling imbalanced data classification.
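An illustrative simplification of the loop described above: each round adds synthetic minority examples (SMOTE is used here as a stand-in for the paper's data synthesis method), fits a decision tree weak learner, and then deletes synthetic examples the new tree misclassifies. The sample weighting and error-bound machinery of PCBoost are not reproduced.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

def pcboost_like(X, y, rounds=5):
    trees = []
    X_syn = np.empty((0, X.shape[1]))
    y_syn = np.empty(0, dtype=y.dtype)
    for _ in range(rounds):
        # add fresh synthetic minority examples for this round
        X_all, y_all = np.vstack([X, X_syn]), np.concatenate([y, y_syn])
        X_bal, y_bal = SMOTE().fit_resample(X_all, y_all)
        X_syn = np.vstack([X_syn, X_bal[len(X_all):]])
        y_syn = np.concatenate([y_syn, y_bal[len(y_all):]])

        tree = DecisionTreeClassifier(criterion="entropy", max_depth=3).fit(
            np.vstack([X, X_syn]), np.concatenate([y, y_syn]))
        trees.append(tree)

        # correct the "perturbation": drop synthetic examples the tree got wrong
        if len(X_syn):
            keep = tree.predict(X_syn) == y_syn
            X_syn, y_syn = X_syn[keep], y_syn[keep]
    return trees

X, y = make_classification(n_samples=400, weights=[0.9, 0.1], random_state=0)
ensemble = pcboost_like(X, y)
votes = np.mean([t.predict(X) for t in ensemble], axis=0)
print((votes.round() == y).mean())   # training accuracy of the majority vote
```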
