Similar Literature
A total of 20 similar documents were found (search time: 15 ms).
1.
A Comparative Study of Fuzzy and Crisp Decision Tree Algorithms   Cited by: 10 (self-citations: 2, citations by others: 10)
ID3 is a typical decision tree induction algorithm. Assuming that the attribute values and class labels of the examples are crisp (certain), it uses information entropy as a heuristic to build a crisp decision tree. To cope with the uncertainty of the real world, another decision tree induction algorithm, the fuzzy decision tree algorithm, has been proposed; it is a generalization of the crisp decision tree algorithm. Each of the two algorithms has its own strengths and weaknesses in practice, and there is as yet no clear criterion for choosing between them in the knowledge-acquisition process for a concrete problem. This paper compares the two algorithms in detail from five aspects and points out their similarities, differences, advantages, and disadvantages when attributes take continuous values, with the aim of providing useful guidance on how to choose between them when solving a specific problem.
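For readers unfamiliar with the heuristic mentioned above, the following is a minimal Python sketch (not taken from the cited paper) of how ID3 scores a crisp, discrete-valued attribute by information gain; the toy "outlook"/"play" arrays are illustrative assumptions.

    import numpy as np

    def entropy(labels):
        """Shannon entropy of a discrete label array."""
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def information_gain(attribute, labels):
        """ID3 heuristic: entropy reduction obtained by splitting on one attribute."""
        values, counts = np.unique(attribute, return_counts=True)
        weighted = sum(
            (c / len(labels)) * entropy(labels[attribute == v])
            for v, c in zip(values, counts)
        )
        return entropy(labels) - weighted

    # Toy data: one discrete attribute against a binary class.
    outlook = np.array(["sunny", "sunny", "overcast", "rain", "rain", "overcast"])
    play    = np.array(["no",    "no",    "yes",      "yes",  "no",   "yes"])
    print(information_gain(outlook, play))   # ID3 expands the attribute maximizing this value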

2.
Decision Tree Algorithms and Their Application to Data Mining of Breast Disease Images   Cited by: 5 (self-citations: 1, citations by others: 5)
This paper introduces the basic principle by which the ID3 algorithm builds a decision tree, with emphasis on tree pruning and two typical pruning algorithms: reduced-error pruning and minimal cost-complexity pruning. The described decision tree and pruning algorithms are then applied to data mining of breast disease images, yielding rules of practical reference value and high classification accuracy, which demonstrates the broad application prospects of decision tree algorithms in medical image data mining.
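For reference, a small Python sketch of the minimal cost-complexity pruning idea mentioned above, using scikit-learn's built-in pruning path; the breast-cancer data set here is a generic stand-in, not the mammography image features used in the paper, and the held-out split is used only as a proxy for a validation set.

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Candidate alpha values along the minimal cost-complexity pruning path.
    path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_tr, y_tr)

    best = None
    for alpha in path.ccp_alphas:
        clf = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X_tr, y_tr)
        acc = clf.score(X_te, y_te)          # proxy for validation accuracy
        if best is None or acc > best[0]:
            best = (acc, alpha, clf.tree_.node_count)

    print("best accuracy %.3f at alpha=%.5f with %d nodes" % best)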

3.
This paper presents a novel host-based combinatorial method based on k-Means clustering and ID3 decision tree learning algorithms for unsupervised classification of anomalous and normal activities in computer network ARP traffic. The k-Means clustering method is first applied to the normal training instances to partition them into k clusters using Euclidean distance similarity. An ID3 decision tree is constructed on each cluster. Anomaly scores from the k-Means clustering algorithm and decisions of the ID3 decision trees are extracted. A special algorithm is used to combine the results of the two algorithms and obtain final anomaly score values. A threshold rule is applied for deciding whether a test instance is normal. Experiments are performed on captured network ARP traffic. Some anomaly criteria have been defined and applied to the captured ARP traffic to generate normal training instances. Performance of the proposed approach is evaluated using five defined measures and empirically compared with the performance of the individual k-Means clustering and ID3 decision tree classification algorithms, and with other approaches based on Markov chains and stochastic learning automata. Experimental results show that the proposed approach has specificity and positive predictive value as high as 96 and 98%, respectively.
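A heavily simplified Python sketch of the combination idea (a k-Means distance score plus the vote of a per-cluster entropy-based tree) follows. The synthetic data, the per-cluster labelling scheme, the equal weighting, and the 0.6 threshold are all illustrative assumptions; the paper's actual combination algorithm and ARP-traffic features are not reproduced here.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    normal = rng.normal(0, 1, size=(300, 4))            # stand-in for normal traffic features
    test = np.vstack([rng.normal(0, 1, size=(20, 4)),   # normal-looking test instances
                      rng.normal(5, 1, size=(5, 4))])   # outlying test instances

    k = 3
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(normal)

    # One entropy-based (ID3-like) tree per cluster.  As a stand-in for the paper's
    # labelling scheme, each tree learns "this cluster" vs. "the rest of the normal data".
    trees = []
    for c in range(k):
        trees.append(DecisionTreeClassifier(criterion="entropy", random_state=0)
                     .fit(normal, (km.labels_ == c).astype(int)))

    # Combine a distance-based k-Means score with the vote of the nearest cluster's tree.
    dist = km.transform(test).min(axis=1)
    dist_score = dist / dist.max()                       # normalised distance to nearest centroid
    nearest = km.predict(test)
    tree_score = np.array([1.0 - trees[c].predict_proba(x.reshape(1, -1))[0, 1]
                           for c, x in zip(nearest, test)])
    anomaly_score = 0.5 * dist_score + 0.5 * tree_score  # illustrative equal weighting

    print(np.where(anomaly_score > 0.6)[0])              # indices flagged by a threshold rule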

4.
Fully polarimetric synthetic aperture radar (PolSAR) Earth observations have shown great potential for mapping and monitoring agro-environmental systems. Numerous polarimetric features can be extracted from these complex observations, which may improve the accuracy of land-cover classification and object characterization. This article employed two well-known decision tree ensembles, i.e. bagged tree (BT) and random forest (RF), for land-cover mapping from PolSAR imagery. Moreover, two fast modified decision tree ensembles were proposed, namely balanced filter-based forest (BFF) and cost-sensitive filter-based forest (CFF). These algorithms, designed based on the idea of RF, use fast filter-based feature selection algorithms and two extended majority-voting schemes. They are also able to embed solutions to the imbalanced-data problem into their structures. Three different PolSAR datasets with imbalanced data were used for evaluating the efficiency of the proposed algorithms. The results indicated that all the tree ensembles have higher efficiency and reliability than an individual decision tree (DT). Moreover, both proposed tree ensembles obtained higher mean overall accuracy (0.5-14% higher), producer's accuracy (0.5-10% higher), and user's accuracy (0.5-9% higher) than the classical tree ensembles, i.e. BT and RF. They were also much faster (e.g. 2-10 times) and more stable than their competitors for classification of these three datasets. In addition, unlike BT and RF, which obtained higher accuracy in large ensembles (i.e. a high number of DTs), BFF and CFF can also be efficient and reliable in smaller ensembles. Furthermore, the extended majority-voting techniques could outperform classical majority voting for decision fusion.

5.
Decision tree algorithms are important classification algorithms in data mining. Many algorithms for building decision trees already exist, among which ID3 is the core algorithm. This paper first studies and analyzes the ID3 algorithm and, to address the drawback that computing the information entropy of attributes is very complex, proposes a new heuristic algorithm, SID3, based on the sensitivity of attributes to classification. Finally, the two algorithms are compared on examples; the results show that SID3 generates correct decision trees while making the tree-building process simpler and faster.

6.
Decision trees have been widely used in data mining and machine learning as a comprehensible knowledge representation. While ant colony optimization (ACO) algorithms have been successfully applied to extract classification rules, decision tree induction with ACO algorithms remains an almost unexplored research area. In this paper we propose a novel ACO algorithm to induce decision trees, combining commonly used strategies from both traditional decision tree induction algorithms and ACO. The proposed algorithm is compared against three decision tree induction algorithms, namely C4.5, CART and cACDT, on 22 publicly available data sets. The results show that the predictive accuracy of the proposed algorithm is statistically significantly higher than the accuracy of both C4.5 and CART, which are well-known conventional algorithms for decision tree induction, and the accuracy of the ACO-based cACDT decision tree algorithm.

7.
This paper focuses on improving decision tree induction algorithms when a kind of tie appears during the rule generation procedure for specific training datasets. The tie occurs when there are equal proportions of the target class outcomes in the leaf node's records, which leads to a situation where majority voting cannot be applied. To resolve this exception, we propose to base the prediction on a naive Bayes (NB) estimate, k-nearest neighbours (k-NN), or association rule mining (ARM). The other features used for splitting the parent nodes are also taken into consideration.
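A minimal Python sketch of the described tie-breaking idea, using only the naive Bayes fallback (the k-NN and ARM variants are analogous); the shallow tree and synthetic data are assumptions made for illustration.

    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

    # A deliberately shallow tree, so some leaves may hold a mixed, possibly tied, class distribution.
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    train_leaf = tree.apply(X)                      # leaf index reached by each training instance

    def predict_with_nb_tiebreak(x):
        """Majority vote at the leaf; fall back to naive Bayes when the vote is tied."""
        leaf = tree.apply(x.reshape(1, -1))[0]
        idx = train_leaf == leaf
        counts = np.bincount(y[idx], minlength=2)
        if counts[0] != counts[1]:
            return int(counts.argmax())
        # Tie: estimate the class with an NB model fitted on the instances in this leaf.
        return int(GaussianNB().fit(X[idx], y[idx]).predict(x.reshape(1, -1))[0])

    print(predict_with_nb_tiebreak(np.array([0.2, -1.0, 0.5])))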

8.
The decision tree algorithm is one of the classic classification mining algorithms and has broad practical value. The classic ID3 decision tree algorithm is memory-resident, can only handle small data sets, and is powerless in the face of massive data. To address this, the parallelizability of the classic ID3 tree-building algorithm is analyzed in depth, and a parallel ID3 classification algorithm for massive data is proposed and implemented using the MapReduce programming model of cloud computing. Experimental results show that the algorithm is effective and feasible.
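The core of such a parallelization is that mappers can count (attribute, value, class) triples on separate blocks of the data and a reducer can merge the partial counts, after which entropy and information gain are computed from the global counts. Below is a pure-Python simulation of that map/reduce step on assumed toy records; it is not the paper's MapReduce implementation.

    from collections import Counter
    from functools import reduce

    # Each "split" stands in for one block of a huge training file;
    # records are (attribute_dict, class_label) pairs with illustrative values.
    splits = [
        [({"outlook": "sunny", "windy": "no"}, "no"),
         ({"outlook": "rain",  "windy": "yes"}, "no")],
        [({"outlook": "overcast", "windy": "no"}, "yes"),
         ({"outlook": "sunny",    "windy": "yes"}, "yes")],
    ]

    def mapper(records):
        """Emit counts of (attribute, value, class) triples for one data block."""
        out = Counter()
        for attrs, label in records:
            for attr, value in attrs.items():
                out[(attr, value, label)] += 1
        return out

    def reducer(c1, c2):
        """Merge partial counts produced by two mappers."""
        return c1 + c2

    counts = reduce(reducer, map(mapper, splits))
    print(counts[("outlook", "sunny", "no")])   # global counts feed the entropy/gain formulas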

9.
Decision tree algorithms in machine learning provide an important data classification capability, but the classification performance of the information-gain-based ID3 algorithm and the Gini-index-based CART algorithm still leaves room for improvement. An adaptive ensemble measure of information gain and the Gini index is constructed, and an effective decision tree algorithm is designed to improve the performance of the two basic algorithms, ID3 and CART. The heterogeneity and independence between the information-theoretic representation of information gain and the algebraic representation of the Gini index are analyzed, and a knowledge-based weighted linear combination is adopted to construct the information...
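A minimal Python sketch of a weighted linear combination of information gain and Gini reduction for a discrete attribute; the fixed weight alpha=0.5 is an assumption, whereas the paper derives an adaptive, knowledge-based weighting that is not reproduced here.

    import numpy as np

    def entropy(y):
        p = np.bincount(y) / len(y)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def gini(y):
        p = np.bincount(y) / len(y)
        return 1.0 - np.sum(p ** 2)

    def combined_gain(x, y, alpha=0.5):
        """alpha * information gain + (1 - alpha) * Gini reduction for a discrete attribute x."""
        def split_impurity(measure):
            return sum((np.sum(x == v) / len(y)) * measure(y[x == v]) for v in np.unique(x))
        info_gain = entropy(y) - split_impurity(entropy)
        gini_gain = gini(y) - split_impurity(gini)
        return alpha * info_gain + (1 - alpha) * gini_gain

    x = np.array([0, 0, 1, 1, 2, 2])   # toy discrete attribute
    y = np.array([0, 0, 1, 1, 1, 0])   # toy class labels
    print(combined_gain(x, y, alpha=0.5))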

10.
A New Decision Tree Induction Algorithm Based on Attribute-Value Pairs   Cited by: 6 (self-citations: 1, citations by others: 5)
The decision tree induction algorithm ID3 is a representative method in learning from examples. To address ID3's tendency to favor attributes with many values, this paper proposes a new decision tree induction algorithm based on attribute-value pairs, AVPI, which produces decision trees that are smaller and faster to test than those of ID3. The algorithm has been applied to a color matching system with good results.

11.
Knowledge inference systems are built to identify hidden and logical patterns in huge data sets. Decision trees play a vital role in knowledge discovery, but crisp decision tree algorithms suffer from sharp decision boundaries, which may not suit all knowledge inference systems. A fuzzy decision tree algorithm overcomes this drawback. Fuzzy decision trees are implemented through fuzzification of the decision boundaries without disturbing the attribute values. Data reduction also plays a crucial role in many classification problems. This article presents an approach that combines principal component analysis (PCA) with a modified Gini-index-based fuzzy SLIQ decision tree algorithm. PCA is used for dimensionality reduction, and the modified Gini-index fuzzy SLIQ decision tree algorithm is used to construct decision rules. Finally, the method is validated through simulation experiments in MATLAB on the PID data set.
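As a rough illustration of the overall pipeline (dimensionality reduction followed by a Gini-based tree), here is a scikit-learn sketch; it uses a crisp Gini tree and a synthetic stand-in for the PID (Pima Indians Diabetes) data, and does not reproduce the paper's fuzzy SLIQ algorithm or its modified Gini index.

    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for the 8-feature PID data set.
    X, y = make_classification(n_samples=768, n_features=8, n_informative=5, random_state=0)

    pipe = make_pipeline(
        PCA(n_components=5),                                        # dimensionality reduction
        DecisionTreeClassifier(criterion="gini", max_depth=5, random_state=0),
    )
    print(cross_val_score(pipe, X, y, cv=5).mean())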

12.
This paper deals with some improvements to rule induction algorithms in order to resolve ties that appear in special cases during the rule generation procedure for specific training data sets. These improvements are demonstrated by experimental results on various data sets. The tie occurs in a decision tree induction algorithm when the class prediction at a leaf node cannot be determined by majority voting. When there is a conflict at a leaf node, we need to find the source of the problem and a solution to it. In this paper, we propose to calculate an influence factor for each attribute, and an update procedure for the decision tree is suggested to deal with the problem and provide subsequent rectification steps.

13.
Zhang Hongpo, Cheng Ning, Zhang Yang, Li Zhanbo. Applied Intelligence, 2021, 51(7): 4503-4514

A label flipping attack is a poisoning attack that flips the labels of training samples to reduce the classification performance of the model. Robustness is used to measure how well machine learning algorithms hold up under adversarial attack. The naive Bayes (NB) algorithm is a noise-tolerant and robust machine learning technique. It shows good robustness when dealing with tasks such as document classification and spam filtering. Here we propose two novel label flipping attacks to evaluate the robustness of NB under label noise. For the three datasets Spambase, TREC 2006c and TREC 2007 in the spam classification domain, our attack goal is to increase the false negative rate of NB under the influence of label noise without affecting normal mail classification. Our evaluation shows that at a noise level of 20%, the false negative rate on Spambase and TREC 2006c increases by about 20%, and the test error on the TREC 2007 dataset increases to nearly 30%. We compared the classification accuracy of five classic machine learning algorithms (random forest (RF), support vector machine (SVM), decision tree (DT), logistic regression (LR), and NB) and two deep learning models (AlexNet, LeNet) under the proposed label flipping attacks. The experimental results show that both types of label noise apply to a variety of classification models and effectively reduce their accuracy.
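A minimal Python sketch of the general label-flipping idea: a fraction of spam labels is flipped to ham before training a naive Bayes filter, and the rise in false negative rate is measured. The synthetic features, the random flipping rule, and the Gaussian NB model are assumptions; they do not reproduce the paper's two attack constructions or the Spambase/TREC corpora.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (500, 10)),   # "ham" feature vectors
                   rng.normal(1, 1, (500, 10))])  # "spam" feature vectors
    y = np.array([0] * 500 + [1] * 500)           # 1 = spam, 0 = ham
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    def flip_spam_labels(y_train, noise_level, rng):
        """Flip a fraction of spam labels to ham (a simple stand-in for the paper's attacks)."""
        y_noisy = y_train.copy()
        spam_idx = np.flatnonzero(y_train == 1)
        n_flip = int(noise_level * len(spam_idx))
        y_noisy[rng.choice(spam_idx, size=n_flip, replace=False)] = 0
        return y_noisy

    def false_negative_rate(y_true, y_pred):
        spam = y_true == 1
        return float(np.mean(y_pred[spam] == 0))

    for noise in (0.0, 0.2):
        nb = GaussianNB().fit(X_tr, flip_spam_labels(y_tr, noise, rng))
        print("noise %.1f -> false negative rate %.3f"
              % (noise, false_negative_rate(y_te, nb.predict(X_te))))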


14.
Ying, Dengsheng, Guojun. Pattern Recognition, 2008, 41(8): 2554-2570
Semantic-based image retrieval has attracted great interest in recent years. This paper proposes a region-based image retrieval system with high-level semantic learning. The key features of the system are: (1) it supports both query by keyword and query by region of interest. The system segments an image into different regions and extracts low-level features of each region. From these features, high-level concepts are obtained using a proposed decision-tree-based learning algorithm named DT-ST. During retrieval, a set of images whose semantic concepts match the query is returned. Experiments on a standard real-world image database confirm that the proposed system significantly improves retrieval performance compared with a conventional content-based image retrieval system. (2) The proposed decision tree induction method DT-ST for image semantic learning differs from other decision tree induction algorithms in that it makes use of semantic templates to discretize continuous-valued region features, thereby avoiding the difficult image feature discretization problem. Furthermore, it introduces a hybrid tree simplification method to handle the noise and tree fragmentation problems, improving the classification performance of the tree. Experimental results indicate that DT-ST outperforms two well-established decision tree induction algorithms, ID3 and C4.5, in image semantic learning.

15.
A New Decision Tree Induction Learning Algorithm   Cited by: 79 (self-citations: 1, citations by others: 79)
This paper analyzes and discusses decision tree induction, an important branch of learning from examples. It analyzes the optimization principles of decision tree induction from the viewpoint of optimality in learning from examples, points out the inherent deficiencies of previous induction algorithms represented by ID3, and proposes a new probability-based decision tree induction algorithm, PID. PID still selects expansion attributes by the information gain ratio, but during tree expansion it merges branches through attribute clustering. The decision trees produced by PID are superior to those of ID3 in both tree size and classification accuracy.
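For reference, a minimal Python sketch of the information gain ratio criterion that PID reportedly uses for attribute selection; the attribute-clustering branch merging step is not shown, and the toy arrays are assumptions.

    import numpy as np

    def entropy(y):
        _, c = np.unique(y, return_counts=True)
        p = c / c.sum()
        return -np.sum(p * np.log2(p))

    def gain_ratio(x, y):
        """Information gain normalised by split information (the gain-ratio criterion)."""
        values, counts = np.unique(x, return_counts=True)
        weights = counts / len(y)
        gain = entropy(y) - sum(w * entropy(y[x == v]) for v, w in zip(values, weights))
        split_info = -np.sum(weights * np.log2(weights))
        return gain / split_info if split_info > 0 else 0.0

    x = np.array(["a", "a", "b", "b", "c", "c"])   # toy discrete attribute
    y = np.array([0, 0, 1, 1, 1, 0])               # toy class labels
    print(gain_ratio(x, y))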

16.
Fuzzy decision tree induction is an important method for learning rules from examples with fuzzy representations; extracting rules from data with symbolic attribute values and crisp classes can be regarded as a special case of fuzzy decision tree induction. Since constructing an optimal fuzzy decision tree is NP-hard, research on heuristic algorithms is essential. This paper analyzes and compares two heuristic algorithms, Fuzzy ID3 and Min-Ambiguity, when applied to data with symbolic attribute values and crisp classes. Experiments and theoretical analysis show that, on such databases, Fuzzy ID3 outperforms Min-Ambiguity in training accuracy, test accuracy, and tree size.

17.
The decision tree is one of the most widely used models in classification applications of data mining, but when traditional algorithms such as ID3, C4.5, and CART are applied to mining very large databases, their effectiveness drops sharply and memory overflow may even occur. To address this, this paper proposes an attribute-weighted random decision tree algorithm. Experiments show that the algorithm reduces the consumption of system resources and achieves high classification accuracy on high-dimensional, large data sets, making it well suited to classification in intrusion detection.

18.
In traditional decision (classification) tree algorithms, the label is assumed to be a categorical (class) variable. When the label is a continuous variable in the data, two possible approaches based on existing decision tree algorithms can be used to handle the situations. The first uses a data discretization method in the preprocessing stage to convert the continuous label into a class label defined by a finite set of nonoverlapping intervals and then applies a decision tree algorithm. The second simply applies a regression tree algorithm, using the continuous label directly. These approaches have their own drawbacks. We propose an algorithm that dynamically discretizes the continuous label at each node during the tree induction process. Extensive experiments show that the proposed method outperforms the preprocessing approach, the regression tree approach, and several nontree-based algorithms.
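A minimal Python sketch of the node-local idea: the continuous label is discretized at the current node (here with quantile bins, an assumed choice) and a candidate split is then scored by information gain on the binned label; the proposed algorithm's actual discretization and split-search rules may differ.

    import numpy as np

    def entropy(labels):
        _, c = np.unique(labels, return_counts=True)
        p = c / c.sum()
        return -np.sum(p * np.log2(p))

    def node_split_score(x, y_continuous, threshold, n_bins=3):
        """Score one candidate split: bin the continuous label locally, then use information gain."""
        # Discretize the label at this node only, using quantile bins (illustrative choice).
        edges = np.quantile(y_continuous, np.linspace(0, 1, n_bins + 1)[1:-1])
        y_binned = np.digitize(y_continuous, edges)
        left = x <= threshold
        h_split = (left.mean() * entropy(y_binned[left])
                   + (~left).mean() * entropy(y_binned[~left]))
        return entropy(y_binned) - h_split

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, 200)                  # one continuous feature at the node
    y = 2.0 * x + rng.normal(0, 1, 200)          # continuous label
    print(node_split_score(x, y, threshold=5.0))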

19.
The classic ID3 decision tree algorithm is suitable for classifying discrete data, but handling continuous data requires discretization, which easily causes information loss. A neighborhood equivalence relation is proposed to induce the neighborhood ID3 (NID3) decision tree algorithm. NID3 improves on ID3 in that it can perform prediction on continuous data directly and achieve better classification results. In a neighborhood decision system, a neighborhood equivalence relation is mined; based on neighborhood equivalence granulation, a neighborhood information measure is constructed; and based on neighborhood information gain, the NID3 decision tree algorithm is designed. Both case analysis and data experiments show that NID3 is effective for classification and prediction on continuous data and outperforms ID3 in classification machine learning.

20.
Learning decision tree for ranking   Cited by: 4 (self-citations: 3, citations by others: 1)
The decision tree is one of the most effective and most widely used methods for classification. However, many real-world applications require instances to be ranked by the probability of class membership. The area under the receiver operating characteristic curve (AUC) has recently been used as a measure of the ranking performance of learning algorithms. In this paper, we present two novel class probability estimation algorithms to improve the ranking performance of decision trees. Instead of estimating the probability of class membership by simple voting at the leaf into which the test instance falls, our algorithms use similarity-weighted voting and naive Bayes. We design empirical experiments to verify that our new algorithms significantly outperform the recent decision tree ranking algorithm C4.4 in terms of AUC.
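A minimal Python sketch of the similarity-weighted voting idea (the naive Bayes variant is analogous): the class probability for a test instance is a similarity-weighted average over the training instances in its leaf, and the resulting ranking is compared with plain leaf-frequency voting by AUC. The inverse-distance similarity, tree depth, and synthetic data are assumptions, not the authors' implementation.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=600, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
    train_leaf = tree.apply(X_tr)

    def leaf_similarity_proba(x):
        """P(class 1) from similarity-weighted voting over the training instances in x's leaf."""
        leaf = tree.apply(x.reshape(1, -1))[0]
        idx = train_leaf == leaf
        sim = 1.0 / (1.0 + np.linalg.norm(X_tr[idx] - x, axis=1))   # inverse-distance similarity
        return float(np.sum(sim * y_tr[idx]) / np.sum(sim))

    p_weighted = np.array([leaf_similarity_proba(x) for x in X_te])
    p_plain = tree.predict_proba(X_te)[:, 1]                        # simple leaf-frequency voting

    print("plain voting AUC: %.3f" % roc_auc_score(y_te, p_plain))
    print("similarity-weighted AUC: %.3f" % roc_auc_score(y_te, p_weighted))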
