Similar Documents
20 similar documents found.
1.
This paper focuses on improving decision tree induction algorithms when a tie appears during the rule generation procedure for specific training data sets. The tie occurs when the records at a leaf node contain equal proportions of the target class, so that majority voting cannot be applied. To resolve this exception, we propose basing the prediction on a naive Bayes (NB) estimate, k-nearest neighbour (k-NN), or association rule mining (ARM). The other features used for splitting the parent nodes are also taken into consideration.
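A minimal Python sketch of the tie-breaking idea described in this abstract: when the records reaching a leaf have equal class proportions, majority voting is undefined, so the prediction falls back to a naive Bayes estimate fitted on the leaf's records. The function name `predict_at_leaf` and the use of scikit-learn's `GaussianNB` are illustrative assumptions; the paper also considers k-NN and ARM fallbacks.

```python
from collections import Counter

import numpy as np
from sklearn.naive_bayes import GaussianNB


def predict_at_leaf(X_leaf, y_leaf, x_query):
    """Majority vote at a leaf; fall back to naive Bayes on a tie.

    X_leaf, y_leaf: the records and labels that reached this leaf.
    """
    counts = Counter(y_leaf).most_common()
    # A tie: two or more classes share the highest count, so majority
    # voting cannot be applied (the case this abstract addresses).
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        nb = GaussianNB().fit(X_leaf, y_leaf)  # hypothetical NB fallback
        return nb.predict(np.asarray(x_query).reshape(1, -1))[0]
    return counts[0][0]
```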

2.
We propose a method for hierarchical clustering based on the decision tree approach. As in the case of supervised decision tree, the unsupervised decision tree is interpretable in terms of rules, i.e., each leaf node represents a cluster, and the path from the root node to a leaf node represents a rule. The branching decision at each node of the tree is made based on the clustering tendency of the data available at the node. We present four different measures for selecting the most appropriate attribute to be used for splitting the data at every branching node (or decision node), and two different algorithms for splitting the data at each decision node. We provide a theoretical basis for the approach and demonstrate the capability of the unsupervised decision tree for segmenting various data sets. We also compare the performance of the unsupervised decision tree with that of the supervised one.

3.
Most decision-tree induction algorithms use a local greedy strategy, where a leaf is always split on the best attribute according to a given attribute-selection criterion. A more accurate model could possibly be found by looking ahead for alternative subtrees. However, some researchers argue that look-ahead should not be used due to a negative effect (called "decision-tree pathology") on decision-tree accuracy. This paper presents a new look-ahead heuristic for decision-tree induction. The proposed method is called look-ahead J48 (LA-J48), as it is based on J48, the Weka implementation of the popular C4.5 algorithm. At each tree node, the LA-J48 algorithm applies a look-ahead procedure of bounded depth only to attributes that are not statistically distinguishable from the best attribute chosen by the greedy approach of C4.5. A bootstrap process is used for estimating the standard deviation of splitting criteria with unknown probability distribution. Based on a separate validation set, the attribute producing the most accurate subtree is chosen for the next step of the algorithm. In experiments on 20 benchmark data sets, the proposed look-ahead method outperforms the greedy J48 algorithm with the gain ratio and the Gini index splitting criteria, thus avoiding the look-ahead pathology of decision-tree induction.
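A sketch of the bootstrap step mentioned above, under stated assumptions: `criterion` is any user-supplied splitting criterion (e.g. gain ratio) evaluated on a data sample for one attribute, and `X`, `y` are NumPy arrays. LA-J48 would use such a standard-deviation estimate to decide which attributes are statistically indistinguishable from the greedy best choice; the function and parameter names are illustrative.

```python
import numpy as np


def bootstrap_criterion_std(X, y, criterion, attr, n_boot=100, seed=None):
    """Bootstrap estimate of the standard deviation of a splitting
    criterion whose probability distribution is unknown."""
    rng = np.random.default_rng(seed)
    n = len(y)
    scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample with replacement
        scores.append(criterion(X[idx], y[idx], attr))
    return float(np.std(scores, ddof=1))
```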

4.
Decision trees are an important method for inductive learning and data mining, used mainly for classification and prediction. This paper introduces the concept of a generalized decision tree, unifying classification rule sets with the decision tree structure, and proposes a novel method for constructing decision trees with a DNA-encoded genetic algorithm. The C4.5 algorithm is first applied to the data set to obtain an initial rule set; the proposed algorithm then optimizes this rule set and builds the decision tree from it. Experiments show that the method effectively avoids the drawbacks of traditional decision tree construction and exhibits good parallelism.

5.
This paper proposes a method to construct a fuzzy rule-based classifier system from an ID3-type decision tree (DT) for real data. The three major steps are rule extraction, gradient descent tuning of the rule base, and performance-based pruning of the rule base. Pruning removes all rules that cannot meet a certain level of performance. To test our scheme, we have used the DT generated by RID3, an ID3-type classifier for real data. In this process, we made some improvements to RID3 to obtain a tree with less redundancy and hence a smaller rule base. The rule base is tested on several data sets and is found to demonstrate excellent performance. Results obtained by the proposed scheme are consistently better than C4.5 across several data sets.
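A minimal sketch of the performance-based pruning step, assuming a hypothetical rule interface with `covers(x) -> bool` and a `label` attribute; the threshold value and the paper's exact performance measure are assumptions, not taken from the abstract.

```python
def prune_rules(rules, X, y, min_accuracy=0.6):
    """Keep only rules whose accuracy on the samples they cover meets a
    threshold; rules covering no samples are dropped as well."""
    kept = []
    for rule in rules:
        covered = [(xi, yi) for xi, yi in zip(X, y) if rule.covers(xi)]
        if not covered:
            continue
        accuracy = sum(yi == rule.label for _, yi in covered) / len(covered)
        if accuracy >= min_accuracy:  # performance level is an assumption
            kept.append(rule)
    return kept
```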

6.
Learning in the real world is interactive, incremental, and dynamic in multiple dimensions: new data can appear at any time, from anywhere, and of any type. Incremental learning is therefore increasingly important in real-world data mining scenarios, and decision trees, due to their characteristics, have been widely used for it. In this paper, we propose a novel incremental decision tree algorithm based on rough set theory. To improve computational efficiency, when a new instance arrives the algorithm, following the given decision-tree adaptation strategies, only modifies an existing leaf node in the currently active decision tree or adds a new leaf node, avoiding the high time complexity of traditional incremental methods that rebuild decision trees many times. Moreover, rough-set-based attribute reduction is used to filter redundant attributes from the original attribute set, and two basic rough set notions, the significance of attributes and the dependency of attributes, serve as the heuristic information for selecting splitting attributes. Finally, we apply the proposed algorithm to intrusion detection. The experimental results demonstrate that our algorithm provides competitive solutions for incremental learning.
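The dependency of attributes used as heuristic information above is a standard rough set notion. A minimal sketch, assuming records are dicts mapping attribute names to values (the data representation is an assumption):

```python
from collections import defaultdict


def dependency(records, condition_attrs, decision_attr):
    """Rough-set dependency gamma_C(D) = |POS_C(D)| / |U|: the fraction
    of records whose equivalence class under the condition attributes C
    is consistent, i.e. maps to a single decision value."""
    sizes = defaultdict(int)       # size of each equivalence class
    decisions = defaultdict(set)   # decision values seen in each class
    for rec in records:
        key = tuple(rec[a] for a in condition_attrs)
        sizes[key] += 1
        decisions[key].add(rec[decision_attr])
    positive = sum(n for key, n in sizes.items() if len(decisions[key]) == 1)
    return positive / len(records)
```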

7.
A Two-Stage Decision Tree Construction Method and Its Application
This paper proposes a novel two-stage decision tree construction method: after a coarse classification of the data set, a genetic algorithm searches for rule sets that form the leaf nodes of the tree. The method can evaluate multiple attributes simultaneously and avoids the pruning stage of decision tree construction.

8.
In this paper, we present a new algorithm for learning oblique decision trees. Most of the current decision tree algorithms rely on impurity measures to assess the goodness of hyperplanes at each node while learning a decision tree in top-down fashion. These impurity measures do not properly capture the geometric structures in the data. Motivated by this, our algorithm uses a strategy for assessing the hyperplanes in such a way that the geometric structure in the data is taken into account. At each node of the decision tree, we find the clustering hyperplanes for both classes and use their angle bisectors as the split rule at that node. We show through empirical studies that this idea leads to small decision trees and better performance. We also present some analysis to show that the angle bisectors of clustering hyperplanes that we use as the split rules at each node are solutions of an interesting optimization problem and hence argue that this is a principled method of learning a decision tree.
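The angle bisectors used as split rules have a closed form: points equidistant from the hyperplanes w1·x + b1 = 0 and w2·x + b2 = 0 satisfy (w1·x + b1)/||w1|| = ±(w2·x + b2)/||w2||. A minimal sketch of just this step (the clustering procedure that produces the two hyperplanes is omitted, and the function name is illustrative):

```python
import numpy as np


def angle_bisectors(w1, b1, w2, b2):
    """Return the two angle bisectors of the hyperplanes
    w1.x + b1 = 0 and w2.x + b2 = 0 as (w, b) pairs."""
    n1, n2 = np.linalg.norm(w1), np.linalg.norm(w2)
    u1, c1 = np.asarray(w1) / n1, b1 / n1  # normalize each hyperplane
    u2, c2 = np.asarray(w2) / n2, b2 / n2
    return (u1 - u2, c1 - c2), (u1 + u2, c1 + c2)
```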

9.
This paper presents a new architecture of a fuzzy decision tree based on fuzzy rules, the fuzzy rule-based decision tree (FRDT), and provides a learning algorithm. In contrast with "traditional" axis-parallel decision trees, in which only a single feature (variable) is taken into account at each node, each node of the proposed tree involves a fuzzy rule over multiple features. Fuzzy rules are employed to produce leaves of high purity, and using multiple features per node helps minimize the size of the trees. The FRDT grows by expanding an additional node composed of a mixture of data coming from different classes, which is the only non-leaf node of each layer. This gives rise to a new geometric structure endowed with linguistic terms, quite different from "traditional" oblique decision trees endowed with hyperplanes as decision functions. A series of numeric studies is reported using data from UCI machine learning data sets. The comparison is carried out against "traditional" decision trees such as C4.5, LADtree, BFTree, SimpleCart, and NBTree. The results of statistical tests show that the proposed FRDT exhibits the best performance in terms of both accuracy and the size of the produced trees.

10.
Classifiability-based omnivariate decision trees
Top-down induction of decision trees is a simple and powerful method of pattern classification. In a decision tree, each node partitions the available patterns into two or more sets. New nodes are created to handle each of the resulting partitions and the process continues. A node is considered terminal if it satisfies some stopping criteria (for example, purity, i.e., all patterns at the node are from a single class). Decision trees may be univariate, linear multivariate, or nonlinear multivariate depending on whether a single attribute, a linear function of all the attributes, or a nonlinear function of all the attributes is used for the partitioning at each node of the decision tree. Though nonlinear multivariate decision trees are the most powerful, they are more susceptible to the risks of overfitting. In this paper, we propose to perform model selection at each decision node to build omnivariate decision trees. The model selection is done using a novel classifiability measure that captures the possible sources of misclassification with relative ease and is able to accurately reflect the complexity of the subproblem at each node. The proposed approach is fast and does not suffer from as high a computational burden as that incurred by typical model selection algorithms. Empirical results over 26 data sets indicate that our approach is faster and achieves better classification accuracy compared to statistical model selection algorithms.

11.
Hybrid decision tree

12.
An Automatic Video Classification Method Integrating Data Mining
To address the low prediction accuracy of automatic video classification, this paper proposes an automatic video classification method that integrates data mining techniques. Videos are first segmented to build a database of video attributes; decision tree and classification association rule techniques are then applied to mine this database, extracting a decision-tree classification rule set and a classification association rule set; finally, a rule-set merging and pruning algorithm combines the two rule sets into a final video classification rule set with higher accuracy. Experiments confirm that decision-tree classification rules and classification association rules make consistent predictions, and that the merged rule set predicts video categories more accurately than either rule set used alone.
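The merging and pruning algorithm itself is not given in the abstract; a hypothetical sketch of one simple merging policy, assuming rules are (antecedent, label, confidence) triples (both the policy and the rule representation are assumptions):

```python
def merge_rule_sets(tree_rules, assoc_rules):
    """Union of two rule sets; when two rules share an antecedent,
    keep the one with the higher confidence. This policy is an
    illustrative assumption, not the paper's algorithm."""
    best = {}
    for antecedent, label, confidence in tree_rules + assoc_rules:
        key = frozenset(antecedent)
        if key not in best or confidence > best[key][2]:
            best[key] = (antecedent, label, confidence)
    return list(best.values())
```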

13.
14.
Decision trees have been widely used in data mining and machine learning as a comprehensible knowledge representation. While ant colony optimization (ACO) algorithms have been successfully applied to extract classification rules, decision tree induction with ACO algorithms remains an almost unexplored research area. In this paper we propose a novel ACO algorithm to induce decision trees, combining commonly used strategies from both traditional decision tree induction algorithms and ACO. The proposed algorithm is compared against three decision tree induction algorithms, namely C4.5, CART and cACDT, on 22 publicly available data sets. The results show that the predictive accuracy of the proposed algorithm is statistically significantly higher than the accuracy of both C4.5 and CART, which are well-known conventional algorithms for decision tree induction, and the accuracy of the ACO-based cACDT decision tree algorithm.
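A sketch of the standard ACO transition rule an ant might use to pick the next splitting attribute; the paper's exact rule, pheromone update, and heuristic function are not reproduced here, so treat the parameter choices as assumptions.

```python
import random


def choose_attribute(attributes, pheromone, heuristic, alpha=1.0, beta=2.0):
    """Pick an attribute with probability proportional to
    pheromone^alpha * heuristic^beta (the standard ACO transition rule).
    pheromone and heuristic are dicts keyed by attribute name."""
    weights = [pheromone[a] ** alpha * heuristic[a] ** beta
               for a in attributes]
    return random.choices(attributes, weights=weights, k=1)[0]
```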

15.
Traditional decision trees find decision boundaries by recursively partitioning the feature space, yielding a "hard" partition of that space. For large data and complex patterns, however, such exact decision boundaries reduce the tree's generalization ability. To let decision tree algorithms acquire imprecise knowledge automatically, fuzzy theory is introduced into the decision tree, neural networks are used as the leaf nodes during tree construction, and an improved fuzzy decision tree algorithm based on neural networks is proposed. In the neural network fuzzy decision tree, classifier learning has two stages: in the first, an uncertainty-reduction heuristic partitions the data until a node's splitting power falls below a truth-degree threshold ε, at which point the growth of the fuzzy tree stops; in the second, neural networks at the leaf nodes perform classification with good generalization ability. Experimental results show that, compared with traditional classification learning algorithms, the proposed algorithm achieves higher accuracy and, for classification problems involving large data and complex patterns, adaptively determines the tree size through its structure.

16.
Data preprocessing is key to improving the accuracy and performance of the mining process. Based on an analysis of decision tree algorithms and the characteristics of attribute values in landslide data, this paper uses clustering to partition continuous attribute values into intervals, yielding a discretization method for the continuous attributes of landslide data. Experiments show that decision trees built with the new method achieve higher classification accuracy and less rule redundancy than the original algorithm.
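A minimal sketch of clustering-based discretization in the spirit of this abstract, using k-means as one plausible clustering choice (the paper's exact algorithm is not specified): cluster the continuous attribute and cut at midpoints between sorted cluster centres.

```python
import numpy as np
from sklearn.cluster import KMeans


def discretize_by_clustering(values, n_intervals=4, seed=0):
    """Map continuous attribute values to interval indices whose
    boundaries are midpoints between sorted k-means centres."""
    v = np.asarray(values, dtype=float).reshape(-1, 1)
    km = KMeans(n_clusters=n_intervals, n_init=10, random_state=seed).fit(v)
    centres = np.sort(km.cluster_centers_.ravel())
    cuts = (centres[:-1] + centres[1:]) / 2.0  # cut points
    return np.digitize(values, cuts)           # interval index per value
```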

17.
Noisy data degrades both the construction efficiency and model quality of multivariate decision trees. Current practice relies mainly on pruning strategies applied to leaf nodes to remove the influence of noise, while noise interference during tree construction itself has received little attention. To change this, the concept of the relative core in basic rough set (RS) theory is extended to variable precision rough set (VPRS) theory and used for initial variable selection in the decision tree; the concept of the relative generalization of two equivalence relations is extended to relative generalization under majority inclusion and used for initial attribute testing; on this basis, a multivariate decision tree construction algorithm that effectively eliminates the interference of noisy data is presented. Finally, examples verify the effectiveness of the algorithm.
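The majority-inclusion idea at the heart of VPRS can be sketched as a β-positive region: an equivalence class counts as positive when at least a fraction β of its records agree on the decision, which tolerates a bounded level of noise. A minimal sketch, assuming records are dicts (the data representation and β default are assumptions):

```python
from collections import Counter, defaultdict


def beta_positive_region(records, condition_attrs, decision_attr, beta=0.8):
    """Fraction of records lying in equivalence classes whose majority
    decision value reaches the inclusion threshold beta."""
    classes = defaultdict(list)
    for rec in records:
        key = tuple(rec[a] for a in condition_attrs)
        classes[key].append(rec[decision_attr])
    positive = 0
    for decisions in classes.values():
        majority = Counter(decisions).most_common(1)[0][1]
        if majority / len(decisions) >= beta:  # majority inclusion
            positive += len(decisions)
    return positive / len(records)
```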

18.
A Decision Forest Learning Method Based on Relational Data Analysis
Multiple classifier ensembles have drawn increasing attention in pattern recognition and become a research hotspot. This paper proposes a multiple sub-model ensemble method based on decision forest construction: by assigning decision rules to each sample, a decision forest rather than a single decision tree is built to automatically determine relatively independent sample subsets, and the sub-models are then combined under a conditional independence assumption. The whole learning process requires no human intervention and adaptively determines the number of decision trees and the structure of each subtree, exploiting each classifier's strengths on different samples and regions. Experimental results and case analysis on UCI machine learning data sets verify the effectiveness of the method.

19.
20.
Classification is an important problem in data mining, and many popular classifiers build decision trees to produce class models. This paper presents the data mining idea of constructing a decision tree by comparing information gain or entropy, gives a method for constructing decision trees with rough set theory, and illustrates the tree generation process with an example from surface modeling. Compared with the ID3 method, the approach reduces the complexity of the decision tree, optimizes its structure, and mines better rule information.
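The information gain comparison that drives this kind of construction is the standard ID3 criterion. A minimal sketch, assuming records are dicts mapping attribute names to values:

```python
import math
from collections import Counter


def entropy(labels):
    """Shannon entropy H(S) = -sum p_i * log2(p_i) over class proportions."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())


def information_gain(records, attr, decision_attr):
    """Gain(S, A) = H(S) - sum_v (|S_v|/|S|) * H(S_v), the criterion ID3
    uses to pick the splitting attribute at each node."""
    labels = [rec[decision_attr] for rec in records]
    gain = entropy(labels)
    for value in {rec[attr] for rec in records}:
        subset = [rec[decision_attr] for rec in records if rec[attr] == value]
        gain -= (len(subset) / len(records)) * entropy(subset)
    return gain
```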
