Similar Documents
20 similar documents found (search took 46 ms).
1.
Feature selection determines relevant features in the data. It is often applied in pattern classification, data mining, and machine learning. A special concern for feature selection nowadays is that databases are normally very large, both vertically and horizontally, and feature sets may keep growing as the data collection process continues. Effective solutions are needed to accommodate these practical demands. This paper concentrates on three issues: a large number of features, large data size, and an expanding feature set. For the first issue, we suggest a probabilistic algorithm to select features. For the second issue, we present a scalable probabilistic algorithm that expedites feature selection further and can scale up without sacrificing the quality of the selected features. For the third issue, we propose an incremental algorithm that adapts to the newly extended feature set and captures 'concept drift' by removing features from among the previously selected and newly added ones. We expect that research on scalable feature selection will extend to distributed and parallel computing and will have an impact on applications of data mining and machine learning.
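As a rough illustration of the probabilistic approach, the sketch below runs a Las-Vegas-style random search over feature subsets, keeping the smallest subset with the best evaluation seen so far. The cross-validated k-NN accuracy is a stand-in evaluation criterion assumed for this sketch; the paper's own algorithm and quality measure may differ.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def probabilistic_feature_selection(X, y, max_tries=200, seed=0):
    """Sample random subsets no larger than the current best; keep the
    smallest subset whose evaluation score is the best seen so far."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    best_subset, best_score = np.arange(n_features), -np.inf
    for _ in range(max_tries):
        size = rng.integers(1, len(best_subset) + 1)
        subset = rng.choice(n_features, size=size, replace=False)
        score = cross_val_score(KNeighborsClassifier(),
                                X[:, subset], y, cv=3).mean()
        if score > best_score or (score == best_score
                                  and size < len(best_subset)):
            best_subset, best_score = subset, score
    return np.sort(best_subset), best_score

X, y = make_classification(n_samples=300, n_features=20,
                           n_informative=5, random_state=0)
subset, score = probabilistic_feature_selection(X, y)
print(subset, round(score, 3))
```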

2.
A new improved forward floating selection (IFFS) algorithm for selecting a subset of features is presented. Our proposed algorithm improves on the state-of-the-art sequential forward floating selection algorithm by adding a search step called "replacing the weak feature", which checks at each sequential step whether removing any feature in the currently selected subset and adding a new one can improve the current subset. Our method provides optimal or quasi-optimal (close-to-optimal) solutions for many selected subsets and requires significantly less computation than optimal feature selection algorithms. Experimental results on four different databases demonstrate that our algorithm consistently selects better subsets than other suboptimal feature selection algorithms, especially when the original number of features in the database is large.
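The added step is compact enough to sketch on its own. Below, `J` is a hypothetical subset-evaluation criterion (e.g., cross-validated accuracy); the function tries every remove-one/add-one swap and returns the best improving variant, which is the step IFFS inserts into the usual floating-selection loop. A minimal sketch, not the authors' implementation:

```python
def replace_weak_feature(subset, candidates, J):
    """Try every (remove one, add one) swap of the current subset;
    return the best improving variant, or the subset unchanged."""
    best_subset, best_score = subset, J(subset)
    for weak in subset:
        rest = [f for f in subset if f != weak]
        for new in candidates:
            if new in subset:
                continue
            trial = rest + [new]
            score = J(trial)
            if score > best_score:
                best_subset, best_score = trial, score
    return best_subset
```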

3.
Given a large set of potential features, it is usually necessary to find a small subset with which to classify. The task of finding an optimal feature set is inherently combinatoric, and therefore suboptimal algorithms are typically used to find feature sets. If feature selection is based directly on classification error, then a feature-selection algorithm must base its decision on error estimates. This paper addresses the impact of error estimation on feature selection using two performance measures: comparison of the true error of the optimal feature set with the true error of the feature set found by a feature-selection algorithm, and the number of features among the truly optimal feature set that appear in the feature set found by the algorithm. The study considers seven error estimators applied to three standard suboptimal feature-selection algorithms and exhaustive search, and it considers three different feature-label model distributions. It draws two conclusions for the cases considered: (1) depending on the sample size and the classification rule, feature-selection algorithms can produce feature sets whose corresponding classifiers possess errors far in excess of that of the classifier corresponding to the optimal feature set; and (2) for small samples, differences in performance among the feature-selection algorithms are less significant than performance differences among the error estimators used to implement the algorithms. Moreover, keeping in mind that results depend on the particular classifier-distribution pair, for the error estimators considered in this study, bootstrap and bolstered resubstitution usually outperform cross-validation, and bolstered resubstitution usually performs as well as or better than bootstrap.

4.
Rough set theory is one of the effective methods for feature selection, and it can preserve the meaning of the features. The essence of the rough set approach to feature selection is to find a subset of the original features. Since finding a minimal subset of the features is an NP-hard problem, it is necessary to investigate effective and efficient heuristic algorithms. Ant colony optimization (ACO) has been successfully applied to many difficult combinatorial problems such as quadratic assignment, traveling salesman, and scheduling. It is particularly attractive for feature selection because no heuristic information is available that can reliably guide the search to the optimal minimal subset every time, yet ants can discover good feature combinations as they traverse the graph. In this paper, we propose a new rough set approach to feature selection based on ACO, which adopts mutual-information-based feature significance as heuristic information, and we give a novel feature selection algorithm. Jensen and Shen proposed an ACO-based feature selection approach that starts from a random feature; our approach starts from the feature core, which reduces the complete graph to a smaller one. To verify the efficiency of our algorithm, experiments are carried out on standard UCI datasets. The results demonstrate that our algorithm provides an efficient solution for finding a minimal subset of the features.
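A simplified node-pheromone sketch of the idea is shown below: mutual information with the class serves as the heuristic desirability of each feature, pheromone accumulates on features that appear in good subsets, and each ant samples a whole subset at once. The paper's algorithm builds paths on a graph starting from the feature core; the fixed subset size and the cross-validated k-NN evaluator here are assumptions of the sketch.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def aco_feature_selection(X, y, n_ants=10, n_iters=20, subset_size=5,
                          alpha=1.0, beta=1.0, rho=0.2, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    eta = mutual_info_classif(X, y, random_state=0) + 1e-9  # MI heuristic
    tau = np.ones(n)                                        # pheromone trail
    best_subset, best_score = None, -np.inf
    for _ in range(n_iters):
        for _ant in range(n_ants):
            p = (tau ** alpha) * (eta ** beta)
            subset = rng.choice(n, size=subset_size, replace=False,
                                p=p / p.sum())
            score = cross_val_score(KNeighborsClassifier(),
                                    X[:, subset], y, cv=3).mean()
            if score > best_score:
                best_subset, best_score = subset, score
        tau *= (1 - rho)                 # evaporation
        tau[best_subset] += best_score   # reinforce the best subset so far
    return np.sort(best_subset), best_score
```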

5.
In practical applications, data are often incomplete and dynamic. To address feature selection in dynamic incomplete data, an incremental feature selection method based on the tolerance rough set model and information entropy theory is proposed. First, a dynamic update scheme for the condition partition of the universe and the decision classification is established for incomplete information systems whose feature values are updated dynamically. The incremental computation mechanism of the incomplete tolerance information entropy, which serves as the criterion for evaluating feature significance, is then analyzed and embedded into the iterative computation of feature significance during the heuristic search for an optimal feature subset, leading to an incremental feature selection algorithm for incomplete data with dynamically updated feature values. Finally, experiments on standard UCI datasets verify the effectiveness and efficiency of the proposed incremental algorithm in terms of classification accuracy, decision performance, and computational cost.

6.
Protein function prediction is an important problem in functional genomics. Typically, protein sequences are represented by feature vectors. A major problem of protein datasets, one that increases the complexity of classification models, is their large number of features. Feature selection (FS) techniques are used to deal with this high-dimensional feature space. In this paper, we propose a novel feature selection algorithm that combines genetic algorithms (GA) and ant colony optimization (ACO) for faster and better search capability. The hybrid algorithm exploits the advantages of both the ACO and GA methods. The proposed algorithm is easy to implement and, because it uses a simple classifier, its computational complexity is very low. Its performance is compared to that of two prominent population-based algorithms, ACO and genetic algorithms. Experimentation is carried out using two challenging biological datasets involving the hierarchical functional classification of GPCRs and enzymes. The criteria used for comparison are maximizing predictive accuracy and finding the smallest subset of features. The results of the experiments indicate the superiority of the proposed algorithm.

7.
Pattern Recognition Letters, 2001, 22(6-7): 799-811
Feature selection is used to improve the efficiency of learning algorithms by finding an optimal subset of features. However, most feature selection techniques can handle only certain types of data. Additional limitations of existing methods include intensive computational requirements and an inability to identify redundant variables. In this paper, we present a novel information-theoretic algorithm for feature selection, which finds an optimal set of attributes by removing both irrelevant and redundant features. The algorithm has polynomial computational complexity and is applicable to datasets of a mixed nature. The method's performance is evaluated on several benchmark datasets using a standard classifier (C4.5).
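The remove-irrelevant-and-redundant principle can be illustrated with an FCBF-flavoured filter: rank features by mutual information with the class, then drop any feature that shares more information with an already-kept feature than with the class. This is a sketch of the general principle rather than the paper's exact algorithm; the quantile discretization and the redundancy threshold are assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

def discretize(col, bins=10):
    """Equal-frequency binning so MI between features is well defined."""
    edges = np.quantile(col, np.linspace(0, 1, bins + 1)[1:-1])
    return np.digitize(col, edges)

def info_filter(X, y):
    relevance = mutual_info_classif(X, y, random_state=0)
    kept = []
    for j in np.argsort(relevance)[::-1]:   # most relevant first
        if relevance[j] <= 0:
            continue                        # irrelevant to the class
        xj = discretize(X[:, j])
        redundant = any(mutual_info_score(discretize(X[:, k]), xj)
                        >= relevance[j] for k in kept)
        if not redundant:
            kept.append(j)
    return sorted(kept)
```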

8.
In classification, feature selection is an important data pre-processing technique, but it is a difficult problem due mainly to the large search space. Particle swarm optimisation (PSO) is an efficient evolutionary computation technique. However, the traditional personal-best and global-best updating mechanism in PSO limits its performance for feature selection, and the potential of PSO for feature selection has not been fully investigated. This paper proposes three new initialisation strategies and three new personal-best and global-best updating mechanisms in PSO to develop novel feature selection approaches, with the goals of maximising the classification performance, minimising the number of features, and reducing the computational time. The proposed initialisation strategies and updating mechanisms are compared with the traditional initialisation and updating mechanisms. The most promising initialisation strategy and updating mechanism are then combined to form a new approach, PSO(4-2), to address feature selection problems, and it is compared with two traditional feature selection methods and two PSO-based methods. Experiments on twenty benchmark datasets show that PSO with the new initialisation strategies and/or the new updating mechanisms can automatically evolve a feature subset with fewer features and higher classification performance than using all features. PSO(4-2) outperforms the two traditional methods and the two PSO-based algorithms in terms of computational time, the number of features, and classification performance. Its superior performance is due mainly to the proposed initialisation strategy, which exploits the advantages of both forward and backward selection to decrease the number of features and the computational time, and to the new updating mechanism, which overcomes the limitations of traditional updating mechanisms by taking the number of features into account.
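A minimal continuous-position PSO for feature selection is sketched below. The 0.6 inclusion threshold, the inertia and acceleration constants, and a fitness that trades accuracy against subset size (echoing the paper's idea of taking the number of features into account) are assumptions of this sketch, not the settings of PSO(4-2).

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X, y, w=0.9):
    """Weighted sum of accuracy and smallness of the subset."""
    if not mask.any():
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
    return w * acc + (1 - w) * (1 - mask.sum() / mask.size)

def pso_feature_selection(X, y, n_particles=10, n_iters=30, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    pos = rng.random((n_particles, n))        # positions in [0, 1]^n
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p > 0.6, X, y) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, n))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, 1)
        for i, p in enumerate(pos):
            fit = fitness(p > 0.6, X, y)
            if fit > pbest_fit[i]:            # personal-best update
                pbest[i], pbest_fit[i] = p.copy(), fit
        gbest = pbest[pbest_fit.argmax()].copy()  # global-best update
    return np.flatnonzero(gbest > 0.6)        # indices of selected features
```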

9.
This paper describes a novel feature selection algorithm for unsupervised clustering that combines the clustering ensembles method with the population-based incremental learning algorithm. The main idea of the proposed unsupervised feature selection algorithm is to search for a subset of all features such that the clustering algorithm trained on this feature subset achieves the clustering solution most similar to the one obtained by an ensemble learning algorithm. In particular, a clustering solution is first obtained by a clustering ensembles method; then the population-based incremental learning algorithm is used to find the feature subset that best fits the obtained clustering solution. One advantage of the proposed unsupervised feature selection algorithm is that it is dimensionality-unbiased. In addition, it leverages the consensus across multiple clustering solutions. Experimental results on several real datasets demonstrate that the proposed algorithm often obtains a better feature subset than existing unsupervised feature selection algorithms.
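The PBIL half of the method can be sketched directly: evolve a per-feature inclusion probability vector so that clustering on the selected features best matches a reference partition. In this sketch the reference labels are assumed to come from the clustering-ensemble step, k-means is a stand-in base clusterer, and the adjusted Rand index is an assumed similarity measure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def pbil_feature_selection(X, ref_labels, k, n_iters=40, pop=20,
                           lr=0.1, seed=0):
    """Shift a per-feature inclusion probability vector toward the
    sampled mask whose clustering best matches `ref_labels`."""
    rng = np.random.default_rng(seed)
    prob = np.full(X.shape[1], 0.5)
    for _ in range(n_iters):
        masks = rng.random((pop, X.shape[1])) < prob   # sample bitstrings
        masks[~masks.any(axis=1), 0] = True            # avoid empty masks
        scores = [adjusted_rand_score(
                      ref_labels,
                      KMeans(n_clusters=k, n_init=5, random_state=0)
                      .fit_predict(X[:, m]))
                  for m in masks]
        best = masks[int(np.argmax(scores))]
        prob = (1 - lr) * prob + lr * best             # learning step
    return np.flatnonzero(prob > 0.5)
```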

10.
In many data mining applications that address classification problems, feature and model selection are considered key tasks: the appropriate input features of the classifier are selected from a given set of possible features, and the structure parameters of the classifier are adapted with respect to these features and a given dataset. This paper describes a particle swarm optimization (PSO) algorithm that performs feature and model selection simultaneously for the probabilistic neural network (PNN) classifier applied to power system disturbances. The PNN is a successful classifier for many classification problems, but its computational effort and storage requirements grow prohibitively as the number of patterns in the training set increases. An important issue that has not been given enough attention is the selection of the "spread parameter", also called the "smoothing parameter", of the PNN classifier. PSO is a powerful meta-heuristic technique in the artificial intelligence field; this study therefore proposes a PSO-based approach, called PSO-PNN, to identify the beneficial features and the value of the spread parameter that enhance the performance of the PNN. The experimental results indicate that the proposed PSO-based approach significantly improves classification accuracy with the discriminating input features for the PNN.
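A PNN is essentially a Parzen-window classifier with one Gaussian kernel per training pattern, which makes the role of the spread (smoothing) parameter concrete; the minimal sketch below exposes it as the single scalar a PSO search such as PSO-PNN would tune, and also shows why storage grows with the training set.

```python
import numpy as np

class PNN:
    """Minimal probabilistic neural network: every training pattern
    becomes a Gaussian kernel; `spread` is the smoothing parameter."""
    def __init__(self, spread=0.5):
        self.spread = spread

    def fit(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y)
        self.classes = np.unique(self.y)
        return self                      # stores all patterns (cost grows)

    def predict(self, X):
        X = np.asarray(X, float)
        # Squared distances between each query and each stored pattern.
        d2 = ((X[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)
        kern = np.exp(-d2 / (2 * self.spread ** 2))
        # Summation layer: average kernel activation per class.
        votes = np.stack([kern[:, self.y == c].mean(1)
                          for c in self.classes], axis=1)
        return self.classes[votes.argmax(1)]
```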

11.
王俊红, 赵彬佳. 《计算机工程》, 2021, 47(11): 100-107
Imbalanced classification problems are common in fields such as medicine and economics. When classifying imbalanced datasets, and high-dimensional data in particular, an effective feature selection algorithm is crucial. Most feature selection algorithms, however, ignore synergy among features, which degrades classification performance. This paper improves the FAST feature selection algorithm and, taking feature synergy into account, proposes a new feature selection algorithm called FSBS. Features are evaluated with AUC, synergy is measured by mutual gain, effective features are selected, and the imbalanced data are then classified. Experimental results show that the algorithm selects features effectively and maintains high classification accuracy, especially when the number of selected features is small.

12.
Consider the problem of selecting a set of projects from a large number of available projects such that at least some specified levels of benefits of various types are realized at minimum cost. This problem can be formulated in terms of the well-known 0-1 multi-dimensional knapsack problem, a special case of general integer programming problems. In view of the NP-completeness of these problems, this paper proposes a polynomially bounded and efficient heuristic algorithm for its solution. The proposed algorithm proceeds as follows: an initial selection is found by prioritizing the projects according to a computed discard index; this initial selection set is then altered to reduce total costs by using project exchange operations. Computational results indicate that the proposed algorithm is quite effective in finding optimal or near-optimal solutions.
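A hedged sketch of the greedy-plus-exchange idea: repeatedly pick the project with the best useful-benefit-per-cost ratio until every benefit target is met, then prune any project made redundant. The ratio used here is a stand-in for the paper's computed discard index, and the pruning pass is a simplification of its project exchange operations.

```python
import numpy as np

def select_projects(cost, benefit, target):
    """cost: (n,) positive costs; benefit: (n, m) benefits per type;
    target: (m,) minimum required level of each benefit type."""
    n = len(cost)
    selected = np.zeros(n, dtype=bool)
    achieved = np.zeros(len(target), dtype=float)
    while (achieved < target).any():
        shortfall = np.maximum(target - achieved, 0)
        # Benefit that still counts toward unmet targets, per project.
        gain = np.minimum(benefit, shortfall).sum(axis=1)
        gain[selected] = 0.0
        if gain.max() <= 0:
            raise ValueError("benefit targets cannot be met")
        j = int(np.argmax(gain / cost))       # best value for money
        selected[j] = True
        achieved += benefit[j]
    # Exchange-style pass: drop projects whose removal keeps targets met.
    for j in np.flatnonzero(selected):
        if ((achieved - benefit[j]) >= target).all():
            selected[j] = False
            achieved -= benefit[j]
    return np.flatnonzero(selected)
```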

13.
A feature selection algorithm for big data based on a genetic algorithm is proposed. The algorithm first evaluates each feature dimension and adjusts each feature's weight according to the difference between its nearest same-class and nearest different-class neighbours; these weights guide the genetic algorithm's search, improving both its search capability and the accuracy of the selected features. The weights are then used to compute each feature's fitness, and with this fitness as the evaluation criterion the genetic algorithm is run to obtain an optimal feature subset, achieving efficient and accurate feature selection on big data. Experimental analysis shows that the algorithm effectively reduces the number of classification features while improving classification accuracy.
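The same-class/different-class nearest-neighbour weighting described above is Relief-style; a minimal sketch follows (the L1 distance, min-max scaling, and the assumption that every class has at least two samples are choices of the sketch). The resulting weights would then bias the genetic algorithm's sampling and fitness, as the abstract describes.

```python
import numpy as np

def relief_weights(X, y, n_iters=100, seed=0):
    """A feature gains weight when it separates a sample from its nearest
    different-class neighbour (miss) more than from its nearest
    same-class neighbour (hit). Assumes >= 2 samples per class."""
    rng = np.random.default_rng(seed)
    X = (X - X.min(0)) / (np.ptp(X, 0) + 1e-12)   # scale to [0, 1]
    w = np.zeros(X.shape[1])
    for i in rng.integers(0, len(X), n_iters):
        d = np.abs(X - X[i]).sum(1)
        d[i] = np.inf                              # exclude the sample itself
        hit = np.argmin(np.where(y == y[i], d, np.inf))
        miss = np.argmin(np.where(y != y[i], d, np.inf))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iters
```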

14.
Feature subset selection is basically an optimization problem: choosing the most important features from various alternatives to facilitate classification or mining tasks. Though many algorithms have been developed, none is considered best for all situations, and researchers are still trying to come up with better solutions. In this work, a flexible and user-guided feature subset selection algorithm, named FCTFS (Feature Cluster Taxonomy based Feature Selection), is proposed for selecting a suitable feature subset from a large feature set. The proposed algorithm falls under the genre of clustering-based feature selection techniques: features are initially clustered according to their intrinsic characteristics following the filter approach, and in the second step the most suitable feature is selected from each cluster to form the final subset following a wrapper approach. The two-stage hybrid process lowers the computational cost of subset selection, especially for large feature sets. One of the main novelties of the proposed approach lies in the process of determining the optimal number of feature clusters. Unlike currently available methods, which mostly employ a trial-and-error approach, the proposed method characterises and quantifies the feature clusters according to the quality of the features inside them and defines a taxonomy of the feature clusters. The selection of individual features from a feature cluster can then be done judiciously, considering both relevancy and redundancy according to the user's intention and requirements. The algorithm has been verified by simulation experiments with different benchmark datasets containing from 10 to more than 800 features and compared with other currently used feature selection algorithms. The simulation results prove the superiority of our proposal in terms of model performance, flexibility of use in practical problems, and extensibility to large feature sets. Though the current proposal is verified in the domain of unsupervised classification, it can easily be used for supervised classification.

15.
The multi-step nature of network attacks makes attack paths difficult to predict and therefore difficult to defend against effectively, while traditional solutions repair large numbers of vulnerabilities at high cost. To address these problems, this paper studies protection against network attacks and proposes a hardening-measure selection model based on an improved ant colony algorithm (Hardening Measures Selection Model based on an Improved Ant Colony Optimization, HMSMIACO). The model has three parts: on the basis of an existing attack graph, a Bayesian belief network capable of describing the causal relationships among multi-step atomic attacks is used to build a probabilistic attack graph for assessing network security risk; combining quantified measures of hardening cost and benefit, a path prediction algorithm that simulates the attacker's decision process is proposed; and, since hardening-measure selection is an NP-hard problem, an improved ant colony algorithm suited to medium-scale networks is chosen to solve it, yielding a near-optimal set of hardening measures for that environment. Finally, experiments demonstrate the feasibility and effectiveness of HMSMIACO in reducing network security risk.

16.
A large number of algorithms have been proposed for feature subset selection. Our experimental results show that the sequential forward floating selection algorithm, proposed by Pudil et al. (1994), dominates the other algorithms tested. We study the problem of choosing an optimal feature set for land-use classification based on SAR satellite images using four different texture models. Pooling features derived from different texture models, followed by feature selection, results in a substantial improvement in classification accuracy. We also illustrate the dangers of using feature selection in small-sample-size situations.

17.
Incremental feature extraction is effective for facilitating the analysis of large-scale streaming data. However, most current incremental feature extraction methods are not suitable for processing streaming data with high feature dimensions, because few methods have time complexity that is linear in both the number of samples and the number of features. In addition, feature extraction methods need to improve the performance of subsequent classification. Incremental feature extraction methods therefore need to be more efficient and effective. Partial least squares (PLS) is known to be an effective dimension reduction technique for classification, but the application of PLS to streaming data is still an open problem. In this study, we propose a highly efficient and powerful dimension reduction algorithm called incremental PLS (IPLS), which comprises a two-stage extraction process. In the first stage, the PLS target function is adapted to an incremental form by updating the historical mean in order to extract the leading projection direction. In the second stage, the remaining projection directions are calculated based on the equivalence between the PLS vectors and the Krylov sequence. We compared the performance of IPLS with state-of-the-art incremental feature extraction methods such as incremental principal component analysis, incremental maximum margin criterion, and incremental inter-class scatter on real streaming datasets. Our empirical results show that IPLS performs better than these methods in terms of efficiency and subsequent classification accuracy.
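For orientation, the batch PLS1 directions that IPLS maintains incrementally can be written as a short deflation loop. The sketch below computes them from scratch on centred data (for classification, y is assumed label-encoded), whereas IPLS updates the first direction through the running mean and recovers the remaining ones from the Krylov sequence.

```python
import numpy as np

def pls1_directions(X, y, n_components=2):
    """Batch PLS1 (NIPALS) projection directions via deflation."""
    X = X - X.mean(0)          # centring that IPLS updates online
    y = y - y.mean()
    W = []
    for _ in range(n_components):
        w = X.T @ y            # direction of maximal covariance with y
        w /= np.linalg.norm(w)
        t = X @ w              # scores along that direction
        p = X.T @ t / (t @ t)  # loading vector
        X = X - np.outer(t, p) # deflate before the next component
        W.append(w)
    return np.column_stack(W)  # (n_features, n_components)
```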

18.
Feature subset selection is a substantial problem in data classification tasks. Its purpose is to find an efficient subset of the original feature set that increases both the efficiency and the accuracy of classification while reducing its cost. Working on high-dimensional datasets with a very large number of predictive attributes but only a small number of instances requires techniques for selecting an optimal feature subset. In this paper, a hybrid method is proposed for efficient subset selection in high-dimensional datasets. The proposed algorithm runs filter and wrapper algorithms in two phases. The symmetrical uncertainty (SU) criterion is exploited in the filter phase to weight features by how well they discriminate the classes. In the wrapper phase, both FICA (fuzzy imperialist competitive algorithm) and IWSSr (Incremental Wrapper Subset Selection with replacement) are executed in the weighted feature space to find relevant attributes. The new scheme is successfully applied to 10 standard high-dimensional datasets, especially from the biosciences and medicine, where the number of features is large compared to the number of samples, inducing a severe curse of dimensionality. Comparison between our results and those of other algorithms confirms that our method achieves the highest accuracy and also finds an efficient, compact subset.
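Symmetrical uncertainty itself is compact enough to sketch: SU(X, Y) = 2·IG(X; Y) / (H(X) + H(Y)), a normalized information gain in [0, 1]. The version below assumes discrete-valued features (continuous ones would first be discretized) and obtains the joint entropy from paired outcomes.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def symmetrical_uncertainty(x, y):
    """SU in [0, 1]: 0 means independent, 1 means perfectly predictive."""
    hx, hy = entropy(x), entropy(y)
    joint = entropy([f"{a}|{b}" for a, b in zip(x, y)])
    ig = hx + hy - joint            # information gain I(X; Y)
    return 2 * ig / (hx + hy) if hx + hy else 0.0
```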

19.
张永, 李晓红, 樊斌. 《计算机工程》, 2009, 35(18): 182-184
The inequality maximum entropy model alleviates overfitting in text classification fairly successfully, but the feature selection algorithm it uses cannot fully exploit the model's strengths. To address this problem, an improved sequential forward selection algorithm is adopted to raise the recognition rate in text classification. Experimental results show that the algorithm selects more representative text features and improves the classification performance of the inequality maximum entropy model.

20.
The significance of detecting and classifying power quality (PQ) events that disturb the voltage and/or current waveforms in electrical power distribution networks is well known. Nevertheless, in spite of a large number of research reports in this area, the selection of useful features from the existing feature set and the selection of parameters for specific classifiers have thus far not been explored. The choice of the smoothing parameter for a probabilistic neural network (PNN) classifier during training, together with feature selection, significantly impacts classification accuracy. In this work, a thorough analysis is carried out using two wrapper-based optimization techniques, the genetic algorithm and simulated annealing, to identify the best ensemble of features obtained using the discrete wavelet transform together with the smoothing parameter of the PNN classifier. As a result of these analyses, a proper smoothing parameter and a more useful feature set, drawn from a wider set of features, are obtained for the PNN classifier with improved classification accuracy. Furthermore, the results show that simulated annealing performs better than the genetic algorithm for feature selection and parameter optimization in power quality data mining.
