Similar Documents
1.
Feature selection is about finding useful (relevant) features to describe an application domain. Selecting enough relevant features to effectively represent and index a given dataset is an important task for solving classification and clustering problems intelligently. The task is difficult, however, because it usually requires a very time-consuming search to obtain the desired features. This paper proposes a bit-based feature selection method that finds the smallest feature set needed to represent the indexes of a given dataset. The approach originates from bitmap indexing and rough set techniques, and it consists of two phases. In the first phase, the given dataset is transformed into a bitmap indexing matrix with some additional data information. In the second phase, a set of relevant and sufficient features is selected and used to represent the classification indexes of the dataset. The selected features can then be reviewed by domain experts to produce the final feature set. Experimental results on different datasets demonstrate the efficiency and accuracy of the proposed approach.
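To make the two-phase idea concrete, here is a minimal Python sketch under strong assumptions: a tiny categorical dataset, one-hot bitmaps as the indexing matrix, and a greedy discernibility criterion standing in for the paper's rough-set procedure. Function names and the toy data are illustrative, not taken from the paper.

```python
import numpy as np

def bitmap_index(column):
    """Phase 1: encode one categorical column as a bitmap (one-hot) matrix."""
    values = sorted(set(column))
    return np.array([[1 if v == u else 0 for u in values] for v in column])

def select_features(X_cols, y):
    """Phase 2: greedily keep features until every pair of samples with
    different labels is distinguished by at least one selected bitmap."""
    n = len(y)
    pairs = {(i, j) for i in range(n) for j in range(i + 1, n) if y[i] != y[j]}
    bitmaps = [bitmap_index(c) for c in X_cols]
    selected = []
    while pairs:
        # pick the feature whose bitmap separates the most remaining pairs
        best = max(range(len(X_cols)),
                   key=lambda f: sum(not np.array_equal(bitmaps[f][i], bitmaps[f][j])
                                     for i, j in pairs))
        separated = {(i, j) for i, j in pairs
                     if not np.array_equal(bitmaps[best][i], bitmaps[best][j])}
        if not separated:          # remaining pairs cannot be distinguished at all
            break
        selected.append(best)
        pairs -= separated
    return selected

X_cols = [["a", "a", "b", "b"], ["x", "y", "x", "y"]]
y = [0, 0, 1, 1]
print(select_features(X_cols, y))  # -> [0]: the first feature already suffices
```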

2.
Given a large set of potential features, it is usually necessary to find a small subset with which to classify. The task of finding an optimal feature set is inherently combinatorial, so suboptimal algorithms are typically used. If feature selection is based directly on classification error, then a feature-selection algorithm must base its decisions on error estimates. This paper addresses the impact of error estimation on feature selection using two performance measures: comparison of the true error of the optimal feature set with the true error of the feature set found by a feature-selection algorithm, and the number of features among the truly optimal feature set that appear in the feature set found by the algorithm. The study considers seven error estimators applied to three standard suboptimal feature-selection algorithms and exhaustive search, under three different feature-label model distributions. It draws two conclusions for the cases considered: (1) depending on the sample size and the classification rule, feature-selection algorithms can produce feature sets whose corresponding classifiers possess errors far in excess of the classifier corresponding to the optimal feature set; and (2) for small samples, differences in performance among the feature-selection algorithms are less significant than differences among the error estimators used to implement them. Moreover, keeping in mind that results depend on the particular classifier-distribution pair, for the error estimators considered in this study, bootstrap and bolstered resubstitution usually outperform cross-validation, and bolstered resubstitution usually performs as well as or better than bootstrap.
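As a rough illustration of how the choice of error estimator can steer a wrapper search, the sketch below scores the same candidate subsets with 5-fold cross-validation and with a simplified .632-style bootstrap. The data, the LDA classifier, and the estimator details are illustrative assumptions, not the study's setup.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))                       # small-sample regime
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def cv_error(cols):
    return 1 - cross_val_score(LinearDiscriminantAnalysis(),
                               X[:, cols], y, cv=5).mean()

def bootstrap632_error(cols, B=50):
    errs = []
    for _ in range(B):
        idx = rng.integers(0, len(y), len(y))      # bootstrap sample
        oob = np.setdiff1d(np.arange(len(y)), idx) # out-of-bag points
        if oob.size == 0 or len(set(y[idx])) < 2:
            continue
        clf = LinearDiscriminantAnalysis().fit(X[idx][:, cols], y[idx])
        e0 = (clf.predict(X[oob][:, cols]) != y[oob]).mean()
        resub = (clf.predict(X[idx][:, cols]) != y[idx]).mean()
        errs.append(0.368 * resub + 0.632 * e0)    # simplified .632 weighting
    return float(np.mean(errs))

# a wrapper search can rank the same subsets differently under each estimator
for cols in ([0, 1], [2, 3]):
    print(cols, round(cv_error(cols), 3), round(bootstrap632_error(cols), 3))
```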

3.
The selection and extraction of low-level features is the basis of automatic image classification. On the one hand, the selected image features should represent diverse image properties so that different image categories can be distinguished; on the other hand, to improve the computational efficiency of subsequent models, noisy and redundant features must be reduced. This paper proposes an automatic image classification method based on feature weighting. The method measures the importance of a feature to a class by the dispersion of the distribution of the image's low-level features, increasing the weights of highly relevant features and decreasing those of weakly relevant ones, thereby preventing subsequent models from being dominated by weakly relevant or irrelevant features. The proposed feature-weighting algorithm examines the importance of a feature to a specific class, so a suitable set of feature weights can be selected for each class. The weighted features are then embedded into a support vector machine for automatic image classification. Experimental results on the Corel image dataset show that the proposed feature-weighted classification method effectively improves classification accuracy.
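A minimal sketch of the weighting idea, assuming the per-class weight is derived from the ratio of a feature's overall dispersion to its within-class dispersion; the paper's exact weighting formula may differ. The weights are embedded into the SVM simply by rescaling the inputs.

```python
import numpy as np
from sklearn.svm import SVC

def class_feature_weights(X, y, target_class):
    """Weight features by overall spread divided by within-class spread:
    features that concentrate tightly inside the class get large weights."""
    within = X[y == target_class].std(axis=0) + 1e-12
    overall = X.std(axis=0) + 1e-12
    w = overall / within
    return w / w.sum()

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))
y = (X[:, 0] > 0).astype(int)
X[y == 1, 0] *= 0.3                  # feature 0 is compact inside class 1
w = class_feature_weights(X, y, target_class=1)
clf = SVC().fit(X * w, y)            # embed the weights by rescaling inputs
print(w.round(3))                    # feature 0 receives the largest weight
```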

4.
Algorithms for feature selection in predictive data mining for classification problems attempt to select features that are relevant and not redundant for the classification task. A relevant feature is defined as one that is highly correlated with the target function. One problem with this definition is that there is no universally accepted meaning of 'highly correlated with the target function' or 'highly correlated with the other features'. This paper proposes a new feature selection algorithm that incorporates domain-specific definitions of high, medium and low correlation. The proposed algorithm conducts a heuristic search for the features most relevant to the prediction task.
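A minimal sketch of correlation-banded selection, using Pearson correlation and illustrative 'high'/'medium' cut-offs; the paper leaves these domain-specific thresholds to the practitioner, so the values below are assumptions.

```python
import numpy as np

HIGH, MEDIUM = 0.7, 0.4   # domain-specific cut-offs (assumed values)

def select(X, y):
    relevance = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    # keep features highly correlated with the target ...
    candidates = [j for j, r in enumerate(relevance) if r >= HIGH]
    selected = []
    for j in sorted(candidates, key=lambda j: -relevance[j]):
        # ... unless more than 'medium' correlated with one already kept
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < MEDIUM for k in selected):
            selected.append(j)
    return selected

rng = np.random.default_rng(2)
x0 = rng.normal(size=200)
X = np.column_stack([x0, x0 + 0.1 * rng.normal(size=200), rng.normal(size=200)])
y = x0 + 0.2 * rng.normal(size=200)
print(select(X, y))   # the near-duplicate of feature 0 is rejected as redundant
```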

5.
Feature selection is a useful pre-processing technique for solving classification problems. The challenge lies in applying evolutionary algorithms capable of handling the huge number of features typically involved. Classification data may contain useless, redundant or misleading features, so the primary objective is to remove irrelevant features from the feature space and correctly identify the relevant ones, thereby increasing classification accuracy. Binary particle swarm optimization (BPSO) has been applied successfully to feature selection problems. In this paper, two kinds of chaotic maps (logistic maps and tent maps) are embedded in BPSO to determine its inertia weight. We propose chaotic binary particle swarm optimization (CBPSO) for feature selection, in which the K-nearest neighbor (K-NN) method with leave-one-out cross-validation (LOOCV) serves as the classifier for evaluating classification accuracy. The proposed method yields promisingly small feature subsets, with classification accuracy superior to other methods from the literature.
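The distinctive piece is the chaotic inertia weight, so this sketch shows only that: a logistic map and a tent map driving w in an otherwise standard BPSO velocity/position update. Parameter values (c1, c2, the tent break point 0.7) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def logistic_map(w):                 # chaotic for r = 4 on (0, 1)
    return 4.0 * w * (1.0 - w)

def tent_map(w):
    return w / 0.7 if w < 0.7 else (1.0 - w) / 0.3

def bpso_step(v, x, pbest, gbest, w, c1=2.0, c2=2.0, rng=np.random.default_rng(0)):
    r1, r2 = rng.random(v.shape), rng.random(v.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    prob = 1.0 / (1.0 + np.exp(-v))  # sigmoid transfer to bit probability
    return v, (rng.random(v.shape) < prob).astype(int)

w = 0.48                             # any chaotic seed in (0, 1)
v, x = np.zeros(5), np.zeros(5, dtype=int)
pbest, gbest = np.array([1, 0, 1, 0, 1]), np.array([1, 1, 0, 0, 1])
for _ in range(3):
    w = logistic_map(w)              # or: w = tent_map(w)
    v, x = bpso_step(v, x, pbest, gbest, w)
    print(round(w, 3), x)
```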

6.
With the development of Internet and Internet-of-Things technologies, data has become ever easier to collect. However, high-dimensional data contains many redundant and irrelevant features; using it directly inflates a model's computational cost and can even degrade its performance, so dimensionality reduction is essential. Feature selection reduces computational overhead and removes redundant features by reducing the feature dimensionality, improving the performance of machine-learning models while retaining the original features and thus good interpretability. It has become one of the important data pre-processing steps in machine learning. Rough set theory is an effective method for feature selection: it preserves the characteristics of the original features while removing redundant information. However, because evaluating all feature-subset combinations is expensive, traditional rough-set-based feature selection methods can hardly find the globally optimal feature subset. To address this problem, this paper proposes a feature selection method based on rough sets and an improved whale optimization algorithm. To keep the whale algorithm from falling into local optima, improvements based on population optimization and a perturbation strategy are introduced. The algorithm first randomly initializes a set of feature subsets, then evaluates each subset with an objective function based on the rough-set attribute dependency degree, and finally iterates the improved whale optimization algorithm to find an acceptable near-optimal feature subset. Experimental results on UCI datasets show that, with a support vector machine as the evaluation classifier, the proposed algorithm finds feature subsets with little information loss and high classification accuracy. The proposed algorithm therefore has clear advantages for feature selection.
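A minimal sketch of the rough-set attribute dependency degree that serves as the objective function, computed on a toy categorical table; the improved whale-optimization search loop itself is omitted, and the variable names are illustrative.

```python
from collections import defaultdict

def dependency(rows, labels, subset):
    """gamma(subset) = |positive region| / |U|: the fraction of objects whose
    equivalence class under the subset is pure in the decision label."""
    classes = defaultdict(list)
    for i, row in enumerate(rows):
        classes[tuple(row[j] for j in subset)].append(i)
    pos = sum(len(idx) for idx in classes.values()
              if len({labels[i] for i in idx}) == 1)
    return pos / len(rows)

rows = [("a", "x"), ("a", "y"), ("b", "x"), ("b", "y")]
labels = [0, 0, 1, 1]
print(dependency(rows, labels, [0]))   # 1.0: feature 0 alone determines the label
print(dependency(rows, labels, [1]))   # 0.0: feature 1 tells us nothing
```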

7.
Feature selection is used to choose a subset of relevant features for effective classification of data. In high-dimensional data classification, the performance of a classifier often depends on the feature subset used. In this paper, we introduce a greedy feature selection method using mutual information. The method combines feature-feature mutual information and feature-class mutual information to find an optimal subset of features that minimizes redundancy and maximizes relevance. The effectiveness of the selected subsets is evaluated using multiple classifiers on multiple datasets. In terms of both classification accuracy and execution time, the method performs significantly well on twelve real-life datasets of varied dimensionality and number of instances when compared with several competing feature selection techniques.
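A minimal sketch of greedy relevance-minus-redundancy selection with mutual information, in the spirit described; the exact combination rule of the paper may differ, and the toy data is constructed so that a duplicated feature gets skipped.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def greedy_mi_select(X, y, k):
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k and remaining:
        def score(j):
            rel = mutual_info_score(X[:, j], y)          # feature-class MI
            red = np.mean([mutual_info_score(X[:, j], X[:, s])
                           for s in selected]) if selected else 0.0
            return rel - red                              # relevance minus redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(3)
a, b = rng.integers(0, 2, 300), rng.integers(0, 2, 300)
y = a | b
X = np.column_stack([a, a, b])        # feature 1 duplicates feature 0
print(greedy_mi_select(X, y, 2))      # -> [0, 2]: the duplicate is skipped
```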

8.
Self-care problems classification is an important challenge for occupational therapists. The extent and variety of disorders make the classification process complex and time-consuming. To overcome this challenge, this research proposes an expert model based on a Probabilistic Neural Network (PNN) and a Genetic Algorithm (GA) for classifying self-care problems of children with physical and motor disabilities. In this model, the PNN is employed as the classifier and the GA is applied for feature selection. The PNN is trained on a standard ICF-CY dataset. Based on ICF-CY, occupational therapists must evaluate many features to diagnose self-care problems, and according to their experience these features contribute unequally to classification; hence the GA is employed to select the relevant and important features. Since classification rules matter to occupational therapists, self-care classification rules are additionally extracted using the CART algorithm. The experimental results show that feature selection improves both the accuracy and the time complexity of classification compared with other models. The proposed model classifies children's self-care problems with 94.28% accuracy using only 16.5% of the features.
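A minimal sketch of the GA-side mechanics: binary feature masks evolved against a cross-validated classifier score. A Gaussian naive Bayes stands in for the PNN (scikit-learn has no PNN), and the GA uses plain truncation selection with bit-flip mutation, so everything here is an illustrative assumption rather than the paper's configuration.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(120, 10))
y = (X[:, 0] - X[:, 3] > 0).astype(int)

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(GaussianNB(), X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, (20, 10))
for _ in range(15):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]          # truncation selection
    children = parents[rng.integers(0, 10, 20)].copy()
    flip = rng.random(children.shape) < 0.1          # bit-flip mutation
    children[flip] ^= 1
    pop = children
best = max(pop, key=fitness)
print(best, round(fitness(best), 3))
```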

9.
Feature subset selection is basically an optimization problem: choosing the most important features from various alternatives to facilitate classification or mining. Although many algorithms have been developed, none is considered best for all situations, and researchers are still seeking better solutions. This work proposes a flexible and user-guided feature subset selection algorithm, named FCTFS (Feature Cluster Taxonomy based Feature Selection), for selecting a suitable feature subset from a large feature set. The algorithm belongs to the genre of clustering-based feature selection: features are first clustered according to their intrinsic characteristics following the filter approach, and the most suitable feature is then selected from each cluster following a wrapper approach to form the final subset. This two-stage hybrid process lowers the computational cost of subset selection, especially for large feature sets. One of the main novelties of the approach lies in determining the optimal number of feature clusters: unlike currently available methods, which mostly rely on trial and error, the proposed method characterizes and quantifies the feature clusters according to the quality of the features inside them and defines a taxonomy of the feature clusters. Individual features can then be selected from a cluster judiciously, considering both relevancy and redundancy according to the user's intention and requirements. The algorithm has been verified by simulation experiments on benchmark datasets containing from 10 to more than 800 features and compared with other current feature selection algorithms. The results demonstrate the superiority of the proposal in terms of model performance, flexibility of use in practical problems, and extensibility to large feature sets. Although the current proposal is verified in the domain of unsupervised classification, it can easily be used for supervised classification.
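A minimal sketch of the filter-then-wrapper pattern: group features by correlation (filter), then keep the single best wrapper-scored feature per group. The taxonomy-based cluster-quality analysis and the automatic cluster-count estimation of FCTFS are not reproduced, and the 0.8 grouping threshold is an assumed value.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
base = rng.normal(size=(150, 3))
X = np.column_stack([base, base + 0.05 * rng.normal(size=(150, 3))])  # 3 duplicated groups
y = (base[:, 0] > 0).astype(int)

corr = np.abs(np.corrcoef(X.T))
groups, assigned = [], set()
for j in range(X.shape[1]):                    # greedy correlation grouping (filter)
    if j in assigned:
        continue
    g = [k for k in range(X.shape[1]) if k not in assigned and corr[j, k] > 0.8]
    groups.append(g)
    assigned.update(g)

selected = []
for g in groups:                               # wrapper: best feature per group
    acc = [cross_val_score(KNeighborsClassifier(), X[:, [j]], y, cv=3).mean()
           for j in g]
    selected.append(g[int(np.argmax(acc))])
print(sorted(selected))
```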

10.
Since classification data often contains redundant, useless or misleading features, feature selection is an important pre-processing step for solving classification problems, often tackled by applying evolutionary algorithms to reduce the number of features involved. The primary objective is to remove irrelevant features from the feature space and identify the relevant ones correctly, which can increase classification accuracy. In this paper, a novel QBGSA-K-NN hybrid system is proposed, hybridizing the quantum-inspired binary gravitational search algorithm (QBGSA) with the K-nearest neighbor (K-NN) method under leave-one-out cross-validation (LOOCV). The main aim of the system is to improve classification accuracy with an appropriate feature subset in binary problems. We evaluate the proposed hybrid system on several UCI machine learning benchmarks. The experimental results show that the proposed method selects the discriminating input features correctly and achieves high classification accuracy, comparable to or better than well-known similar classifier systems.
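A minimal sketch of the quantum-inspired binary encoding only: each bit is stored as an angle, measured probabilistically, and rotated toward the best mask found so far. The gravitational-search dynamics and the K-NN/LOOCV fitness are omitted, and the rotation step size is an assumed value.

```python
import numpy as np

rng = np.random.default_rng(11)
theta = np.full(6, np.pi / 4)                 # start: every bit is 50/50

def measure(theta):
    """Collapse each Q-bit to 0/1 with probability sin^2(theta) of being 1."""
    return (rng.random(theta.shape) < np.sin(theta) ** 2).astype(int)

def rotate(theta, mask, best, delta=0.05 * np.pi):
    """Nudge each angle so future measurements look more like `best`."""
    return theta + delta * np.where(best > mask, 1, np.where(best < mask, -1, 0))

best = np.array([1, 0, 1, 0, 0, 1])           # pretend this mask scored best
for _ in range(20):
    mask = measure(theta)
    theta = np.clip(rotate(theta, mask, best), 0.01, np.pi / 2 - 0.01)
print(measure(theta), np.round(np.sin(theta) ** 2, 2))
```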

11.
EEG signal analysis involves multi-frequency, non-stationary brain waves from multiple channels. Segmenting these signals, extracting features that capture the important properties of the signal, and classification are key steps in detecting epileptic seizures. Despite the introduction of several techniques, the task remains very challenging when multiple EEG channels are involved: a spatial filter is required to eliminate noise and extract relevant information, which adds a new dimension of complexity to the frequency feature space. Feature selection is therefore very important for stabilizing the classifier across channels. Furthermore, improving classifier performance on complex problems requires more data from EEG channels, and as this data grows it becomes difficult to identify the subject-dependent bands, so an automated identification process is required. The approach proposed in this work tackles the multiple-EEG-channel problem by segmenting the EEG signals in the frequency domain based on changing spikes rather than the traditional time-based windowing approach, while an optimization approach reduces the overall dimensionality and preserves the class-dependent features. Selecting an optimal feature subset is an optimization problem, so we propose an adaptive multi-parent crossover Genetic Algorithm (GA) for optimizing the features used in classifying epileptic seizures. The GA encodes the temporal and spatial filter estimates and optimizes the feature selection with respect to the classification error; classification is done using a Support Vector Machine (SVM). The proposed technique was evaluated on the publicly available epileptic seizure data from the UCI machine learning repository. It outperforms other approaches and achieves a high level of accuracy, indicating the ability of a multi-parent crossover GA to optimize the feature selection process in EEG classification.
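A minimal sketch of one multi-parent crossover for binary feature masks, where each child bit is drawn from one of three parents with probability proportional to parent fitness; the adaptive scheme, the spike-based segmentation, and the SVM evaluation of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(12)

def multi_parent_crossover(parents, fitnesses):
    """For each bit position, pick which parent contributes the gene."""
    w = np.asarray(fitnesses, dtype=float)
    w = w / w.sum()                                   # fitter parents contribute more
    choice = rng.choice(len(parents), size=parents.shape[1], p=w)
    return parents[choice, np.arange(parents.shape[1])]

parents = np.array([[1, 1, 0, 0, 1],
                    [0, 1, 1, 0, 0],
                    [1, 0, 1, 1, 0]])
print(multi_parent_crossover(parents, fitnesses=[0.9, 0.6, 0.3]))
```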

12.
In feature selection problems, strongly relevant features may be misjudged as redundant by the approximate Markov blanket. To avoid this, a new concept called the strong approximate Markov blanket is proposed, and it is theoretically proved that no strongly relevant feature will be misjudged as redundant under it. To reduce computation time, we further propose a modified strong approximate Markov blanket, which still outperforms the approximate Markov blanket in avoiding misjudgment of strongly relevant features. Building on these concepts, a new filter-based feature selection method applicable to high-dimensional datasets is developed: it first groups features to remove redundant ones, and then uses sequential forward selection to remove irrelevant ones. Numerical results on four benchmark and seven real datasets suggest that it is a competitive feature selection method, with high classification accuracy, a moderate number of selected features, and above-average robustness.
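For intuition, the sketch below implements the plain approximate-Markov-blanket-style redundancy test (feature i 'covers' feature j); the paper's strong and modified strong variants tighten this test, and the Pearson correlation measure here is an assumption.

```python
import numpy as np

def corr(a, b):
    return abs(np.corrcoef(a, b)[0, 1])

def covered(X, y, j, i):
    """i is at least as relevant as j, and i carries j's class information."""
    return corr(X[:, i], y) >= corr(X[:, j], y) and \
           corr(X[:, i], X[:, j]) >= corr(X[:, j], y)

rng = np.random.default_rng(6)
x0, z = rng.normal(size=300), rng.normal(size=300)
y = x0 + 0.5 * z
X = np.column_stack([x0, x0 + 0.1 * rng.normal(size=300), z])
print(covered(X, y, j=1, i=0))   # True: the near-copy of feature 0 is redundant
print(covered(X, y, j=2, i=0))   # False: z carries information feature 0 lacks
```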

13.
武妍  杨洋 《计算机应用》2006,26(2):433-435
To obtain an important feature set, a feature selection method based on a discriminant analysis algorithm and a neural network is proposed. The neural network is trained by minimizing an extended cross-entropy error function; using this error function reduces the derivative of the network's transfer function and lowers output sensitivity. The method first uses the discriminant analysis algorithm to obtain an ordered feature queue, and then selects features through a regularized neural network, based on the change in classification error on a validation dataset caused by removing an individual feature. Comparison with four other methods based on different principles shows that networks trained with this algorithm achieve higher classification accuracy.
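A minimal sketch of the pruning rule only: walk a pre-ranked feature queue and drop a feature whenever removing it does not increase the validation-set error. A small scikit-learn MLP stands in for the paper's regularized network, the extended cross-entropy error function is omitted, and the ranking order is assumed.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Xtr, Xva, ytr, yva = train_test_split(X, y, random_state=0)

def val_error(cols):
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
    clf.fit(Xtr[:, cols], ytr)
    return 1.0 - clf.score(Xva[:, cols], yva)

ranked = [5, 4, 3, 2, 1, 0]          # assumed output of the discriminant-analysis ranking
kept = list(ranked)
for f in ranked:                      # try removing least-important features first
    trial = [c for c in kept if c != f]
    if trial and val_error(trial) <= val_error(kept):
        kept = trial                  # removal did not hurt, so drop the feature
print(sorted(kept))
```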

14.
张翠军  陈贝贝  周冲  尹心歌 《计算机应用》2018,38(11):3156-3160
In classification problems, data often contains many redundant features, which not only affect classification accuracy but also slow down classification algorithms. To address this, a feature selection algorithm based on multi-objective bare-bones particle swarm optimization (BPSO) is proposed to obtain an optimal trade-off between the number of selected features and classification accuracy. To improve the efficiency of the multi-objective bare-bones particle swarm algorithm, an external archive is first used to guide the update direction of the particles, and a mutation operator then improves the particles' exploration of the search space. Finally, the multi-objective bare-bones particle swarm algorithm is applied to feature selection, using the classification performance of a K-nearest-neighbor (KNN) classifier together with the number of selected features as the evaluation criteria, in experiments on 12 datasets drawn from the UCI repository and from gene expression data. The experimental results show that the feature subsets selected by the proposed algorithm have good classification performance: the minimum classification error rate is reduced by up to 7.4%, and the execution time of the classification algorithm is shortened by up to 12 s, effectively improving both classification performance and speed.
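A minimal sketch of two distinctive pieces, under assumptions: the bare-bones update (positions sampled from a Gaussian centred between personal and global best, with no velocity term) and a Pareto archive over (classification error, subset size). The mutation operator and the real KNN evaluation are replaced by a toy objective.

```python
import numpy as np

rng = np.random.default_rng(8)

def dominates(a, b):                      # minimise both objectives
    return all(x <= y for x, y in zip(a, b)) and a != b

def bare_bones_sample(pbest, gbest):
    mu = (pbest + gbest) / 2.0
    sigma = np.abs(pbest - gbest) + 1e-9  # Gaussian sampling replaces velocity
    return (rng.normal(mu, sigma) > 0.5).astype(int)

def objectives(mask):                      # toy stand-in for (error, subset size)
    err = 1.0 - 0.4 * mask[0] - 0.4 * mask[1]
    return (round(float(err), 2), int(mask.sum()))

archive = []                               # external archive of non-dominated pairs
pbest, gbest = np.array([1, 0, 0, 0]), np.array([1, 1, 0, 1])
for _ in range(30):
    m = bare_bones_sample(pbest, gbest)
    objs = objectives(m)
    if not any(dominates(a, objs) or a == objs for a, _ in archive):
        archive = [(a, s) for a, s in archive if not dominates(objs, a)]
        archive.append((objs, m))
print(sorted(a for a, _ in archive))       # the error/size trade-off front
```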

15.
A novel feature selection algorithm is designed for high-dimensional data classification. Relevant features are selected by minimizing a least-squares loss with an ℓ2,1-norm regularization term, so that the minimum representation error between the features and the labels is approached with respect to only those features. Taking into account both the local and global structure of the data distribution through subspace learning, an efficient optimization algorithm is proposed to solve the joint objective function, selecting the most representative and noise-resistant features to enhance classification performance. Experiments conducted on benchmark datasets show that the proposed approach is more effective and robust than existing feature selection algorithms.
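A minimal sketch of the least-squares loss with ℓ2,1-norm regularization, solved by the standard iterative-reweighting scheme; the subspace-learning terms that capture local and global structure are omitted, so this is a generic instance of the objective rather than the paper's full algorithm. Row norms of W score the features.

```python
import numpy as np

def l21_feature_select(X, Y, lam=0.1, iters=50):
    """min_W ||XW - Y||_F^2 + lam * ||W||_{2,1}, solved by iteratively
    reweighted least squares; returns per-feature scores ||W_j||_2."""
    d = X.shape[1]
    D = np.eye(d)
    for _ in range(iters):
        W = np.linalg.solve(X.T @ X + lam * D, X.T @ Y)
        row_norms = np.linalg.norm(W, axis=1) + 1e-9
        D = np.diag(1.0 / (2.0 * row_norms))   # reweighting drives weak rows to zero
    return np.linalg.norm(W, axis=1)

rng = np.random.default_rng(9)
X = rng.normal(size=(100, 6))
Y = np.column_stack([X[:, 0], X[:, 1]])        # only features 0 and 1 matter
print(l21_feature_select(X, Y).round(3))       # large scores on rows 0 and 1
```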

16.
Feature selection is a challenging task that has been the subject of a large amount of research, especially in relation to classification. It eliminates redundant attributes and enhances classification accuracy by keeping only the relevant ones. In this paper, we propose a hybrid search method based on the harmony search algorithm (HSA) and stochastic local search (SLS) for feature selection in data classification. A novel probabilistic selection strategy is used in HSA-SLS to select appropriate solutions to undergo stochastic local refinement, keeping a good compromise between exploration and exploitation. In addition, HSA-SLS is combined with a support vector machine (SVM) classifier with optimized parameters, and it tries to find a subset of features that maximizes the classification accuracy of the SVM. Experimental results show good performance in favor of the proposed method.
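A minimal sketch of one harmony-search improvisation step on binary feature masks; the HMCR/PAR values are illustrative, and the probabilistic selection strategy, stochastic local search, and SVM evaluation of the paper are left out.

```python
import numpy as np

rng = np.random.default_rng(13)
HMCR, PAR = 0.9, 0.3                      # memory-consideration / pitch-adjust rates

def improvise(memory):
    """Build a new harmony (feature mask) bit by bit from the memory."""
    d = memory.shape[1]
    new = np.empty(d, dtype=int)
    for j in range(d):
        if rng.random() < HMCR:           # take the bit from a stored harmony...
            new[j] = memory[rng.integers(len(memory)), j]
            if rng.random() < PAR:        # ...and occasionally pitch-adjust (flip) it
                new[j] ^= 1
        else:
            new[j] = rng.integers(0, 2)   # otherwise choose at random
    return new

memory = np.array([[1, 0, 1, 0], [1, 1, 0, 0], [0, 0, 1, 1]])
print(improvise(memory))
```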

17.
孟军  尉双云 《计算机科学》2015,42(3):241-244, 260
To address the problem that in high-dimensional data the class label is closely associated with only a small subset of features, an ensemble learning method with randomized feature selection based on rank aggregation and cluster grouping is proposed. Rank aggregation is used to filter the features, retaining those relevant to sample classification; with the bicor correlation coefficient as the association measure, affinity propagation clustering then groups the features so that features in different groups are mutually uncorrelated. One feature is randomly selected from each group to form a feature subset, yielding multiple feature subsets that are both diverse and discriminative. Finally, base classifiers are trained in the corresponding feature subspaces and combined by majority voting. Experimental results on seven gene expression datasets show that the proposed method achieves low classification error, stable classification performance, and good scalability.
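A minimal sketch of the grouping-then-random-pick pipeline, with stand-ins: mean-rank aggregation of two simple filters replaces the paper's rank aggregation, Spearman correlation replaces bicor, and scikit-learn's AffinityPropagation does the grouping. All of these substitutions are assumptions.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(10)
base = rng.normal(size=(100, 4))
X = np.hstack([base + 0.05 * rng.normal(size=(100, 4)) for _ in range(3)])  # 12 features
y = (base[:, 0] > 0).astype(int)

# filter step: aggregate two rankings (class correlation, variance) by mean rank
rel = [-abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
r1 = np.argsort(np.argsort(rel))
r2 = np.argsort(np.argsort(-X.std(axis=0)))
keep = np.argsort(r1 + r2)[:8]                        # survivors of the filter

rho, _ = spearmanr(X[:, keep])                        # feature-feature association
groups = AffinityPropagation(affinity="precomputed",
                             random_state=0).fit_predict(np.abs(rho))
subset = [int(rng.choice(keep[groups == g])) for g in set(groups)]
print(sorted(subset))                                  # one feature drawn per group
```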

18.
Feature Selection for Gene Expression Data Based on Genetic Algorithm and Clustering
Feature selection is one of the important problems in pattern recognition, data mining and related fields. For high-dimensional data objects such as gene expression data, feature selection can both improve the accuracy and efficiency of classification and clustering and identify information-rich feature subsets, such as important genes closely related to disease. This paper proposes a new feature selection method for gene expression data: a genetic algorithm performs the stochastic search over feature subsets, while a clustering algorithm with the clustering error rate serves as the learning algorithm and evaluation criterion for each subset. Experimental results show that the algorithm can effectively find feature subsets with good separability, thereby reducing dimensionality and improving clustering and classification accuracy.
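A minimal sketch of the evaluation side only: score a candidate feature subset by how well unsupervised KMeans on those features recovers the known classes. The adjusted Rand index stands in for one minus the clustering error rate, and the GA search loop is omitted since it follows the usual binary-GA pattern.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(14)
X = rng.normal(size=(200, 8))
y = rng.integers(0, 2, 200)
X[:, 0] += 3 * y                              # only feature 0 separates the classes

def subset_score(mask):
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return 0.0
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X[:, cols])
    return adjusted_rand_score(y, labels)     # proxy for 1 - clustering error rate

print(round(subset_score(np.array([1, 0, 0, 0, 0, 0, 0, 0])), 2))  # high
print(round(subset_score(np.array([0, 1, 1, 1, 1, 1, 1, 1])), 2))  # near 0
```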

19.
In text classification, commonly used feature selection methods (such as information gain, IG) assume that features are conditionally independent in order to reduce computational complexity. This assumption introduces serious feature redundancy. To reduce the redundancy of the selected feature subset, this paper proposes a feature selection method based on the Minimal Redundancy Principle (MRP), which selects a less redundant feature subset by taking the correlation between different features into account. Experimental results show that the MRP-based method improves feature selection and increases text classification performance.
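A minimal sketch of information-gain scoring with a minimal-redundancy constraint on binary term-presence vectors; the redundancy threshold and the toy term matrix are illustrative assumptions. Note that the information gain of a term for the class is exactly the mutual information between term presence and the class label.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mrp_select(T, y, k, max_red=0.3):
    """T: n_docs x n_terms 0/1 matrix. Greedily add the highest-IG term
    whose MI with every already-selected term stays below max_red."""
    ig = [mutual_info_score(T[:, j], y) for j in range(T.shape[1])]
    selected = []
    for j in np.argsort(ig)[::-1]:
        if len(selected) == k:
            break
        if all(mutual_info_score(T[:, j], T[:, s]) < max_red for s in selected):
            selected.append(int(j))
    return selected

rng = np.random.default_rng(15)
y = rng.integers(0, 2, 400)
t0 = (y ^ (rng.random(400) < 0.1)).astype(int)          # informative term
T = np.column_stack([t0, t0, rng.integers(0, 2, 400)])  # term 1 duplicates term 0
print(mrp_select(T, y, k=2))                             # the duplicate is filtered out
```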

20.
This paper proposes a filter-based algorithm for feature selection. The filter is based on partitioning the set of features into clusters; the number of clusters, and consequently the cardinality of the subset of selected features, is automatically estimated from the data. The computational complexity of the proposed algorithm is also investigated, and a variant of the filter that considers feature-class correlations is proposed for classification problems. Empirical results involving ten datasets illustrate the performance of the algorithm, which generally obtains competitive classification accuracy compared with state-of-the-art algorithms that find clusters of features. We show that if computational efficiency is an important issue, the proposed filter may be preferred over its counterparts, making it eligible to join a pool of feature selection algorithms for use in practice. As an additional contribution of this work, a theoretical framework is used to formally analyze some properties of feature selection methods that rely on finding clusters of features.
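A minimal sketch of the 'estimate the number of feature clusters from data' step, using silhouette scores over candidate k on a feature-dissimilarity matrix; the paper's own estimation criterion differs, and the code assumes a recent scikit-learn (the `metric="precomputed"` parameter of AgglomerativeClustering).

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(16)
base = rng.normal(size=(200, 3))
X = np.hstack([base + 0.05 * rng.normal(size=(200, 3)) for _ in range(2)])  # 6 feats, 3 groups

dist = 1 - np.abs(np.corrcoef(X.T))                  # feature dissimilarity
best_k, best_s = 2, -1.0
for k in range(2, 6):                                # scan candidate cluster counts
    labels = AgglomerativeClustering(n_clusters=k, metric="precomputed",
                                     linkage="average").fit_predict(dist)
    s = silhouette_score(dist, labels, metric="precomputed")
    if s > best_s:
        best_k, best_s = k, s
print(best_k)                                         # -> 3: one per correlated group
```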
