Similar Literature
20 similar documents found.
1.
In a DNA microarray dataset, gene expression data often has a huge number of features (referred to as genes) versus a small number of samples. With the development of DNA microarray technology, the number of dimensions increases even faster than before, which can lead to the curse of dimensionality. To obtain good classification performance, it is necessary to preprocess the gene expression data. Support vector machine recursive feature elimination (SVM-RFE) is a classical method for gene selection. However, SVM-RFE suffers from high computational complexity. To remedy this, this paper enhances SVM-RFE for gene selection by incorporating feature clustering, called feature clustering SVM-RFE (FCSVM-RFE). The proposed method first performs gene selection roughly and then ranks the selected genes. First, a clustering algorithm is used to cluster genes into groups, in each of which the genes have similar expression profiles. Then, a representative gene is found for each group, yielding a representative gene set. Finally, SVM-RFE is applied to rank these representative genes. FCSVM-RFE can reduce both the computational complexity and the redundancy among genes. Experiments on seven public gene expression datasets show that FCSVM-RFE achieves better classification performance and lower computational complexity than state-of-the-art methods such as SVM-RFE.
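The cluster-then-rank pipeline described above can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: the synthetic data, the cluster count, and the choices of KMeans and a linear-kernel SVM are all assumptions.

```python
# Illustrative sketch of the FCSVM-RFE idea: cluster genes, keep one
# representative per cluster, then run SVM-RFE on the representatives only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))            # 60 samples, 200 "genes"
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # labels driven by two genes

# Step 1: cluster genes (columns) into groups with similar expression profiles.
n_clusters = 20
km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X.T)

# Step 2: the gene closest to each cluster centroid acts as its representative.
reps = []
for c in range(n_clusters):
    members = np.where(km.labels_ == c)[0]
    d = np.linalg.norm(X.T[members] - km.cluster_centers_[c], axis=1)
    reps.append(members[np.argmin(d)])
reps = np.array(sorted(reps))

# Step 3: standard SVM-RFE, but only over the small representative set.
rfe = RFE(SVC(kernel="linear"), n_features_to_select=5).fit(X[:, reps], y)
selected = reps[rfe.support_]             # final ranked/selected genes
```

Because RFE now retrains on 20 candidate columns instead of 200, each elimination round is far cheaper, which is the complexity reduction the abstract claims.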

2.
Feature selection is a crucial machine learning technique aimed at reducing the dimensionality of the input space. By discarding useless or redundant variables, it not only improves model performance but also facilitates interpretability. The well-known Support Vector Machines–Recursive Feature Elimination (SVM-RFE) algorithm provides good performance with moderate computational effort, in particular for wide datasets. When using SVM-RFE on a multiclass classification problem, the usual strategy is to decompose it into a series of binary ones and to generate an importance statistic for each feature on each binary problem. These importances are then averaged over the set of binary problems to synthesize a single value for feature ranking. In some cases, however, this procedure can lead to poor selection. In this paper we discuss six new strategies, based on list combination, designed to yield improved selections starting from the importances given by the binary problems. We evaluate them on artificial and real-world datasets, using both One-Vs-One (OVO) and One-Vs-All (OVA) strategies. Our results suggest that the OVO decomposition is most effective for feature selection on multiclass problems. We also find that in most situations the new K-First strategy can find better subsets of features than the traditional weight-average approach.
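The weight-average baseline criticized above is easy to sketch. Note that the top-k union at the end is only a generic list-combination illustration in the spirit of the paper, not its actual K-First strategy; the dataset and the importance definition (squared weights) are assumptions.

```python
# One-vs-all decomposition with per-binary-problem importances, plus two ways
# to combine them: averaging (the traditional baseline) and a simple per-list
# top-k union (illustrative list combination, not the paper's K-First).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
classes = np.unique(y)

importances = []
for c in classes:                                    # one binary problem per class
    clf = SVC(kernel="linear").fit(X, (y == c).astype(int))
    importances.append(clf.coef_.ravel() ** 2)       # squared weights as importance
imp = np.vstack(importances)                         # shape (n_classes, n_features)

avg_rank = np.argsort(-imp.mean(axis=0))             # average, then rank once
k = 2
topk_union = sorted({int(j) for row in imp for j in np.argsort(-row)[:k]})
```

Averaging can mask a feature that is decisive for a single class but unimportant elsewhere; list-combination strategies keep such per-problem winners visible.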

3.
Over the last few years, the dimensionality of datasets involved in data mining applications has increased dramatically. In this situation, feature selection becomes indispensable as it allows for dimensionality reduction and relevance detection. The research proposed in this paper broadens the scope of feature selection by taking into consideration not only the relevance of the features but also their associated costs. A new general framework is proposed, which consists of adding a new term to the evaluation function of a filter feature selection method so that the cost is taken into account. Although the proposed methodology could be applied to any feature selection filter, in this paper the approach is applied to two representative filter methods: Correlation-based Feature Selection (CFS) and Minimal-Redundancy-Maximal-Relevance (mRMR), as an example of use. The behavior of the proposed framework is tested on 17 heterogeneous classification datasets, employing a Support Vector Machine (SVM) as a classifier. The results of the experimental study show that the approach is sound and that it allows the user to reduce the cost without compromising the classification error.
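A minimal sketch of the framework's core idea, adding a cost term to a filter's evaluation function, assuming mutual information as the filter score and randomly generated per-feature costs (the paper instead modifies CFS and mRMR):

```python
# Cost-aware filter scoring: relevance minus a weighted (hypothetical) cost.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
costs = rng.uniform(0.0, 1.0, X.shape[1])    # assumed acquisition cost per feature

relevance = mutual_info_classif(X, y, random_state=0)   # stand-in filter score
lambda_ = 0.5                                # user-set trade-off weight
score = relevance - lambda_ * costs          # evaluation function with cost term

ranking = np.argsort(-score)                 # cheap-but-relevant features first
```

Setting `lambda_ = 0` recovers the plain filter; raising it pushes expensive features down the ranking, which is how the user trades cost against error.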

4.
Accurate and fast approaches for automatic ECG data classification are vital for clinical diagnosis of heart disease. To this end, we propose a novel multistage algorithm that combines various procedures for dimensionality reduction, consensus clustering of randomized samples, and fast supervised classification algorithms for processing large, high-dimensional ECG datasets. We carried out extensive experiments to study the effectiveness of the proposed multistage clustering and classification scheme using precision, recall and F-measure metrics. We evaluated the performance of numerous combinations of methods for dimensionality reduction, consensus functions and classification algorithms incorporated in our multistage scheme. The results of the experiments demonstrate that the highest precision, recall and F-measure are achieved by the combination of the rank correlation coefficient for dimensionality reduction, the HBGF consensus function and the SMO classifier with the polynomial kernel.

5.
An efficient filter feature selection (FS) method is proposed in this paper, the SVM-FuzCoC approach, achieving a satisfactory trade-off between classification accuracy and dimensionality reduction. Additionally, the method has reasonably low computational requirements, even in high-dimensional feature spaces. To assess the quality of features, we introduce a local fuzzy evaluation measure that embraces the fuzzy membership degree of every pattern in its class. Accordingly, this measure reveals the adequacy of data coverage provided by each feature. The required membership grades are determined via a novel fuzzy output kernel-based support vector machine, applied on single features. Based on a fuzzy complementary criterion (FuzCoC), the FS procedure iteratively selects the feature with the maximum additional contribution with respect to the information content provided by the previously selected features. This search strategy leads to small subsets of powerful and complementary features, alleviating the feature redundancy problem. We also devise different SVM-FuzCoC variants by employing seven other methods to derive fuzzy degrees from SVM outputs, based on probabilistic or fuzzy criteria. Our method is compared with a set of existing FS methods, in terms of performance capability, dimensionality reduction, and computational speed, via a comprehensive experimental setup, including synthetic and real-world datasets.

6.
In many pattern recognition applications, high-dimensional feature vectors impose a high computational cost as well as the risk of "overfitting". Feature selection addresses the dimensionality reduction problem by determining a subset of available features that is most essential for classification. This paper presents a novel feature selection method named filtered and supported sequential forward search (FS_SFS) in the context of support vector machines (SVM). In comparison with conventional wrapper methods that employ the SFS strategy, FS_SFS has two important properties that reduce computation time. First, it dynamically maintains a subset of samples for the training of SVM. Because not all the available samples participate in the training process, the computational cost of obtaining a single SVM classifier is decreased. Second, a new criterion that takes into consideration both the discriminative ability of individual features and the correlation between them is proposed to effectively filter out nonessential features. As a result, the total number of training runs is significantly reduced and the overfitting problem is alleviated. The proposed approach is tested on both synthetic and real data to demonstrate its effectiveness and efficiency.

7.
黄晓娟  张莉 《计算机应用》2015,35(10):2798-2802
To handle multi-class cancer classification, multi-class support vector machine recursive feature elimination (MSVM-RFE) has been proposed, but it fuses the weights of all sub-classifiers and ignores each sub-classifier's own ability to select features. To improve the recognition rate on multi-class problems, an improved multi-class support vector machine recursive feature elimination (MMSVM-RFE) method is proposed. The method uses the one-vs-all strategy to decompose the multi-class problem into several binary problems; SVM-RFE is applied to each binary problem to gradually eliminate redundant features, yielding a feature subset. The resulting subsets are then merged into a final feature subset, and an SVM classifier is built on it. Experimental results on three gene datasets show that the improved algorithm raises the overall recognition rate by about 2%, and the accuracy on individual classes improves substantially, even reaching 100%. Comparisons with random forest, the k-nearest-neighbor classifier, and PCA-based dimensionality reduction all confirm the advantage of the proposed algorithm.
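The decompose / per-problem RFE / merge procedure described above can be sketched as follows; the dataset and the per-problem subset size are illustrative assumptions, not the paper's setup.

```python
# One-vs-all decomposition, SVM-RFE per binary problem, union of the subsets,
# and a final SVM on the merged subset.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)
subsets = []
for c in np.unique(y):                          # one binary problem per class
    rfe = RFE(SVC(kernel="linear"), n_features_to_select=4)
    rfe.fit(X, (y == c).astype(int))            # RFE driven by this class only
    subsets.append(np.where(rfe.support_)[0])

merged = np.unique(np.concatenate(subsets))     # final feature subset
final_clf = SVC(kernel="linear").fit(X[:, merged], y)
```

Letting each binary problem keep its own survivors is what preserves the per-class selection ability that plain weight fusion discards.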

8.
Kernel Function in SVM-RFE-Based Hyperspectral Data Band Selection
Support vector machine recursive feature elimination (SVM-RFE) has low efficiency when applied to band selection for hyperspectral data, since it usually uses a non-linear kernel and retrains the SVM after every band deletion. Recent research shows that an SVM with a non-linear kernel does not always outperform a linear one in SVM classification. Similarly, there is some uncertainty about which kernel is better for SVM-RFE-based band selection. This paper compares the classification results of SVM-RFE using the two SVMs, then designs two optimization strategies for accelerating the band selection process: the percentage-accelerated method and the fixed-accelerated method. Through an experiment on AVIRIS hyperspectral data, this paper found: ① the classification precision of SVM decreases slightly as redundant bands increase, which means SVM classification needs feature selection in terms of classification accuracy; ② the best band collection selected by SVM-RFE with a linear SVM has higher classification accuracy and fewer effective bands than that selected with a non-linear SVM; ③ both optimization strategies improved the efficiency of feature selection, and percentage elimination performed better than fixed elimination in terms of computational efficiency and classification accuracy.
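The two acceleration strategies map naturally onto scikit-learn's RFE `step` parameter, which removes a percentage of the remaining features when given a float in (0, 1) and a fixed number when given an integer; the synthetic data below stands in for hyperspectral bands.

```python
# Percentage vs. fixed elimination in SVM-RFE via the `step` parameter.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=100, n_informative=10,
                           random_state=0)       # stand-in for 100 bands

est = SVC(kernel="linear")
pct = RFE(est, n_features_to_select=10, step=0.1).fit(X, y)   # drop 10% per round
fixed = RFE(est, n_features_to_select=10, step=5).fit(X, y)   # drop 5 bands per round
```

Percentage elimination removes many bands early (when most are redundant) and few near the end, which matches the paper's finding that it wins on both speed and accuracy.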

9.
A neural network ensemble based on rough set reducts is proposed to decrease the computational complexity of conventional ensemble feature selection algorithms. First, a dynamic reduction technique combining a genetic algorithm with resampling is adopted to obtain reducts with good generalization ability. Second, multiple BP neural networks based on different reducts are built as base classifiers. Following the idea of selective ensemble, the neural network ensemble with the best generalization ability can be found by search strategies. Finally, classification is implemented by combining the predictions of the component networks by voting. The method has been verified in experiments on remote sensing image classification and five UCI datasets. Compared with conventional ensemble feature selection algorithms, it costs less time and has lower computational complexity, and the classification accuracy is satisfactory.

10.
刘艺  曹建军  刁兴春  周星 《软件学报》2018,29(9):2559-2579
With the development of big data and the wide application of machine learning, the volume of data in every industry is growing at a large scale, and high dimensionality is an important characteristic of such data; feature selection is a preprocessing method for reducing the dimensionality of high-dimensional data. Feature selection stability is an important research topic within it: it means that a feature selection method is robust to small perturbations of the training samples. Improving feature selection stability helps discover relevant features, strengthens confidence in the selected features, and further reduces cost. After reviewing existing methods for improving feature selection stability, this paper classifies them, analyzes and compares the characteristics and applicable scope of each class, and summarizes the related evaluation work on feature selection stability. It then analyzes the performance of stability measures through experiments and compares the utility of four ensemble methods. Finally, it discusses the limitations of current work and points out future research directions.
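One common stability measure of the kind this survey evaluates is the average pairwise Jaccard similarity between the subsets selected on perturbed training sets; the sketch below assumes bootstrap resampling as the perturbation and an ANOVA-F filter, both illustrative choices.

```python
# Stability of a feature selector under bootstrap perturbations, measured as
# the mean pairwise Jaccard similarity of the selected subsets.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)

subsets = []
for _ in range(10):                             # small perturbations via bootstrap
    idx = rng.integers(0, len(y), len(y))
    sel = SelectKBest(f_classif, k=10).fit(X[idx], y[idx])
    subsets.append(frozenset(np.where(sel.get_support())[0]))

pairs = [(s, t) for i, s in enumerate(subsets) for t in subsets[i + 1:]]
stability = np.mean([len(s & t) / len(s | t) for s, t in pairs])  # in [0, 1]
```

A value near 1 means the selector returns nearly the same genes regardless of the resample, which is exactly the robustness the survey is concerned with.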

11.
DNA microarray is a very active area of research in the molecular diagnosis of cancer. Microarray data are composed of many thousands of features and from tens to hundreds of instances, which makes the analysis and diagnosis of cancer very complex. In this case, gene/feature selection becomes an elemental and essential task in data classification. In this paper, we propose a complete cancer diagnostic process through kernel-based learning and feature selection. First, support vector machines recursive feature elimination (SVM-RFE) is used to prefilter the genes. Second, SVM-RFE is enhanced by using binary dragonfly (BDF), a recently developed metaheuristic that has never been benchmarked in the context of feature selection. The objective function is the average classification accuracy rate generated by three kernel-based learning methods. We conducted a series of experiments on six microarray datasets often used in the literature. Experimental results demonstrate that this approach is efficient and provides a higher classification accuracy rate using a reduced number of genes.

12.
Support vector machines (SVMs) are one of the most popular classification tools and show the most potential to address under-sampled noisy data (a large number of features and a relatively small number of samples). However, the computational cost is too expensive, even for modern-scale samples, and the performance largely depends on the proper setting of parameters. As the data scale increases, the improvement in speed becomes increasingly challenging. As the dimension (feature number) largely increases while the sample size remains small, the avoidance of overfitting becomes a significant challenge. In this study, we propose a two-phase sequential minimal optimization (TSMO) to largely reduce the training cost for large-scale data (tested with 3186–70,000-sample datasets) and a two-phased-in differential-learning particle swarm optimization (tDPSO) to ensure the accuracy for under-sampled data (tested with 2000–24481-feature datasets). Because the purpose of training SVMs is to identify support vectors that denote a hyperplane, TSMO is developed to quickly select support vector candidates from the entire dataset and then identify support vectors from those candidates. In this manner, the computational burden is largely reduced (a 29.4%–65.3% reduction rate). The proposed tDPSO uses topology variation and differential learning to solve PSO’s premature convergence issue. Population diversity is ensured through dynamic topology until a ring connection is achieved (topology-variation phases). Further, particles initiate chemo-type simulated-annealing operations, and the global-best particle takes a two-turn diversion in response to stagnation (event-induced phases). The proposed tDPSO-embedded SVMs were tested with several under-sampled noisy cancer datasets and showed superior performance over various methods, even those methods with feature selection for the preprocessing of data.

13.
Support vector machine recursive feature elimination (SVM-RFE) is one of the most popular gene selection methods. It was designed for binary classification problems and must be extended for multi-class problems. Starting from the concept of Pareto optimality, this paper explains the limitations of common gene selection methods on multi-class problems, proposes a class-based gene selection procedure, and accordingly presents a new SVM-RFE design. Experimental results on eight cancer and tumor gene expression datasets show that the new method outperforms two other recursive feature elimination methods: searching for optimal genes separately for each class yields higher classification accuracy.

14.
Microarray data classification is a task involving high dimensionality and small sample sizes. A common criterion to decide on the number of selected genes is maximizing accuracy, which risks overfitting and usually selects more genes than actually needed. Relaxing the maximum-accuracy criterion, we propose to select the combination of attribute selection and classification algorithm that uses fewer attributes while having an accuracy not statistically significantly worse than the best. We also give advice on choosing a suitable combination of attribute selection and classification algorithms for good accuracy when using a low number of gene expressions. We used some well-known attribute selection methods (FCBF, ReliefF and SVM-RFE, plus a random selection used as a baseline technique) and classification techniques (Naive Bayes, 3-Nearest Neighbor and SVM with linear kernel) applied to 30 datasets involving different cancer types.
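The relaxed criterion can be sketched as follows: among candidate gene-set sizes, pick the smallest whose cross-validated accuracy is not significantly worse than the best. A paired t-test over folds is assumed here; the paper's exact statistical test, datasets, and candidate sizes may differ.

```python
# Smallest feature count whose CV accuracy is not significantly worse than
# the best count (paired t-test over the 10 fold scores, alpha = 0.05).
import numpy as np
from scipy.stats import ttest_rel
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
sizes = [2, 5, 10, 30]                          # candidate gene-set sizes
scores = {k: cross_val_score(
    make_pipeline(SelectKBest(f_classif, k=k), SVC(kernel="linear")),
    X, y, cv=10) for k in sizes}

best = max(sizes, key=lambda k: scores[k].mean())
chosen = best
for k in sorted(sizes):                         # try smallest sizes first
    if k == best or ttest_rel(scores[k], scores[best]).pvalue > 0.05:
        chosen = k                              # "not significantly worse"
        break
```

Compared with plain accuracy maximization, this tends to pick far fewer genes whenever the accuracy curve plateaus early, which is the abstract's point about overfitting.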

15.
The SVM-RFE feature selection algorithm has high computational complexity, and its feature selection takes too long. To shorten the feature selection time, a feature selection algorithm based on the mean between-class distance in kernel space is proposed for the RBF-kernel SVM classifier. First, the relationship between the RBF kernel parameter and the mean between-class distance of the dataset in kernel space is analyzed. Then, an algorithm is proposed that ranks feature importance by the contribution of each single feature to the dataset's mean between-class distance in kernel space. Finally, feature selection experiments were carried out on eight UCI datasets with this algorithm and with SVM-RFE. The experimental results confirm the correctness and effectiveness of the algorithm, and its feature selection time is greatly reduced compared with SVM-RFE.
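The ranking quantity described above, the distance between class means in the RBF kernel's feature space, can be computed with the kernel trick: ||μ_A − μ_B||² = mean K(A,A) + mean K(B,B) − 2·mean K(A,B). Scoring each feature alone by this distance is a hedged reconstruction on synthetic data, not the paper's exact algorithm.

```python
# Kernel-space mean between-class distance, evaluated per single feature.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0, size=(40, 6))     # class A samples, 6 features
B = rng.normal(1.5, 1.0, size=(40, 6))     # class B, shifted in every feature...
B[:, 3] = rng.normal(0.0, 1.0, 40)         # ...except feature 3 (uninformative)

def between_class_dist(a, b, gamma=0.5):
    """||mean feature map of a - mean feature map of b||^2 via the kernel trick."""
    return (rbf_kernel(a, a, gamma=gamma).mean()
            + rbf_kernel(b, b, gamma=gamma).mean()
            - 2.0 * rbf_kernel(a, b, gamma=gamma).mean())

# Rank features by the kernel-space distance each achieves on its own.
scores = np.array([between_class_dist(A[:, [j]], B[:, [j]]) for j in range(6)])
ranking = np.argsort(-scores)
```

No SVM is trained at any point, which is why this ranking is so much faster than SVM-RFE's retrain-per-elimination loop.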

16.
Selecting relevant features for support vector machine (SVM) classifiers is important for a variety of reasons such as generalization performance, computational efficiency, and feature interpretability. Traditional SVM approaches to feature selection typically extract features and learn SVM parameters independently. Independently performing these two steps might result in a loss of information related to the classification process. This paper proposes a convex energy-based framework to jointly perform feature selection and SVM parameter learning for linear and non-linear kernels. Experiments on various databases show significant reduction of features used while maintaining classification performance.

17.
Anomaly detection on high-dimensional networks carrying heavy traffic must include dimensionality reduction to relieve the pressure on transmission and storage. This paper introduces the process of network traffic anomaly detection in high-speed network environments and the ways dimensionality reduction is applied, and describes the commonly used traffic features and the latest progress in dimensionality-reduction research. For the two reduction approaches, traffic feature selection and traffic feature extraction, existing algorithms are classified, and the principles, advantages, and disadvantages of each are described. In addition, the datasets and evaluation metrics commonly used for dimensionality reduction are given, the challenges facing dimensionality-reduction research in network traffic anomaly detection are analyzed, and future directions are discussed.

18.
林筠超  万源 《计算机应用》2021,41(5):1282-1289
Unsupervised feature selection is a hot research topic in machine learning and is extremely important for the dimensionality reduction and classification of high-dimensional data. The similarity between data points can be measured by multiple different criteria, which makes it hard to keep the similarity measure consistent across data points; moreover, most existing methods obtain the similarity matrix through neighbor assignment, so the number of connected components is usually not ideal. To address these two problems, the similarity matrix is treated as a variable rather than preset, and a self-adaptive multi-metric unsupervised feature selection method based on graph structure optimization (SAM-SGO) is proposed. The method adaptively fuses different metric functions into a unified metric, thereby synthesizing multiple measures, adaptively obtaining the similarity matrix of the data, and capturing the relationships between data points more accurately. To obtain an ideal graph structure, a rank constraint is imposed on the similarity matrix, which optimizes the local graph structure while simplifying computation. In addition, the graph-based dimensionality reduction problem is merged into the proposed adaptive multi-metric problem, and a sparse l2,0 regularization constraint is introduced to obtain a sparse projection for feature selection. Experiments on several standard datasets verify the effectiveness of SAM-SGO: compared with the recently proposed feature selection and kernel learning for local learning-based clustering (LLCFS), dependence-guided unsupervised feature selection (DGUFS), and structured optimal graph feature selection (SOGFS) methods, the clustering accuracy of the proposed method improves by about 3.6 percentage points on average.

19.
Multi-label learning is a machine learning framework proposed for instances that are associated with a set of labels simultaneously, and it is one of the hot research topics in the field; dimensionality reduction is an important and challenging task within it. In contrast to supervised multi-label dimensionality reduction methods, an unsupervised multi-label dimensionality reduction method based on an autoencoder network is proposed. First, an autoencoder neural network is constructed to encode the input data and decode the output; then a sparsity constraint is introduced to compute the overall cost, and gradient descent is used for iterative optimization; finally, the autoencoder learning model is obtained through deep-learning training, and features are extracted to achieve dimensionality reduction. In the experiments, the multi-label algorithm ML-kNN is used as the classifier, and the method is compared with four other methods on six public datasets. The results show that the method can effectively extract features without using labels, reduce the dimensionality of multi-label data, and stably improve multi-label learning performance.

20.
郑建炜  李卓蓉  王万良  陈婉君 《软件学报》2019,30(12):3846-3861
In the era of information explosion, big data processing has become one of the current hot research directions worldwide. Spectral-analysis algorithms are widely used for their distinctive performance, but owing to the curse of dimensionality, handling high-dimensional data remains a great challenge for mainstream spectral methods. A clustering model that combines dimension-wise feature selection with a graph Laplacian constraint is proposed: the joint Laplacian regularization and adaptive feature learning (LRAFL) data clustering algorithm. Based on adaptive neighbors for graph Laplacian learning, it integrates low-dimensional embedding, feature selection, and subspace clustering into a single framework, replacing the traditional two-stage procedure of spectral clustering that first constructs the graph Laplacian and then performs spectral analysis. By adding a non-negative sum-to-one constraint and a low-rank constraint, LRAFL obtains a sparse feature-weight vector and a Laplacian matrix with block-diagonal structure. In addition, an effective method is proposed for optimizing the model parameters, and the convergence, complexity, and balance-parameter setting of the algorithm are analyzed theoretically. Experimental results on synthetic data and several public datasets show that LRAFL outperforms existing data clustering algorithms in effectiveness, efficiency, and ease of implementation.
