Similar Literature
1.
This paper proposes a novel feature selection method that combines a self-representation loss function, a graph regularization term, and an \({l_{2,1}}\)-norm regularization term. Unlike the traditional least-squares loss function, which focuses on minimizing the regression error between the class labels and their predictions, the proposed self-representation loss represents each feature as a linear combination of its relevant features, with the aim of effectively selecting representative features and ensuring robustness to outliers. The graph regularization term encodes two kinds of inherent information: the relationship between samples (the sample–sample relation for short) and the relationship between features (the feature–feature relation for short). The feature–feature relation reflects the similarity between two features, and the sample–sample relation reflects the similarity between two samples; both relations are preserved in the coefficient matrix. The \({l_{2,1}}\)-norm regularization term then conducts feature selection, choosing the features that satisfy the characteristics mentioned above. Furthermore, we put forward a new optimization method to solve the objective function. Finally, we feed the reduced data into a support vector machine (SVM) to conduct classification on real datasets. The experimental results show that the proposed method outperforms state-of-the-art methods such as k nearest neighbor, ridge regression, and SVM.
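As a rough illustration of the core computation, the sketch below solves a stripped-down version of the objective, the plain \({l_{2,1}}\)-regularized self-representation problem min_W ||X - XW||_F^2 + λ||W||_{2,1}, by iteratively reweighted least squares and ranks features by the row norms of W. The graph regularization terms are omitted, and the function name and parameter values are illustrative, not from the paper.

```python
import numpy as np

def l21_self_representation(X, lam=1.0, n_iter=30, eps=1e-8):
    """Solve min_W ||X - XW||_F^2 + lam * ||W||_{2,1} by iteratively
    reweighted least squares; rank features by the row norms of W."""
    d = X.shape[1]
    XtX = X.T @ X
    D = np.eye(d)                                   # reweighting matrix
    for _ in range(n_iter):
        W = np.linalg.solve(XtX + lam * D, XtX)     # closed form for fixed D
        row_norms = np.sqrt((W ** 2).sum(axis=1)) + eps
        D = np.diag(1.0 / (2.0 * row_norms))        # standard l2,1 reweighting
    return np.argsort(-row_norms)                   # most important first

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
print(l21_self_representation(X, lam=0.5)[:5])      # top-5 feature indices
```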

2.
With the rapid development of information techniques, the dimensionality of data in many application domains, such as text categorization and bioinformatics, is getting higher and higher. High-dimensional data may bring many adverse effects, such as overfitting, poor performance, and low efficiency, to traditional learning algorithms in pattern classification. Feature selection aims at reducing the dimensionality of data and providing discriminative features for pattern learning algorithms. Due to its effectiveness, feature selection is now gaining increasing attention from a variety of disciplines, and many efforts have been made in this field. In this paper, we propose a new supervised feature selection method that picks important features using information criteria. Unlike other selection methods, the main characteristic of our method is that it not only takes both maximal relevance to the class labels and minimal redundancy to the selected features into account, but also works like feature clustering in an agglomerative way. To measure the relevance and redundancy of features precisely, two different information criteria, mutual information and the coefficient of relevance, are adopted in our method. Performance evaluations on 12 benchmark data sets show that the proposed method achieves better performance than other popular feature selection methods in most cases.
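A minimal sketch of the relevance/redundancy trade-off described above, assuming mutual information for relevance and absolute Pearson correlation standing in for the paper's coefficient of relevance; the agglomerative-clustering aspect is not reproduced, and all names and parameters are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

def greedy_mi_selection(X, y, k=10):
    """Greedy max-relevance / min-redundancy selection: relevance is mutual
    information with the labels; redundancy is mean absolute correlation
    with already-selected features (a simplification of the paper's scheme)."""
    relevance = mutual_info_classif(X, y, random_state=0)
    corr = np.abs(np.corrcoef(X, rowvar=False))
    selected = [int(np.argmax(relevance))]          # start from the most relevant
    while len(selected) < k:
        candidates = [j for j in range(X.shape[1]) if j not in selected]
        scores = [relevance[j] - corr[j, selected].mean() for j in candidates]
        selected.append(candidates[int(np.argmax(scores))])
    return selected

X, y = make_classification(n_samples=200, n_features=30, random_state=0)
print(greedy_mi_selection(X, y, k=5))
```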

3.
Feature selection is crucial, particularly for processing high-dimensional data. Existing selection methods generally compute a discriminant value for a feature with respect to the class variable to indicate its classification ability. However, a scalar value can hardly reveal the multifaceted classification abilities of a feature across the different subproblems of a complicated multiclass problem. In view of this, we propose to select features based on discrimination structure complementarity. To this end, the classification abilities of a feature for different subproblems are evaluated individually, yielding a discrimination structure vector that indicates whether the feature is discriminative for each subproblem. Based on this structure, indispensable and dispensable features (ID-features for short) are defined. In the selection process, the ID-features that are complementary in discrimination structure to the already selected ones are chosen. The proposed method treats all subproblems equally and hence avoids the pitfall in multiclass classification that discriminative features for difficult subproblems tend to be crowded out by features for easy ones. Two algorithms are developed and compared with several feature selection methods on open data sets. Experimental results demonstrate the effectiveness of the proposed method.
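The following hedged sketch illustrates one plausible reading of discrimination structure complementarity: per-feature AUC on each one-vs-one subproblem is binarized into a structure vector, and features are greedily picked to cover uncovered subproblems. The AUC criterion and the 0.8 threshold are assumptions for illustration, not the paper's exact discriminant.

```python
import numpy as np
from itertools import combinations
from sklearn.datasets import load_iris
from sklearn.metrics import roc_auc_score

def structure_vectors(X, y, thresh=0.8):
    """Binary discrimination-structure vector per feature: entry p is 1 if the
    feature alone separates class pair p (AUC as the scalar discriminant)."""
    pairs = list(combinations(np.unique(y), 2))
    S = np.zeros((X.shape[1], len(pairs)), dtype=int)
    for p, (c1, c2) in enumerate(pairs):
        mask = (y == c1) | (y == c2)
        for f in range(X.shape[1]):
            auc = roc_auc_score(y[mask] == c2, X[mask, f])
            S[f, p] = int(max(auc, 1 - auc) >= thresh)
    return S, pairs

def select_complementary(S):
    """Greedily pick features whose structure vectors cover uncovered subproblems."""
    covered = np.zeros(S.shape[1], dtype=bool)
    chosen = []
    while not covered.all():
        gains = S[:, ~covered].sum(axis=1)
        if gains.max() == 0:
            break                       # remaining subproblems unsolvable alone
        f = int(np.argmax(gains))
        chosen.append(f)
        covered |= S[f].astype(bool)
    return chosen

X, y = load_iris(return_X_y=True)
S, pairs = structure_vectors(X, y)
print(select_complementary(S))
```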

4.
张要  马盈仓  朱恒东  李恒  陈程 《计算机工程》2022,48(3):90-99+106
Multi-label feature selection algorithms usually assume a certain relationship between the data and the labels and solve the multi-label feature selection problem on the basis of that relationship under regularization constraints; however, the true relationship may be a combination of two or more such relationships. To describe the data-label relationship accurately and remove irrelevant and redundant features, a multi-label feature selection algorithm named FSML is proposed, based on the logistic regression model and the label manifold structure. The loss function of the logistic regression model is used to learn the regression coefficient matrix, the label manifold structure is used to learn the feature weight matrix, and the two matrices are flexibly coupled through the L2,1-norm, which constrains their sparsity and accomplishes multi-label feature selection. Experimental results on classic multi-label datasets show that, compared with feature selection algorithms such as CMLS and SCLS, FSML performs well on five evaluation metrics (Hamming loss, ranking loss, one-error, coverage, and average precision) and describes the data-label relationship more accurately.
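A minimal sketch of the logistic-loss/L2,1 ingredient of FSML, assuming a proximal-gradient solver with row-wise soft-thresholding (the proximal operator of the L2,1-norm); the label-manifold term and the flexible coupling of the two matrices are omitted, and all names and values are illustrative.

```python
import numpy as np

def prox_l21(W, t):
    """Row-wise soft-thresholding: the proximal operator of t * ||W||_{2,1}."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))

def fsml_sketch(X, Y, lam=0.1, lr=1e-3, n_iter=500):
    """Proximal-gradient sketch of an l2,1-regularized multi-label logistic
    model (the label-manifold term of FSML is not reproduced here)."""
    n, d = X.shape
    W = np.zeros((d, Y.shape[1]))
    for _ in range(n_iter):
        P = 1.0 / (1.0 + np.exp(-X @ W))            # sigmoid prediction per label
        W = prox_l21(W - lr * (X.T @ (P - Y)) / n, lr * lam)
    return np.linalg.norm(W, axis=1)                # row norms = feature scores

rng = np.random.default_rng(1)
X = rng.standard_normal((120, 25))
Y = (X[:, :3] @ rng.standard_normal((3, 4)) > 0).astype(float)  # 4 synthetic labels
print(np.argsort(-fsml_sketch(X, Y))[:5])           # top-5 feature indices
```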

5.
Zhang Leyuan, Li Yangding, Zhang Jilian, Li Pengqing, Li Jiaye. Multimedia Tools and Applications, 2019, 78(23): 33319-33337

Non-linearity, low rank, and feature redundancy often appear in high-dimensional data and pose great difficulties for further research. Therefore, a low-rank unsupervised feature selection algorithm based on kernel functions is proposed. First, each feature is projected into a high-dimensional kernel space by the kernel function to overcome linear inseparability in the low-dimensional space. At the same time, the self-expression form is introduced into the deviation term, and the coefficient matrix is constrained to be low-rank and sparse. Finally, a sparse regularization factor on the coefficient vectors of the kernel matrix is introduced to implement feature selection. In this algorithm, the kernel matrix handles linear inseparability, the low-rank constraint captures the global information of the data, and the self-representation form determines the importance of each feature. Experiments show that, compared with other algorithms, classification after feature selection with this algorithm achieves good results.


6.
Motor imagery EEG is a multi-channel, high-dimensional signal. Feature selection can reduce the feature dimensionality and pick more discriminative features, thereby effectively improving EEG decoding performance. Existing feature selection methods fall into filter, wrapper, and embedded approaches, each with its own strengths and weaknesses. To combine the advantages of these approaches, two hybrid feature selection methods are proposed. The first uses the least absolute shrinkage and selection operator (LASSO) for feature selection and, after obtaining the LASSO model weights, applies a series of weight thresholds for a second round of screening. The second scores the features with the Fisher score and then applies a series of thresholds for a second round of screening. Fisher linear discriminant analysis (FLDA) is used to classify the feature subsets selected by the two methods. Experiments on two brain-computer interface (BCI) competition datasets and one self-collected laboratory dataset achieved highest average classification accuracies of 77.47%, 76.11%, and 71.30%, respectively. The results show that the proposed methods outperform existing feature selection methods in classification performance and also hold a considerable advantage in feature selection time.
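A minimal sketch of the first hybrid method, assuming scikit-learn on synthetic data in place of EEG features: LASSO weights are thresholded at a series of quantiles, and each resulting subset is scored by FLDA cross-validation. The thresholds and parameter values are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=60, n_informative=8,
                           random_state=0)

# Stage 1: LASSO weights as feature scores (labels treated as regression
# targets, a common practice in BCI pipelines).
weights = np.abs(Lasso(alpha=0.01).fit(X, y).coef_)

# Stage 2: sweep a series of thresholds; keep the subset FLDA classifies best.
best = (0.0, None)
for q in [0.5, 0.6, 0.7, 0.8, 0.9]:
    mask = weights >= np.quantile(weights, q)
    if mask.sum() == 0:
        continue
    acc = cross_val_score(LinearDiscriminantAnalysis(), X[:, mask], y, cv=5).mean()
    best = max(best, (acc, q), key=lambda t: t[0])
print("best accuracy %.3f at quantile %.1f" % best)
```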

7.
Regression techniques, such as ridge regression (RR) and logistic regression (LR), have been widely used in supervised learning for pattern classification. However, these methods mainly exploit the class label information when learning the linear mapping function, and they become less effective when the number of training samples per class is small. In visual classification tasks such as face recognition, the appearance of the training sample images also conveys important discriminative information. This paper proposes a novel regression-based classification model, Bayesian sample steered discriminative regression (BSDR), which simultaneously exploits the sample class labels and the sample appearance for linear mapping function learning by virtue of the Bayesian formula. BSDR learns a linear mapping for each class to extract the image class label features, and classification is simply done by a nearest neighbor classifier. The proposed BSDR method requires only a small number of mappings, is insensitive to the input feature dimensionality, and is robust to small sample sizes. Extensive experiments on several biometric databases demonstrate the promising classification performance of our method.

8.
Detecting informative features for explainable analysis of high-dimensional data, especially data with very few samples, is a significant and challenging task, and feature selection, particularly unsupervised feature selection, is the right tool for it. Therefore, two unsupervised spectral feature selection algorithms are proposed in this paper. They group features with an advanced self-tuning spectral clustering algorithm based on local standard deviation, so as to detect globally optimal feature clusters as far as possible. Two feature ranking techniques, cosine-similarity-based ranking and entropy-based ranking, are then proposed, so that the representative feature of each cluster can be detected to compose the feature subset on which the explainable classification system is built. The effectiveness of the proposed algorithms is tested on high-dimensional benchmark omics datasets against peer methods, and statistical tests are conducted to determine whether the proposed algorithms differ significantly from the peers. The extensive experiments demonstrate that the proposed unsupervised spectral feature selection algorithms outperform their peers, especially the variant based on cosine-similarity feature ranking, while the statistical tests show that the entropy-based variant performs best. The detected features exhibit strong discriminative capability in downstream classifiers for omics data, so that an AI system built on them is reliable and explainable, which is especially significant for building transparent and trustworthy medical diagnostic systems from an interpretable-AI perspective.
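A hedged sketch of the pipeline on synthetic data: plain spectral clustering stands in for the self-tuning variant, and the cluster representative is chosen by mean cosine similarity to its cluster mates. Function names and the cluster count are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import cosine_similarity

def spectral_fs(X, n_clusters=5):
    """Cluster features spectrally, then keep one representative per cluster:
    the feature with the highest mean cosine similarity to its cluster mates."""
    sim = np.abs(cosine_similarity(X.T))            # feature-feature affinity
    labels = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                                random_state=0).fit_predict(sim)
    reps = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        reps.append(idx[np.argmax(sim[np.ix_(idx, idx)].mean(axis=1))])
    return sorted(int(r) for r in reps)

X, _ = make_classification(n_samples=150, n_features=40, random_state=0)
print(spectral_fs(X))                               # representative feature indices
```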

9.
To address the low-rank structure and attribute redundancy of high-dimensional data, an unsupervised hypergraph attribute selection algorithm based on attribute self-expression is proposed. Specifically, the algorithm first uses attribute self-expression to represent each attribute sparsely by the other attributes; a low-rank assumption on this self-expression seeks a low-rank representation of the high-dimensional data. A hypergraph regularization factor then preserves the local structure of the data, and a sparse regularization factor finally performs the attribute selection. The self-expression property determines the importance of each attribute; the low-rank representation amounts to subspace learning on the global information of the data, while the hypergraph regularizer conducts subspace learning on its local structure. The algorithm thus performs subspace learning on both global and local information and is in effect an attribute selection algorithm with embedded subspace learning. Experimental results show that, compared with the baseline algorithms, it selects attributes more effectively and achieves good classification results.

10.
Feature selection is a crucial dimensionality reduction step when processing high-dimensional data. The low-rank representation model can reveal the global structure of the data and has a certain discriminative ability, while the sparse representation model can reveal the essential structure of the data through few connections. By introducing a sparsity constraint into the low-rank representation model, a low-rank sparse representation model is built to learn a low-rank sparse similarity matrix between data points, and a low-rank sparse scoring mechanism based on this matrix is proposed for unsupervised feature selection. Clustering and classification experiments with the selected features on different databases, compared against traditional feature selection algorithms, demonstrate the effectiveness of the proposed low-rank feature selection algorithm.

11.
Genetic algorithms (GAs) have been used as conventional methods for classifiers to adaptively evolve solutions to classification problems. Feature selection plays an important role in finding relevant features for classification. In this paper, feature selection is explored with modular GA-based classification. A new feature selection technique, the relative importance factor (RIF), is proposed to find less relevant features in the input domain of each class module. Removing these features aims to reduce both the classification error and the dimensionality of the classification problem. Benchmark classification data sets are used to evaluate the proposed approach. The experimental results show that RIF can be used to find less relevant features and helps achieve lower classification error with a reduced feature space dimension.

12.
Feature selection is known as a good solution to the high dimensionality of the feature space, and filter-based methods are the ones most preferred for text classification. In a common filter-based feature selection scheme, unique scores are assigned to features according to their discriminative power, the features are sorted in descending order of score, and the top N features are added to the feature set, where N is generally an empirically determined number. In this paper, an improved global feature selection scheme (IGFSS) is proposed in which this last step is modified to obtain a more representative feature set. Although a feature set constructed by a common scheme successfully represents some of the classes, other classes may not be represented at all. Consequently, IGFSS aims to improve the classification performance of global feature selection methods by creating a feature set that represents all classes almost equally. For this purpose, a local feature selection method is used within IGFSS to label features according to their discriminative power on individual classes, and these labels are used while producing the feature set. Experimental results on well-known benchmark datasets with various classifiers indicate that IGFSS improves classification performance in terms of two widely known metrics, Micro-F1 and Macro-F1.
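A hedged sketch of the IGFSS idea, assuming chi2 as the global score and one-vs-rest chi2 as the local labeling method; the quota-based fill that represents all classes almost equally is the point, and the concrete scoring functions are illustrative stand-ins.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import chi2

def igfss_sketch(X, y, n_select=12):
    """Globally score features, label each feature with the class it
    discriminates best (one-vs-rest), then fill the feature set round-robin
    so that every class is represented almost equally."""
    X = X - X.min(axis=0)                           # chi2 needs non-negative input
    global_score, _ = chi2(X, y)
    classes = np.unique(y)
    per_class = np.stack([chi2(X, (y == c).astype(int))[0] for c in classes])
    owner = classes[np.argmax(per_class, axis=0)]   # local label per feature
    ranked = np.argsort(-global_score)
    selected, quota = [], {c: 0 for c in classes}
    target = n_select // len(classes)               # equal quota per class
    for f in ranked:
        if quota[owner[f]] < target:
            selected.append(int(f))
            quota[owner[f]] += 1
    return selected

X, y = make_classification(n_samples=300, n_features=50, n_classes=3,
                           n_informative=10, random_state=0)
print(igfss_sketch(X, y))
```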

13.
Objective: Traditional stereoscopic visual comfort assessment models usually employ regression algorithms in the learning stage and require a large number of training samples with subjective test data. To address this, a stereoscopic image comfort assessment model based on a multi-kernel boosting classification algorithm is proposed. Method: First, considering that viewers compare successively observed images against each other, the assessment model is treated as a preference classifier: preference stereo image pairs (PSIP) with preference labels are constructed to form a PSIP training set. Next, several disparity statistics and neural-model response features are extracted. An AdaBoost-based multi-kernel learning algorithm is then used to model the relationship between preference labels and features, and the mapping from preference classification probability (i.e., relative comfort probability) to the final visual comfort score is analyzed. Results: On an independent stereoscopic image database, compared with representative regression algorithms, the proposed algorithm achieves a Pearson linear correlation coefficient (PLCC) above 0.84 and a Spearman rank correlation coefficient (SRCC) above 0.80, better than the other models on every metric; in cross-database tests, its PLCC and SRCC also surpass traditional support vector regression. Conclusion: Compared with traditional regression algorithms, the proposed algorithm has better evaluation performance and predicts the visual comfort of stereoscopic images more accurately.
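As a loose illustration of the preference-pair formulation only, the sketch below builds PSIP-style training pairs from synthetic image features and trains a plain AdaBoost classifier as a stand-in for the paper's AdaBoost-based multi-kernel learner; the features, comfort scores, and all names are synthetic assumptions.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
feats = rng.standard_normal((60, 8))        # stand-in disparity/neural features
comfort = feats @ rng.standard_normal(8)    # hypothetical ground-truth comfort

# Build preference stereo image pairs (PSIP): the pair feature is the
# difference of two images' feature vectors, labelled by which image is
# judged more comfortable.
i, j = np.triu_indices(60, k=1)
X_pair = feats[i] - feats[j]
y_pair = (comfort[i] > comfort[j]).astype(int)

# Plain AdaBoost stands in for the paper's AdaBoost-based multi-kernel learner.
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_pair, y_pair)

# Relative-comfort probability of one image against a reference image.
prob = clf.predict_proba((feats[0] - feats[1]).reshape(1, -1))[0, 1]
print("P(image 0 preferred over image 1) = %.2f" % prob)
```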

14.
The aim of this paper is to propose a new hybrid data mining model, based on a combination of feature selection and ensemble learning classification algorithms, to support the decision making process. The model is built in several stages. In the first stage, the initial dataset is preprocessed, with particular attention paid to feature selection: five different feature selection algorithms are applied and their results, judged by the ROC and accuracy measures of a logistic regression algorithm, are combined through different voting types. We also propose a new voting method, called if_any, which outperformed all other voting methods as well as every single feature selection algorithm. In the next stage, four classification algorithms, namely the generalized linear model (GLM), support vector machine, naive Bayes, and decision tree, are trained on the dataset obtained from the feature selection process, and these classifiers are combined into eight ensemble models using soft voting. Experimental results on a real dataset show that the hybrid model based on the features selected by if_any voting together with the GLM + DT ensemble achieves the best performance, outperforming all other ensemble and single-classifier models.
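A minimal sketch of the two voting layers, assuming two scikit-learn selectors in place of the paper's five: if_any keeps any feature chosen by at least one selector, and a soft-voting ensemble (logistic regression standing in for the GLM, plus a decision tree) classifies on the retained features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=40, random_state=0)

# if_any voting: keep every feature chosen by at least one selector
# (two selectors here stand in for the paper's five).
selectors = [SelectKBest(f_classif, k=10), SelectKBest(mutual_info_classif, k=10)]
mask = np.zeros(X.shape[1], dtype=bool)
for s in selectors:
    mask |= s.fit(X, y).get_support()

# Soft-voting ensemble of a GLM stand-in and a decision tree (GLM + DT).
ens = VotingClassifier([("glm", LogisticRegression(max_iter=1000)),
                        ("dt", DecisionTreeClassifier(random_state=0))],
                       voting="soft")
print("CV accuracy: %.3f" % cross_val_score(ens, X[:, mask], y, cv=5).mean())
```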

15.
孟军  尉双云 《计算机科学》2015,42(3):241-244, 260
Since the class labels of high-dimensional data are closely associated with only a small subset of features, an ensemble learning method with random feature selection based on rank aggregation and cluster grouping is proposed. Rank aggregation first filters the features, picking those relevant to sample classification; with the bicor correlation coefficient as the association measure, the affinity propagation clustering algorithm then groups the features so that features in different groups are mutually unrelated. Randomly selecting one feature from each group yields multiple feature subsets that are both diverse and discriminative. Base classifiers are then trained on the corresponding feature subspaces and fused by majority voting. Experimental results on seven gene expression datasets show that the proposed method attains low classification error, stable classification performance, and good scalability.
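A hedged sketch of the grouping-and-voting pipeline on synthetic data, with plain Pearson correlation standing in for the bicor coefficient; the ensemble size, classifier choice, and other parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=30, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Group features by affinity propagation on their correlation (plain Pearson
# correlation stands in for the bicor coefficient used in the paper).
sim = np.abs(np.corrcoef(Xtr, rowvar=False))
groups = AffinityPropagation(affinity="precomputed", random_state=0).fit_predict(sim)

# Each base learner trains on one randomly drawn feature per group;
# predictions are fused by majority vote.
rng = np.random.default_rng(0)
votes = []
for b in range(11):
    subset = [rng.choice(np.where(groups == g)[0]) for g in np.unique(groups)]
    clf = DecisionTreeClassifier(random_state=b).fit(Xtr[:, subset], ytr)
    votes.append(clf.predict(Xte[:, subset]))
pred = (np.mean(votes, axis=0) > 0.5).astype(int)   # majority vote (binary labels)
print("ensemble accuracy: %.3f" % (pred == yte).mean())
```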

16.
Because feature selection in an unsupervised setting cannot rely on class label information, a disagreement measure (DAM) based on fuzzy rough set theory is proposed to quantify the difference in the fuzzy equivalence classes induced by any two feature sets or individual features. On this basis, the DAMUFS unsupervised feature selection algorithm is implemented, which under unsupervised conditions selects a feature subset carrying more information while keeping the redundancy among the selected attributes as small as possible. Experiments comparing the classification performance of DAMUFS with several unsupervised and supervised feature selection algorithms on multiple datasets demonstrate its effectiveness.

17.
吴锦华  左开中  接标  丁新涛 《计算机应用》2015,35(10):2752-2756
As a common means of data preprocessing, feature selection can not only improve classification performance but also increase the interpretability of classification results. Since sparse-learning-based feature selection methods sometimes ignore useful discriminative information and thereby hurt classification performance, a new discriminative feature selection method, D-LASSO, is proposed to select features with more discriminative power. The D-LASSO model contains an L1-norm regularization term that produces a sparse solution; in addition, to induce more discriminative features, a new discriminative regularization term is added that preserves the geometric distribution of both same-class and different-class samples. Experimental results on a series of benchmark datasets show that, compared with existing methods, D-LASSO not only further improves classification accuracy but is also fairly robust to its parameters.

18.
A Feature Selection Algorithm Based on Information Gain and Genetic Algorithms
Feature selection is one of the important problems in pattern recognition and data mining. For high-dimensional data, feature selection can on the one hand improve classification accuracy and efficiency, and on the other hand identify an information-rich feature subset. This paper proposes a feature selection method that combines the filter and wrapper models: features are first grouped and filtered based on the information gain between them, and the reduced feature subset is then searched randomly with a genetic algorithm, using the classification error rate of a perceptron model as the evaluation index. Experimental results show that the algorithm can effectively find feature subsets with good linear separability, thereby achieving dimensionality reduction and improving classification accuracy.
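A compact sketch of the filter-plus-wrapper design, assuming mutual_info_classif as the information-gain estimate and perceptron cross-validation accuracy (equivalently, one minus the error rate) as the GA fitness; the population size, generation count, and mutation rate are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import Perceptron
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=40, n_informative=6,
                           random_state=0)

# Filter stage: keep the top features by information gain
# (mutual_info_classif as the information-gain estimate).
keep = np.argsort(-mutual_info_classif(X, y, random_state=0))[:15]
Xf = X[:, keep]

def fitness(mask):
    """Perceptron CV accuracy of a candidate subset (maximizing accuracy
    is equivalent to minimizing the classification error rate)."""
    if not mask.any():
        return 0.0
    return cross_val_score(Perceptron(), Xf[:, mask], y, cv=3).mean()

# Wrapper stage: a tiny GA over feature bitmasks.
rng = np.random.default_rng(0)
pop = rng.random((20, Xf.shape[1])) < 0.5
for gen in range(10):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(-scores)[:10]]         # truncation selection
    cut = rng.integers(1, Xf.shape[1])              # one-point crossover
    kids = np.concatenate(
        [np.concatenate([parents[i, :cut], parents[(i + 1) % 10, cut:]])[None]
         for i in range(10)])
    kids ^= rng.random(kids.shape) < 0.05           # bit-flip mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected original feature indices:", keep[best])
```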

19.
Feature selection is one of the most important machine learning procedures and has been successfully applied as a preprocessing step for classification and clustering methods. High-dimensional features often appear in big data, and their characteristics hinder data processing, so spectral feature selection algorithms have been receiving increasing attention from researchers. However, most feature selection methods treat the task as two separate steps: learn a similarity matrix from the original feature space (which may include redundancy across all features), and then conduct data clustering. Because of these limitations, they do not perform well on classification and clustering tasks in big data processing applications. To address this problem, we propose an unsupervised feature selection method with a graph learning framework, which reduces the influence of redundant features and simultaneously imposes a low-rank constraint on the weight matrix. More importantly, we design a new objective function to handle this problem. We evaluate our approach on six benchmark datasets, and all empirical classification results show that our new approach outperforms state-of-the-art feature selection approaches.

20.
王俊红  赵彬佳 《计算机工程》2021,47(11):100-107
Imbalanced classification problems are widespread in fields such as medicine and economics. When classifying imbalanced datasets, especially high-dimensional ones, an effective feature selection algorithm is crucial. However, most feature selection algorithms do not consider the effect of feature synergy, which degrades classification performance. This paper improves the FAST feature selection algorithm and, taking feature synergy into account, proposes a new feature selection algorithm, FSBS. It evaluates features with AUC, measures the strength of synergy by mutual gain, selects the effective features, and then classifies the imbalanced data. Experimental results show that the algorithm selects features effectively and maintains high classification accuracy even when the number of selected features is small.
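A hedged sketch of the two ingredients on synthetic imbalanced data: per-feature AUC for scoring, and a "mutual gain" proxy computed as the AUC improvement a candidate brings to the already-selected subset (via a small logistic model). This proxy and all parameters are assumptions for illustration, not the FSBS definition.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=300, n_features=20, weights=[0.9, 0.1],
                           random_state=0)          # imbalanced classes

# Per-feature AUC, folded so that 0.5 means "no discriminative power".
auc = np.array([max(a, 1 - a) for a in
                (roc_auc_score(y, X[:, f]) for f in range(X.shape[1]))])

def mutual_gain(f, selected):
    """Synergy proxy: AUC of the candidate together with the current subset,
    minus the subset's own AUC (a small logistic model scores each set)."""
    def subset_auc(cols):
        p = LogisticRegression(max_iter=1000).fit(X[:, cols], y)
        return roc_auc_score(y, p.predict_proba(X[:, cols])[:, 1])
    return subset_auc(selected + [f]) - subset_auc(selected)

selected = [int(np.argmax(auc))]                    # seed with the best single AUC
for _ in range(4):                                  # greedily add synergistic features
    rest = [f for f in range(X.shape[1]) if f not in selected]
    gains = [mutual_gain(f, selected) for f in rest]
    selected.append(rest[int(np.argmax(gains))])
print(selected)
```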
