Similar Documents
20 similar documents found (search time: 546 ms)
1.
Extracting optimal discriminant features is an important step in face recognition. For high-dimensional face image samples with small sample sizes, existing methods for extracting nonlinear discriminant features each have their own problems. This paper therefore proposes a new method for computing kernel-based Fisher nonlinear optimal discriminant features. The method first uses the between-class and within-class scatter matrices in the feature space to form the Fisher criterion and obtain the optimal nonlinear discriminant features; it then addresses the ill-posed nature of this formulation by further solving for the optimal nonlinear discriminant vectors in the null space of the within-class scatter matrix. Experiments on the ORL face database show that the nonlinear optimal discriminant features extracted by the new method clearly outperform the linear features of Fisher linear discriminant analysis (FLDA) and the nonlinear features of generalized discriminant analysis (GDA).
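The Fisher criterion and the scatter matrices it is built from can be sketched in a few lines of NumPy (a minimal two-class toy example; the data, dimensions, and class structure are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two hypothetical classes of 20 samples each in 5 dimensions
X1 = rng.normal(0.0, 1.0, size=(20, 5))
X2 = rng.normal(2.0, 1.0, size=(20, 5))

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
m = (m1 + m2) / 2

# Between-class and within-class scatter matrices of the Fisher criterion
Sb = np.outer(m1 - m, m1 - m) + np.outer(m2 - m, m2 - m)
Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)

# Optimal Fisher direction; Sw is nonsingular here (40 samples, 5 dims),
# which is exactly what fails in the small-sample, high-dimensional case
w = np.linalg.solve(Sw, m1 - m2)
w /= np.linalg.norm(w)

# The projected class means are well separated along w
sep = abs(float((X1 @ w).mean() - (X2 @ w).mean()))
```

When the sample size drops below the dimensionality, the `solve` step fails because Sw becomes singular, which is what motivates null-space formulations like the one in the abstract.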

2.
There are two fundamental problems with the Fisher linear discriminant analysis for face recognition. One is the singularity problem of the within-class scatter matrix due to small training sample size. The other is that it cannot efficiently describe complex nonlinear variations of face images because of its linear property. In this letter, a kernel scatter-difference-based discriminant analysis is proposed to overcome these two problems. We first use the nonlinear kernel trick to map the input data into an implicit feature space F. Then a scatter-difference-based discriminant rule is defined to analyze the data in F. The proposed method can not only produce nonlinear discriminant features but also avoid the singularity problem of the within-class scatter matrix. Extensive experiments show encouraging recognition performance of the new algorithm.
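The first step, mapping the input data into the implicit feature space F, never materializes the mapped points; it only needs the Gram matrix. A minimal sketch (hypothetical RBF kernel, toy data; the scatter-difference rule itself is omitted):

```python
import numpy as np

def rbf_gram(X, gamma=0.5):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 3))
K = rbf_gram(X)

# Center the implicit feature-space data: Kc = (I - 1/n) K (I - 1/n)
n = len(X)
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J
```

Any scatter-based criterion in F can then be expressed through `Kc` alone; this is the standard kernel trick rather than the authors' exact derivation.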

3.
Minimum class variance support vector machine (MCVSVM) and the large margin linear projection (LMLP) classifier, in contrast with the traditional support vector machine (SVM), take the distribution information of the data into consideration and can obtain better performance. However, when the within-class scatter matrix is singular, both MCVSVM and LMLP exploit the discriminant information in only a single subspace of the within-class scatter matrix and discard the discriminant information in the other subspace. In this paper, a twin-space support vector machine (TSSVM) algorithm is proposed for high-dimensional data classification tasks where the within-class scatter matrix is singular. TSSVM is rooted in both the non-null space and the null space of the within-class scatter matrix, takes full advantage of the discriminant information in the two subspaces, and so can achieve better classification accuracy. We first discuss the linear case of TSSVM and then develop the nonlinear TSSVM. Experimental results on real datasets validate the effectiveness of TSSVM and indicate its superior performance over MCVSVM and LMLP.

4.
This paper develops a generalized nonlinear discriminant analysis (GNDA) method and deals with its small sample size (SSS) problems. GNDA is a nonlinear extension of linear discriminant analysis (LDA), while kernel Fisher discriminant analysis (KFDA) can be regarded as a special case of GNDA. In LDA, an undersampled or small sample size problem occurs when the sample size is less than the sample dimensionality, which results in the singularity of the within-class scatter matrix. Due to the high-dimensional nonlinear mapping in GNDA, small sample size problems arise rather frequently. To tackle this issue, this research presents five different schemes for GNDA to solve the SSS problems. Experimental results on real-world data sets show that these schemes for GNDA are very effective in tackling small sample size problems.
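The SSS singularity itself is easy to demonstrate: with fewer samples than dimensions, the within-class scatter matrix cannot have full rank. A minimal sketch with hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_per_class = 50, 5          # dimensionality far exceeds sample count
X1 = rng.normal(size=(n_per_class, d))
X2 = rng.normal(size=(n_per_class, d))

Sw = ((X1 - X1.mean(0)).T @ (X1 - X1.mean(0))
      + (X2 - X2.mean(0)).T @ (X2 - X2.mean(0)))

# rank(Sw) <= n - c (here 10 - 2 = 8), so the 50x50 matrix Sw
# is singular and the classical Fisher criterion cannot be solved
rank = np.linalg.matrix_rank(Sw)
```

Each centered class contributes at most n_per_class − 1 to the rank, which is why high-dimensional nonlinear mappings make this situation the rule rather than the exception.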

5.
A modified algorithm for generalized discriminant analysis
Zheng W, Zhao L, Zou C. Neural Computation, 2004, 16(6): 1283-1297
Generalized discriminant analysis (GDA) is an extension of classical linear discriminant analysis (LDA) from the linear to the nonlinear domain via the kernel trick. However, in the previous algorithm for GDA, the solutions may suffer from the degenerate eigenvalue problem (i.e., several eigenvectors with the same eigenvalue), which makes them suboptimal in terms of discriminant ability. In this letter, we propose a modified algorithm for GDA (MGDA) to solve this problem. The MGDA method aims to remove the degeneracy of GDA and find the optimal discriminant solutions, which maximize the between-class scatter in the subspace spanned by the degenerate eigenvectors of GDA. Theoretical analysis and experimental results on the ORL face database show that the MGDA method achieves better performance than the GDA method.

6.
Linear discriminant analysis (LDA) is a dimension reduction method which finds an optimal linear transformation that maximizes the class separability. However, in undersampled problems where the number of data samples is smaller than the dimension of data space, it is difficult to apply LDA due to the singularity of scatter matrices caused by high dimensionality. In order to make LDA applicable, several generalizations of LDA have been proposed recently. In this paper, we present theoretical and algorithmic relationships among several generalized LDA algorithms and compare their computational complexities and performances in text classification and face recognition. Towards a practical dimension reduction method for high dimensional data, an efficient algorithm is proposed, which reduces the computational complexity greatly while achieving competitive prediction accuracies. We also present nonlinear extensions of these LDA algorithms based on kernel methods. It is shown that a generalized eigenvalue problem can be formulated in the kernel-based feature space, and generalized LDA algorithms are applied to solve the generalized eigenvalue problem, resulting in nonlinear discriminant analysis. Performances of these linear and nonlinear discriminant analysis algorithms are compared extensively.
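The generalized eigenvalue problem at the heart of these methods, S_b w = λ S_w w, can be sketched in its linear form as follows (hypothetical toy data; a small ridge term stands in for the more principled regularizations the paper compares):

```python
import numpy as np

rng = np.random.default_rng(3)
# Two hypothetical classes of 30 samples each in 4 dimensions
X1 = rng.normal(0.0, 1.0, size=(30, 4))
X2 = rng.normal(1.5, 1.0, size=(30, 4))

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
m = np.vstack([X1, X2]).mean(axis=0)
Sb = 30 * (np.outer(m1 - m, m1 - m) + np.outer(m2 - m, m2 - m))
Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)

# Generalized eigenvalue problem Sb w = lambda Sw w, solved via
# Sw^{-1} Sb after a small ridge makes Sw safely invertible
M = np.linalg.solve(Sw + 1e-6 * np.eye(4), Sb)
evals, evecs = np.linalg.eig(M)
w = evecs[:, np.argmax(evals.real)].real   # leading discriminant direction
```

The kernelized variants discussed in the paper solve the same kind of eigenproblem with Gram matrices in place of the scatter matrices.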

7.
Duan Xu, Lin Qing, Gao Shang. Computer Engineering, 2011, 37(10): 165-166
To solve the nonlinear small-sample feature extraction problem in traditional Fisher discriminant analysis, this paper starts from the perspective of kernel linear subspaces and constructs a matrix transformation that yields another symmetric kernel subspace of the within-class scatter matrix in the kernel space. By solving in the two kernel subspaces separately, effective discriminant information is obtained from the samples. Experimental results on the NUST603 and ORL face databases verify the effectiveness of the algorithm.

8.
To address two problems of the linear discriminant analysis (LDA) approach in face recognition, namely the nonlinearity problem and the singularity problem, this paper proposes a novel kernel machine-based rank-lifting regularized discriminant analysis (KRLRDA) method. A rank-lifting theorem is first proven using linear algebraic theory. Combining the rank-lifting strategy with a three-to-one regularization technique, the complete regularized methodology is developed on the within-class scatter matrix. The proposed regularized scheme not only adjusts the projection directions but tunes their corresponding weights as well. Moreover, it is shown that the final regularized within-class scatter matrix approaches the original one as the regularization parameter tends to zero. Two publicly available databases, the FERET and CMU PIE face databases, are selected for evaluation. Compared with some existing kernel-based LDA methods, the proposed KRLRDA approach gives superior performance.

9.
Discriminant feature extraction plays a central role in pattern recognition and classification. In this paper, we propose the tensor linear Laplacian discrimination (TLLD) algorithm for extracting discriminant features from tensor data. TLLD is an extension of linear discriminant analysis (LDA) and linear Laplacian discrimination (LLD) in directions of both nonlinear subspace learning and tensor representation. Based on the contextual distance, the weights for the within-class scatters and the between-class scatter can be determined to capture the principal structure of data clusters. This makes TLLD free from the metric of the sample space, which may not be known. Moreover, unlike LLD, the parameter tuning of TLLD is very easy. Experimental results on face recognition, texture classification and handwritten digit recognition show that TLLD is effective in extracting discriminative features.

10.
The two-class Fisher discriminant criterion, the large margin linear projection criterion, and the maximum scatter difference discriminant criterion are all two-class linear discriminant criteria applied directly to pattern classification. What they have in common is taking the direction that maximizes the separability of the projected data as the optimal projection direction; they differ in how they define that separability. Previous studies have shown that various connections exist between the large margin linear projection classifier and the support vector machine, between the large margin linear projection criterion and the maximum scatter difference criterion, and between the maximum scatter difference criterion and the two-class Fisher criterion. Building on these results, this paper attempts to further clarify the intrinsic relationships among these two-class linear discriminant criteria and to establish a unified theoretical framework that encompasses all of these separability-based two-class linear discriminant criteria.

11.
Lü Bing, Wang Shitong. Journal of Computer Applications, 2006, 26(11): 2781-2783
This paper proposes K1PMDA, a kernel-based algorithm for finding the optimal solution of multivariate discriminant analysis, and applies it to face recognition. Linear face recognition faces two prominent problems: (1) under large variations in illumination, expression, and pose, face image classification is complex and nonlinear; (2) the small sample size problem, in which the number of training samples is smaller than the dimensionality of the sample feature space, making the within-class scatter matrix singular. For the former, kernel techniques can extract nonlinear features of the face image samples; for the latter, a perturbation algorithm that adds a perturbation parameter is adopted. Experiments on the ORL, Yale Group B, and UMIST face databases show that the algorithm is feasible and efficient.

12.
A face recognition method based on bidirectional 2DMSD
A face recognition method based on bidirectional two-dimensional maximum scatter difference linear discriminant analysis (bidirectional 2DMSD) is proposed. By sequentially performing two 2DMSD operations, one in the horizontal and one in the vertical direction, the method compresses the discriminant feature information into the upper-left corner of the image, greatly reducing the dimensionality of the image features; a two-dimensional nearest neighbor classifier is then used for classification and the recognition rate is computed. Experimental results on the ORL and Yale face databases show that the method not only outperforms maximum scatter difference discriminant analysis (MSD) in recognition rate but also, at the same recognition rate as 2DMSD, requires far fewer feature dimensions, which lowers computational complexity, shortens recognition time, and improves the efficiency of face recognition.
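The bidirectional idea, applying a 2D scatter-difference projection along each image axis in turn, can be sketched as follows (hypothetical "images" and parameters; a simplified illustration, not the authors' exact algorithm):

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical "images": 8x8 matrices, two classes of 5 samples each
A1 = rng.normal(0.0, 1.0, size=(5, 8, 8))
A2 = rng.normal(1.0, 1.0, size=(5, 8, 8))

def msd_projection(stack1, stack2, C=0.1, n_dims=2):
    """Leading eigenvectors of the 2D scatter difference Gb - C * Gw."""
    m1, m2 = stack1.mean(0), stack2.mean(0)
    Gb = (m1 - m2).T @ (m1 - m2)
    Gw = (sum((a - m1).T @ (a - m1) for a in stack1)
          + sum((a - m2).T @ (a - m2) for a in stack2))
    _, vecs = np.linalg.eigh(Gb - C * Gw)
    return vecs[:, -n_dims:]             # columns = projection directions

# Horizontal pass (right multiplier), then vertical pass on transposed data
R = msd_projection(A1, A2)
L = msd_projection(A1.transpose(0, 2, 1), A2.transpose(0, 2, 1))
B1 = L.T @ A1[0] @ R                     # 8x8 image compressed to 2x2
```

Projecting on both sides is what shrinks an 8×8 image to a 2×2 feature block, rather than the 8×2 block a single-direction 2DMSD would produce.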

13.
This paper proposes KIOFD (Kernel Inverse Orthogonalized Fisher Discriminant), a kernel-based algorithm that combines an inverse Fisher discriminant criterion with orthogonalization, and applies it to face recognition. Linear face recognition faces two prominent problems: (1) under large variations in illumination, expression, and pose, face image classification is complex and nonlinear; (2) the small sample size problem, in which the number of training samples is smaller than the dimensionality of the sample feature space, making the within-class scatter matrix singular. For the first problem, kernel techniques can extract nonlinear features of the face image samples; for the second, an algorithm combining the inverse Fisher criterion with orthogonalization is adopted. Experiments on the ORL, Yale Group B, and UMIST face databases show that the proposed algorithm is feasible and efficient.

14.
Maximum scatter difference, large margin linear projection, and support vector machines
This paper first makes a necessary modification to the Fisher discriminant criterion and designs a maximum scatter difference classifier based on the new criterion. It then examines the limiting behavior of the maximum scatter difference classifier as the parameter C tends to infinity, obtaining the large margin linear projection classifier. Finally, analysis shows that the large margin linear projection classifier is in fact a special case of the linear support vector machine when the pattern samples are linearly separable. Test results on the ORL and NUST603 face databases show that the maximum scatter difference classifier and the large margin linear projection classifier are comparable to the linear support vector machine and uncorrelated linear discriminant analysis, and superior to the Foley-Sammon discriminant analysis method.
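The scatter-difference criterion underlying these classifiers, J(w) = wᵀ(S_b − C·S_w)w, is maximized by the leading eigenvector of S_b − C·S_w, which exists whether or not S_w is singular. A minimal sketch (hypothetical data and value of C):

```python
import numpy as np

rng = np.random.default_rng(4)
# Two hypothetical classes of 25 samples each in 6 dimensions
X1 = rng.normal(0.0, 1.0, size=(25, 6))
X2 = rng.normal(2.0, 1.0, size=(25, 6))

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
Sb = np.outer(m1 - m2, m1 - m2)
Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)

C = 0.01                        # hypothetical trade-off parameter
evals, evecs = np.linalg.eigh(Sb - C * Sw)
w = evecs[:, -1]                # leading eigenvector maximizes J(w)

# The criterion value; no inversion of Sw is needed anywhere,
# so a singular Sw poses no difficulty
crit = w @ (Sb - C * Sw) @ w
```

Letting C grow forces wᵀS_w w toward zero, which is the limiting large-margin-projection behavior the abstract describes.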

15.
A kernel maximum scatter difference discriminant analysis method for face recognition
An effective nonlinear subspace learning method, kernel maximum scatter difference discriminant analysis (KMSD), is proposed and applied to face recognition. KMSD first maps the samples in the input space nonlinearly into a feature space and then, via the kernel trick, solves the maximum scatter difference discriminant analysis (MSD) problem in that feature space. Experimental results on the Yale and ORL face databases show that the proposed kernel maximum scatter difference discriminant analysis method achieves high recognition rates for face recognition.

16.
Maximum scatter difference discriminant analysis and face recognition
Traditional Fisher linear discriminant analysis (LDA) inevitably encounters the small sample size problem in high-dimensional image recognition applications such as face recognition. This paper proposes a discriminant analysis method based on a scatter difference criterion. Unlike LDA, the method uses the difference between the between-class and within-class scatter of the sample patterns, rather than their ratio, as the discriminant criterion, thereby fundamentally avoiding the difficulty caused by the singularity of the within-class scatter matrix. Experimental results on the ORL and AR face databases verify the effectiveness of the algorithm.

17.
Unlike linear discriminant analysis, the large margin linear projection (LMLP) classifier presented in this paper, which is also rooted in the linear Fisher discriminant, takes full advantage of the singularity of the within-class scatter matrix and classifies projected points in one-dimensional space by itself. Theoretical analysis and experimental results both reveal that LMLP is well suited for high-dimensional small-sample pattern recognition problems such as face recognition.

18.
Dimensionality reduction is the process of mapping high-dimensional patterns to a lower-dimensional subspace. When done prior to classification, estimates obtained in the lower-dimensional subspace are more reliable. For some classifiers, there is also an improvement in performance due to the removal of the diluting effect of redundant information. A majority of the present approaches to dimensionality reduction are based on scatter matrices or other statistics of the data which do not directly correlate with classification accuracy. The optimality criterion of choice for classification purposes is the Bayes error. Usually, however, the Bayes error is difficult to express analytically. We propose an optimality criterion based on an approximation of the Bayes error and use it to formulate a linear and a nonlinear method of dimensionality reduction. The nonlinear method we propose relies on a multilayer perceptron that produces the lower-dimensional representation as its output. It thus differs from the autoassociative-like multilayer perceptrons that have been proposed and used for dimensionality reduction. Our results show that the nonlinear method is, as anticipated, superior to the linear method in that it can perform unfolding of a nonlinear manifold. In addition, the nonlinear method we propose provides a substantially better lower-dimensional representation (for classification purposes) than Fisher's linear discriminant (FLD) and two other nonlinear methods of dimensionality reduction that are often used.

19.
A nearest neighbor class discriminant analysis method is proposed, of which linear discriminant analysis is a special case. Linear discriminant analysis finds the optimal projection by maximizing the between-class scatter while minimizing the within-class scatter, where the between-class scatter is the overall average of the scatter between all pairs of classes; in nearest neighbor class discriminant analysis, the between-class scatter is instead defined as the average scatter between each class and its k nearest neighbor classes. By choosing an appropriate number of neighbor classes, the method can alleviate the overlap of some classes caused by linear discriminant dimensionality reduction. Experimental results show that the nearest neighbor class discriminant analysis method is stable and outperforms traditional linear discriminant analysis.
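The modified between-class scatter can be sketched as follows (hypothetical class means stand in for the classes; k is a free parameter, and setting k = c − 1 recovers the all-pairs average of ordinary LDA):

```python
import numpy as np

# Four hypothetical class means in 3 dimensions: two well-separated
# pairs, so each class's nearest neighbor class is its pair partner
means = np.array([[0, 0, 0], [1, 0, 0], [5, 5, 5], [6, 5, 5]], float)
k = 1                                    # number of nearest neighbor classes

c, d = means.shape
Sb = np.zeros((d, d))
for i in range(c):
    dists = np.linalg.norm(means - means[i], axis=1)
    dists[i] = np.inf                    # exclude the class itself
    for j in np.argsort(dists)[:k]:      # k nearest classes only
        diff = means[i] - means[j]
        Sb += np.outer(diff, diff)
Sb /= c * k                              # average scatter over neighbor pairs
```

Because only nearby classes contribute, directions that separate already-distant classes no longer dominate the projection, which is how the overlap of close classes is alleviated.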

20.
Maximum scatter difference (MSD) discriminant criterion is a recently presented binary discriminant criterion for pattern classification that utilizes the generalized scatter difference rather than the generalized Rayleigh quotient as a class separability measure, thereby avoiding the singularity problem when addressing small-sample-size problems. MSD classifiers based on this criterion have been quite effective on face-recognition tasks, but as they are binary classifiers, they are not as efficient on large-scale classification tasks. To address this problem, this paper generalizes the classification-oriented binary criterion to its multiple counterpart: the multiple MSD (MMSD) discriminant criterion for facial feature extraction. The MMSD feature-extraction method, which is based on this novel discriminant criterion, is a new subspace-based feature-extraction method. Unlike most other subspace-based feature-extraction methods, the MMSD computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The MMSD is theoretically elegant and easy to calculate. Extensive experimental studies conducted on the FERET benchmark database show that the MMSD outperforms state-of-the-art facial feature-extraction methods such as the null space method, direct linear discriminant analysis (LDA), eigenface, Fisherface, and complete LDA.
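The two subspaces the MMSD draws on can be extracted with an SVD, as in this minimal sketch (hypothetical small-sample data, chosen so that the null space of S_w is nontrivial; not the full MMSD procedure):

```python
import numpy as np

rng = np.random.default_rng(6)
d = 10
X1 = rng.normal(0.0, 1.0, size=(4, d))   # 4 samples per class in 10-D:
X2 = rng.normal(3.0, 1.0, size=(4, d))   # a small-sample-size setting

m1, m2 = X1.mean(0), X2.mean(0)
Sb = np.outer(m1 - m2, m1 - m2)
Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)

# Null space of Sw: right singular vectors with vanishing singular values
# (rank(Sw) = 2 * (4 - 1) = 6 here, so the null space has dimension 4)
U, s, Vt = np.linalg.svd(Sw)
null_Sw = Vt[s < 1e-8]

# Range of Sb: for two classes it is spanned by the mean difference
range_Sb = (m1 - m2) / np.linalg.norm(m1 - m2)

# Within-class scatter is exactly zero along the null-space directions,
# so any between-class separation found there comes "for free"
proj = null_Sw @ Sw @ null_Sw.T
```

Discriminant vectors taken from `null_Sw` incur no within-class spread at all, while those from the range of S_b carry the mean separation; MMSD's point is to use both rather than discard either.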
