A Feature Extraction Algorithm in Kernel Space Based on the Bhattacharyya Distance Criterion
Citation: XIA Jian-Tao, HE Ming-Yi. A Feature Extraction Algorithm in Kernel Space Based on the Bhattacharyya Distance Criterion [J]. Chinese Journal of Computers, 2004, 27(5): 683-689.
Authors: XIA Jian-Tao, HE Ming-Yi
Affiliation: Department of Electronic Engineering, Northwestern Polytechnical University, Xi'an 710072, China
Funding: Supported by the National "973" Key Basic Research and Development Program of China and the Doctoral Program Foundation of the Ministry of Education (2000069909)
Abstract: This paper proposes a novel kernel-space feature extraction algorithm that uses the Bhattacharyya distance as its criterion function. The core idea is to map the samples nonlinearly into a high-dimensional kernel space, find a set of optimal feature vectors there, and then map the samples linearly into a low-dimensional feature space in which the between-class Bhattacharyya distance is maximized, thereby minimizing the upper bound on the Bayes classification error. Using the kernel trick, the feature extraction problem is converted into a QP (Quadratic Programming) optimization problem, which guarantees the algorithm's global convergence and speed. The algorithm has two advantages: (1) the features it extracts are more effective for data classification; (2) for a given pattern classification problem, it can predict the upper bound on the number of feature vectors required without any loss of classification accuracy, and it extracts the discriminantly informative features. Experimental results agree with the theoretical analysis and show that the algorithm outperforms commonly used feature extraction algorithms.
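The abstract invokes the standard link between the Bhattacharyya distance and the Bayes error bound without stating it. For reference (a textbook fact, not reproduced from the paper): for a two-class problem with prior probabilities P_1 and P_2, the Bayes error satisfies the Bhattacharyya bound, and D_B has a closed form when the class-conditional densities are Gaussian with means \mu_i and covariances \Sigma_i:

    \varepsilon_{\mathrm{Bayes}} \;\le\; \sqrt{P_1 P_2}\, e^{-D_B},
    \qquad
    D_B \;=\; \frac{1}{8}(\mu_2 - \mu_1)^{\top}
    \left(\frac{\Sigma_1 + \Sigma_2}{2}\right)^{-1}(\mu_2 - \mu_1)
    \;+\; \frac{1}{2}\ln\frac{\left|\frac{\Sigma_1 + \Sigma_2}{2}\right|}
    {\sqrt{|\Sigma_1|\,|\Sigma_2|}}

Maximizing D_B in the projected feature space therefore tightens this upper bound, which is the rationale behind using it as the extraction criterion.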

Keywords: Bhattacharyya distance  kernel space  feature extraction  intrinsic dimension  discriminantly informative feature  QP optimization  pattern classification  BKFE algorithm  pattern recognition

Feature Extraction in Kernel Space Using Bhattacharyya Distance as Criterion Function
XIA Jian-Tao, HE Ming-Yi. Feature Extraction in Kernel Space Using Bhattacharyya Distance as Criterion Function [J]. Chinese Journal of Computers, 2004, 27(5): 683-689.
Authors: XIA Jian-Tao, HE Ming-Yi
Abstract: The authors propose a novel approach to feature extraction for classification in kernel space, called BKFE, which uses the Bhattacharyya distance, a quantity that determines the upper bound of the Bayes error, as its criterion function. The key idea of BKFE is that the data are first nonlinearly mapped into a high-dimensional kernel space; a set of discriminantly informative features is then found in kernel space to linearly map the data into a low-dimensional feature space, where the Bhattacharyya distances between classes are maximized. The authors first draw on kernel theory and nonlinear optimization techniques to develop BKFE for the binary classification problem, solving the feature extraction problem by quadratic programming, which endows BKFE with fast, global convergence. They then extend BKFE to the multi-class classification problem. After defining the kernel space feature extraction matrix (KFEM), they discuss in some theoretical depth the relationship between KFEM and the intrinsic discriminant dimension of a classification problem, and state this relationship as Theorem 1. According to Theorem 1, the rank of KFEM equals the intrinsic discriminant dimension, and its eigenvectors with non-zero eigenvalues are the discriminantly informative features. Because of its high dimension, the rank and eigenvectors of KFEM are difficult to compute directly. To overcome this difficulty, the authors construct a new low-dimensional kernel matrix with the same rank as KFEM and extract the discriminantly informative features by computing the eigenvectors of this new kernel matrix. In particular, the upper bound on the intrinsic discriminant dimension of a classification problem is M(M-1)/2, where M is the number of classes; this is important because it indicates how many features should be extracted for a given classification problem. Compared with KPCA (Kernel Principal Component Analysis), KFD (Kernel Fisher Discriminant), and FD (Fisher Discriminant), BKFE has two desirable advantages: (1) features extracted by BKFE are more effective for classification; (2) for a given pattern recognition problem, it predicts the upper bound on the number of features necessary to achieve the same classification accuracy as in the original space. Experimental results show that BKFE provides more informative features for pattern classification than the other methods.
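As a concrete illustration of the kernel-space pipeline the abstract describes, the sketch below computes the Gaussian-case Bhattacharyya distance between two classes before and after a kernel-space projection. It is a minimal stand-in under stated assumptions, not the authors' BKFE: the projection step uses kernel PCA (one of the baselines the paper compares against) rather than the QP-optimized features, and all function and variable names (bhattacharyya_distance, kernel_features, etc.) are illustrative.

    import numpy as np

    def bhattacharyya_distance(X1, X2):
        """Closed-form Bhattacharyya distance, assuming Gaussian classes."""
        m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
        S1, S2 = np.cov(X1, rowvar=False), np.cov(X2, rowvar=False)
        S = 0.5 * (S1 + S2)
        d = m2 - m1
        term1 = d @ np.linalg.solve(S, d) / 8.0            # mean-separation term
        term2 = 0.5 * (np.linalg.slogdet(S)[1]             # covariance-mismatch term
                       - 0.5 * (np.linalg.slogdet(S1)[1]
                                + np.linalg.slogdet(S2)[1]))
        return term1 + term2

    def rbf_kernel(A, B, gamma=1.0):
        """Gaussian (RBF) kernel matrix between row-sample matrices A and B."""
        sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
        return np.exp(-gamma * sq)

    def kernel_features(X, n_features=2, gamma=1.0):
        """Kernel PCA projection: top eigenvectors of the centered kernel matrix."""
        n = X.shape[0]
        K = rbf_kernel(X, X, gamma)
        J = np.eye(n) - np.ones((n, n)) / n                # centering matrix
        Kc = J @ K @ J
        w, V = np.linalg.eigh(Kc)                          # ascending eigenvalues
        idx = np.argsort(w)[::-1][:n_features]
        alpha = V[:, idx] / np.sqrt(np.maximum(w[idx], 1e-12))
        return Kc @ alpha                                  # projected training samples

    # Toy two-class data that is not linearly separable in the input space:
    # a noisy circle around a compact central blob.
    rng = np.random.default_rng(0)
    t = rng.uniform(0, 2 * np.pi, 100)
    X1 = np.c_[np.cos(t), np.sin(t)] + 0.1 * rng.standard_normal((100, 2))
    X2 = 0.2 * rng.standard_normal((100, 2))
    Z = kernel_features(np.vstack([X1, X2]), n_features=2, gamma=2.0)
    print("D_B in input space  :", bhattacharyya_distance(X1, X2))
    print("D_B after kernel map:", bhattacharyya_distance(Z[:100], Z[100:]))

On data like these concentric toy classes, the kernel mapping typically increases the between-class Bhattacharyya distance, which by the bound quoted earlier lowers the attainable Bayes error; BKFE differs from this sketch in that it maximizes the distance directly via quadratic programming rather than maximizing retained variance.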
Keywords: feature extraction  Bhattacharyya distance  kernel space  pattern classification  quadratic programming  discriminantly informative feature  intrinsic discriminant dimension
This article is indexed by CNKI, VIP (Weipu), Wanfang Data, and other databases.