Similar Literature
20 similar documents retrieved.
1.
This paper studies an Isomap-based nonlinear dimensionality reduction method, applying it to facial animation parameter features extracted from facial expression sequences, and analyzes the relationship between the reduced manifold feature space and the emotion space of cognitive psychology. Experimental results show that the emotion manifold features obtained by Isomap can express changes in emotion intensity, and describe emotion intensity more reasonably and smoothly than PCA-reduced features. Emotion recognition experiments also show that the recognition rate with Isomap manifold features is higher than with the original emotion features or PCA-reduced features, and the recognition results across the various emotions are more balanced.
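The Isomap pipeline this abstract relies on (k-nearest-neighbour graph, geodesic distances via shortest paths, classical MDS) can be sketched in plain NumPy. This is a generic illustration, not the paper's implementation; the toy curve data, `n_neighbors`, and all function names are assumptions:

```python
import numpy as np

def isomap(X, n_neighbors=6, n_components=2):
    """Minimal Isomap: kNN graph -> geodesic distances -> classical MDS."""
    n = X.shape[0]
    # Pairwise Euclidean distances.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Keep only each point's k nearest neighbours (symmetrized).
    g = np.full((n, n), np.inf)
    np.fill_diagonal(g, 0.0)
    for i in range(n):
        nbrs = np.argsort(d[i])[1:n_neighbors + 1]
        g[i, nbrs] = d[i, nbrs]
        g[nbrs, i] = d[i, nbrs]
    # Floyd-Warshall shortest paths approximate geodesic distances.
    for k in range(n):
        g = np.minimum(g, g[:, k:k + 1] + g[k:k + 1, :])
    # Classical MDS on the squared geodesic distance matrix.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (g ** 2) @ J
    w, v = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Toy data: points along a helix in 3-D; a 1-D embedding should
# recover the curve parameter up to sign and scale.
t = np.linspace(0.0, 3.0, 40)
X = np.stack([np.cos(t), np.sin(t), t], axis=1)
Y = isomap(X, n_neighbors=4, n_components=1)
```

On this toy curve the recovered coordinate correlates strongly with the underlying parameter `t`, which is the sense in which Isomap "unrolls" a manifold.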

2.
Many image recognition algorithms based on data learning perform dimensionality reduction before the actual learning and classification, because the high dimensionality of raw imagery would require enormous training sets to achieve satisfactory performance. A potential problem with this approach is that most dimensionality reduction techniques, such as principal component analysis (PCA), seek to concentrate the data variation into a small number of components without considering interclass discriminability. This paper presents a neural-network-based transformation that simultaneously provides dimensionality reduction and a high degree of discriminability by combining the learning mechanism of a neural-network-based PCA with a backpropagation learning algorithm. The joint discrimination-compression algorithm is applied to infrared imagery to detect military vehicles.

3.
Many problems in information processing involve some form of dimensionality reduction, such as face recognition, image/text retrieval, and data visualization. Typical linear dimensionality reduction algorithms include principal component analysis (PCA), random projection, and locality-preserving projection (LPP). These techniques are generally unsupervised, which allows them to model data in the absence of labels or categories. In this paper, we propose a semi-supervised subspace learning algorithm for image retrieval. In a relevance feedback-driven image retrieval system, the user-provided information can be used to better describe the intrinsic semantic relationships between images. Our algorithm is fundamentally based on LPP and can incorporate the user's relevance feedback. As the user's feedback accumulates, we ultimately obtain a semantic subspace in which the different semantic classes are best separated and retrieval performance is enhanced. We compared our proposed algorithm to PCA and the standard LPP. Experimental results on a large collection of images show the effectiveness and efficiency of our proposed algorithm.
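The standard (unsupervised) LPP that the proposed algorithm builds on can be sketched as follows: build a heat-kernel affinity graph, form the graph Laplacian, and solve a generalized eigenproblem for the projection. This is a generic NumPy illustration, not the authors' semi-supervised variant; the heat-kernel parameter `t`, the regularizer `reg`, and the random test data are assumptions:

```python
import numpy as np

def lpp(X, n_neighbors=5, dim=2, t=1.0, reg=1e-6):
    """Minimal Locality-Preserving Projection (rows of X are samples)."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # Heat-kernel affinities restricted to each point's k nearest neighbours.
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:n_neighbors + 1]
        W[i, nbrs] = np.exp(-d2[i, nbrs] / t)
    W = np.maximum(W, W.T)           # symmetrize
    D = np.diag(W.sum(1))
    L = D - W                        # graph Laplacian
    A = X.T @ D @ X + reg * np.eye(X.shape[1])
    B = X.T @ L @ X
    # Solve the generalized eigenproblem B a = lam * A a via whitening.
    evals, evecs = np.linalg.eigh(A)
    A_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    w, v = np.linalg.eigh(A_inv_sqrt @ B @ A_inv_sqrt)
    P = A_inv_sqrt @ v[:, :dim]      # smallest eigenvalues preserve locality
    return X @ P

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
Y = lpp(X, dim=2)
```

The relevance-feedback extension described in the abstract would modify the affinity matrix `W` using user-labelled relevant/irrelevant pairs; that part is not reproduced here.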

4.
PCA is used for dimensionality reduction to extract facial expression features, combined with a distance-based hashed K-nearest-neighbor classification algorithm for facial expression recognition. First, faces are detected with Haar-like features and the AdaBoost algorithm, and the face images are preprocessed; next, PCA extracts the facial expression features, which are added to a hash table; finally, a K-nearest-neighbor classifier recognizes the expressions. Restructuring the feature library as a hash table greatly improves recognition efficiency.
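A hedged sketch of the last two stages described above: PCA feature extraction followed by a hash-bucketed nearest-neighbour search that only scans candidates from nearby buckets. The grid-cell hashing scheme, the `cell` size, and the toy two-class data are illustrative assumptions; the paper's distance-based hash may differ:

```python
import numpy as np

def pca_fit(X, dim):
    """Return (mean, components) for a PCA projection to `dim` dimensions."""
    mu = X.mean(0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:dim]

def hash_key(z, cell=1.0):
    # Quantize the first two PCA coordinates into a grid-cell bucket key.
    return (int(np.floor(z[0] / cell)), int(np.floor(z[1] / cell)))

def build_table(Z, cell=1.0):
    table = {}
    for i, z in enumerate(Z):
        table.setdefault(hash_key(z, cell), []).append(i)
    return table

def knn_predict(q, Z, labels, table, k=3, cell=1.0):
    # Gather candidates from the query's bucket and its 8 grid neighbours,
    # falling back to a full scan if the buckets hold too few points.
    kx, ky = hash_key(q, cell)
    cand = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            cand += table.get((kx + dx, ky + dy), [])
    if len(cand) < k:
        cand = list(range(len(Z)))
    cand = np.array(cand)
    d = np.linalg.norm(Z[cand] - q, axis=1)
    votes = labels[cand[np.argsort(d)[:k]]]
    return np.bincount(votes).argmax()

rng = np.random.default_rng(1)
# Two well-separated Gaussian "expression" classes in 20-D.
X = np.vstack([rng.normal(0, 1, (50, 20)), rng.normal(6, 1, (50, 20))])
y = np.array([0] * 50 + [1] * 50)
mu, comps = pca_fit(X, 2)
Z = (X - mu) @ comps.T
table = build_table(Z)
pred = knn_predict(Z[0], Z, y, table)
```

The efficiency gain comes from scanning only the handful of points that hash into neighbouring cells instead of the whole feature library.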

5.
Traditional dimensionality reduction algorithms fall into linear methods and manifold-learning methods, but in practical applications it is hard to determine which class is needed. This paper designs a combined dimensionality reduction algorithm that guarantees a linear reduction at least as good as principal component analysis while, on the manifold-learning side, revealing the manifold structure of the data. A Markov transition matrix is constructed over the high-dimensional data so that more similar nodes have larger transition probabilities, which uncovers the mapping from the high-dimensional data to a low-dimensional manifold. Experimental results show that on synthetic…
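The Markov-transition-matrix construction described here resembles a diffusion-map-style spectral embedding, which can be sketched as follows. This is an interpretation, not the paper's algorithm; the Gaussian similarity, `sigma`, and the random data are assumptions:

```python
import numpy as np

def markov_embedding(X, dim=2, sigma=1.0):
    """Build a row-stochastic Markov matrix from pairwise similarities and
    embed with its leading non-trivial eigenvectors (diffusion-map style)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))    # more similar -> larger entry
    P = K / K.sum(1, keepdims=True)       # transition probabilities
    w, v = np.linalg.eig(P)
    idx = np.argsort(-w.real)             # eigenvalue 1 first (trivial)
    # Skip the constant top eigenvector; scale by eigenvalues.
    return v.real[:, idx[1:dim + 1]] * w.real[idx[1:dim + 1]]

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
Y = markov_embedding(X)
```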

6.
Parallel processing is essential for large-scale analytics. Principal Component Analysis (PCA) is a well-known model for dimensionality reduction in statistical analysis, and computing it requires a demanding number of I/O and CPU operations. In this paper, we study how to compute PCA in parallel. We extend a previous sequential method to a highly parallel algorithm that can compute PCA in one pass over a large data set based on summarization matrices. We also study how to integrate our algorithm with a DBMS; our solution is based on a combination of parallel data set summarization via user-defined aggregations and a call to the MKL parallel variant of the LAPACK library to solve Singular Value Decomposition (SVD) in RAM. Our algorithm is theoretically shown to achieve linear speedup, linear scalability on data size, and quadratic time on dimensionality (but in RAM), spending most of the time on data set summarization, despite the fact that SVD has cubic time complexity on dimensionality. Experiments with large data sets on multicore CPUs show that our solution is much faster than the R statistical package as well as solving PCA with SQL queries. Benchmarking on multicore CPUs and a parallel DBMS running on multiple nodes confirms linear speedup and linear scalability.
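The one-pass summarization idea can be sketched as follows: a single scan accumulates the count, the linear sum, and the quadratic sum (the summarization matrices), after which the covariance and its SVD are solved on a small in-RAM matrix. This sketch omits the parallel aggregation and DBMS machinery; the chunking and all names are assumptions:

```python
import numpy as np

def pca_one_pass(chunks, dim):
    """One-pass PCA: accumulate n, L = sum(x), Q = sum(x x^T) over data
    chunks, then solve a small SVD on the d x d covariance in RAM."""
    n, L, Q = 0, None, None
    for X in chunks:                       # each chunk is (rows, d)
        n += X.shape[0]
        L = X.sum(0) if L is None else L + X.sum(0)
        Q = X.T @ X if Q is None else Q + X.T @ X
    mu = L / n
    cov = Q / n - np.outer(mu, mu)         # covariance from the summaries
    U, s, _ = np.linalg.svd(cov)
    return mu, U[:, :dim], s[:dim]

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))
# Streaming the data in chunks gives the same result as a single batch.
mu1, U1, s1 = pca_one_pass([X[:300], X[300:700], X[700:]], dim=3)
mu2, U2, s2 = pca_one_pass([X], dim=3)
```

Because the summaries are additive, each worker can summarize its partition independently and the partial sums can be merged, which is what makes the algorithm embarrassingly parallel.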

7.
Algorithms on streaming data have attracted increasing attention in the past decade. Among them, dimensionality reduction algorithms are of particular interest because of the demands of real tasks. Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two of the most widely used dimensionality reduction approaches. However, PCA is not optimal for general classification problems because it is unsupervised and ignores valuable label information. On the other hand, the performance of LDA is degraded by the limited number of available low-dimensional spaces and by the singularity problem. Recently, the Maximum Margin Criterion (MMC) was proposed to overcome the shortcomings of PCA and LDA. Nevertheless, the original MMC algorithm does not fit the streaming data model and thus cannot handle large-scale, high-dimensional data sets, so an effective, efficient, and scalable approach is needed. In this paper, we propose a supervised incremental dimensionality reduction algorithm, and an extension of it, to infer adaptive low-dimensional spaces by optimizing the maximum margin criterion. Experimental results on a synthetic dataset and real datasets demonstrate the superior performance of our proposed algorithm on streaming data.
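The Maximum Margin Criterion itself (in its batch, non-incremental form) projects onto the top eigenvectors of S_b − S_w; unlike LDA it needs no matrix inversion, which sidesteps the singularity problem. A minimal batch sketch with toy two-class data as an assumption; the paper's incremental streaming version is not reproduced:

```python
import numpy as np

def mmc(X, y, dim):
    """Maximum Margin Criterion: top eigenvectors of S_b - S_w
    (between-class minus within-class scatter); no inversion needed."""
    mu = X.mean(0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(0)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)
        Sw += (Xc - mc).T @ (Xc - mc)
    w, v = np.linalg.eigh(Sb - Sw)
    return v[:, np.argsort(w)[::-1][:dim]]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3, 1, (40, 10)), rng.normal(3, 1, (40, 10))])
y = np.array([0] * 40 + [1] * 40)
W = mmc(X, y, dim=1)
Z = X @ W          # 1-D projection should separate the two classes
```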

8.
In this paper, a generalized tensor subspace model is distilled from existing tensor dimensionality reduction algorithms. With this model, we investigate the orthogonality of the bases of the high-order tensor subspace and propose the Orthogonal Tensor Neighborhood Preserving Embedding (OTNPE) algorithm. We evaluate the algorithm by applying it to facial expression recognition, using both the 2nd-order gray-level raw pixels and the encoded 3rd-order tensor-formed Gabor features of facial expression images. The experiments show the excellent performance of our algorithm for the dimensionality reduction of tensor-formed data, especially when the data lie on a smooth and compact manifold embedded in the high-dimensional tensor space.

9.
This paper investigates the use of statistical dimensionality reduction (DR) techniques for discriminative low-dimensional embedding to enable affective movement recognition. Human movements are defined by a collection of sequential observations (time-series features) representing body joint angle or joint Cartesian trajectories. In this work, these sequential observations are modelled as temporal functions using B-spline basis function expansion, and dimensionality reduction techniques are adapted for application to the functional observations. The DR techniques adapted here are: Fisher discriminant analysis (FDA), supervised principal component analysis (PCA), and Isomap. These functional DR techniques, along with functional PCA, are applied to affective human movement datasets, and their performance is evaluated using leave-one-out cross-validation with a one-nearest-neighbour classifier in the corresponding low-dimensional subspaces. The results show that functional supervised PCA outperforms the other DR techniques examined in terms of classification accuracy and time resource requirements.

10.
To address the high dimensionality of facial features extracted with the MB-LBP algorithm, and the heavy computation required when these features are used directly for face recognition, a new face recognition method is proposed that combines MB-LBP with the Multilinear PCA algorithm. First, MB-LBP extracts features from the face images; next, Multilinear PCA reduces the dimensionality of the extracted features; finally, a nearest-neighbor classifier performs face recognition. Validation on the FERET face database shows that the method achieves a higher recognition rate than traditional PCA, block-based PCA, and the combination of LBP and PCA.

11.
Image retrieval using nonlinear manifold embedding
Can, Jun, Xiaofei, Chun, Jiajun. Neurocomputing, 2009, 72(16-18): 3922.
The huge number of images on the Web gives rise to content-based image retrieval (CBIR), as text-based search techniques cannot cater to the need to retrieve Web images precisely. However, CBIR comes with a fundamental flaw: the semantic gap between high-level semantic concepts and low-level visual features. Consequently, relevance feedback is introduced into CBIR to learn the subjective needs of users. In practical applications, however, the limited amount of user feedback is usually overwhelmed by the high dimensionality of the visual feature space. To address this issue, a novel semi-supervised learning method for dimensionality reduction, namely kernel maximum margin projection (KMMP), is proposed in this paper, based on our previous work on maximum margin projection (MMP). Unlike traditional dimensionality reduction algorithms such as principal component analysis (PCA) and linear discriminant analysis (LDA), which only see the global Euclidean structure, KMMP is designed to discover the local manifold structure. After projecting the images into a lower-dimensional subspace, KMMP significantly improves the performance of image retrieval. The experimental results on the Corel image database demonstrate the effectiveness of our proposed nonlinear algorithm.

12.
Synthesizing expressive facial animation is a very challenging topic within the graphics community. In this paper, we present an expressive facial animation synthesis system enabled by automated learning from facial motion capture data. Accurate 3D motions of the markers on the face of a human subject are captured while he/she recites a predesigned corpus, with specific spoken and visual expressions. We present a novel motion capture mining technique that "learns" speech coarticulation models for diphones and triphones from the recorded data. A phoneme-independent expression eigenspace (PIEES) that encloses the dynamic expression signals is constructed by motion signal processing (phoneme-based time-warping and subtraction) and principal component analysis (PCA) reduction. New expressive facial animations are synthesized as follows: first, the learned coarticulation models are concatenated to synthesize neutral visual speech according to novel speech input; then, a texture-synthesis-based approach is used to generate a novel dynamic expression signal from the PIEES model; and finally, the synthesized expression signal is blended with the synthesized neutral visual speech to create the final expressive facial animation. Our experiments demonstrate that the system can effectively synthesize realistic expressive facial animation.

13.
A Kernel PCA Interpretation of the Local Tangent Space Alignment Algorithm
Kernel-based dimensionality reduction and manifold learning are two effective and widely applied families of nonlinear dimensionality reduction techniques. They have different starting points and theoretical foundations, and previous research has rarely examined the connection between them. The LTSA algorithm uses the local structure of the data to construct a special kernel matrix and then performs kernel principal component analysis on it. Focusing on local tangent space alignment as a manifold learning algorithm, this paper studies the intrinsic connection between LTSA and kernel PCA. The analysis shows that LTSA is essentially a kernel-based principal component analysis technique.
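The kernel-PCA procedure that the paper maps LTSA onto can be sketched generically: double-center a kernel matrix and project onto its leading eigenvectors. With a linear kernel this reduces to ordinary PCA scores, which the sketch verifies; the LTSA-specific kernel construction is not reproduced here:

```python
import numpy as np

def kernel_pca(K, dim):
    """Kernel PCA on a precomputed kernel matrix: double-center K, then
    project onto its leading eigenvectors. LTSA can be read as this
    procedure with a kernel built from local tangent-space alignments."""
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                       # double-centering
    w, v = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:dim]
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# With a linear kernel K = X X^T, kernel PCA recovers ordinary PCA scores
# (up to a sign per component).
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
Y = kernel_pca(X @ X.T, dim=2)
Xc = X - X.mean(0)
_, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T
```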

14.
A Facial Animation Synthesis Method Based on MPEG-4 and PCA
Three-dimensional facial animation technology integrates multi-channel interaction and multi-function perception technologies, and is widely applied in virtual reality, virtual anchors, virtual conferencing, computer-aided instruction, medical research, film production, games and entertainment, and many other fields. This paper analyzes facial animation synthesis based on the MPEG-4 standard and addresses two problems that arise in practice, namely the excessive number of face patterns and the side effects of local adjustments, using clustering and principal component analysis (PCA) respectively. Experiments show that using PCA to handle local adjustment of face patterns can partially realize the expression of emotion during animation, greatly improving the operability of facial animation synthesis.

15.
To effectively improve the performance of spoken emotion recognition, nonlinear dimensionality reduction must be performed on speech data lying on a nonlinear manifold embedded in a high-dimensional acoustic space. In this paper, a new supervised manifold learning algorithm for nonlinear dimensionality reduction, called the modified supervised locally linear embedding algorithm (MSLLE), is proposed for spoken emotion recognition. MSLLE aims at enlarging the interclass distance while shrinking the intraclass distance, in an effort to promote the discriminating power and generalization ability of low-dimensional embedded data representations. To compare the performance of MSLLE, not only three unsupervised dimensionality reduction methods, i.e., principal component analysis (PCA), locally linear embedding (LLE), and isometric mapping (Isomap), but also five supervised dimensionality reduction methods, i.e., linear discriminant analysis (LDA), supervised locally linear embedding (SLLE), local Fisher discriminant analysis (LFDA), neighborhood component analysis (NCA), and maximally collapsing metric learning (MCML), are used to perform dimensionality reduction on spoken emotion recognition tasks. Experimental results on two emotional speech databases, i.e., the spontaneous Chinese database and the acted Berlin database, confirm the validity and promising performance of the proposed method.

16.
The human heart is a complex system that reveals many clues about its condition in its electrocardiogram (ECG) signal, and ECG monitoring is the most important and efficient way of preventing heart attacks. ECG analysis and recognition are both important and attractive topics in modern medical research. The purpose of this paper is to develop an algorithm that combines a kernel method, locally linear embedding (LLE), principal component analysis (PCA), and support vector machine (SVM) algorithms for dimensionality reduction, feature extraction, and classification of given ECG signals. A nonlinear dimensionality reduction kernel method based on LLE is proposed to reduce the high dimensionality of the variational ECG signals, and the principal characteristics of the signals are extracted from the original database by means of PCA, each signal representing a single, complete heartbeat. The SVM method is then applied to classify the ECG data into several categories of heart disease. Experimental results demonstrate that the proposed method performs comparably to, and sometimes better than, other ECG recognition techniques, indicating a viable and accurate technique.

17.
To address the degradation of clustering accuracy when principal component analysis (PCA) is used to reduce high-dimensional data, a new notion of attribute space is proposed; combining attribute space with information entropy yields a feature-similarity-based reduction criterion and a new dimensionality reduction algorithm, EN-PCA. To address the poor interpretability of reduced features (which are linear combinations of the original features) and the inflexibility of the input, a ridge-regression-based sparse principal component algorithm (ESPCA) is proposed. ESPCA takes the principal component reduction result as input and obtains sparse results without iteration, increasing flexibility and solution speed. Finally, on the reduced data, the initialization, selection, crossover, and mutation operations of the genetic algorithm are improved to address its slow clustering convergence, yielding a new clustering algorithm, GKA++. Experimental analysis shows that EN-PCA performs stably and that GKA++ performs well in terms of clustering validity and efficiency.

18.
Process mining techniques have been used to analyze event logs from information systems in order to derive useful patterns. However, in the big data era, real-life event logs are huge, unstructured, and complex, so traditional process mining techniques have difficulties analyzing big logs. To reduce complexity during the analysis, trace clustering can be used to group similar traces together and to mine more structured and simpler process models for each of the clusters locally. However, the high dimensionality of the feature space in which all the traces are represented poses problems for trace clustering. In this paper, we study the effect of applying dimensionality reduction (preprocessing) techniques on the performance of trace clustering. In our experimental study we use three popular feature transformation techniques: singular value decomposition (SVD), random projection (RP), and principal component analysis (PCA), together with the state-of-the-art trace clustering in process mining. The experimental results on a dataset constructed from a real event log of patient treatment processes in a Dutch hospital show that dimensionality reduction can improve trace clustering performance with respect to computation time and the average fitness of the mined local process models.
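Of the three preprocessing techniques compared, random projection is the simplest to sketch: a data-independent Gaussian map that approximately preserves pairwise distances (the Johnson-Lindenstrauss property), which is why it is a cheap preprocessing step before clustering. The dimensions and seeds below are illustrative assumptions:

```python
import numpy as np

def random_projection(X, dim, seed=0):
    """Gaussian random projection: a data-independent linear map that
    approximately preserves pairwise distances."""
    rng = np.random.default_rng(seed)
    R = rng.normal(size=(X.shape[1], dim)) / np.sqrt(dim)
    return X @ R

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 1000))    # e.g. 100 traces, 1000 features
Y = random_projection(X, dim=300)
# Pairwise distances are roughly preserved after projection.
d_hi = np.linalg.norm(X[0] - X[1])
d_lo = np.linalg.norm(Y[0] - Y[1])
```

Unlike SVD or PCA, no pass over the data is needed to build the projection, so the preprocessing cost is a single matrix multiplication.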

19.
C1 features are applied to facial expression recognition in static images, and a new expression recognition algorithm based on biologically inspired features and SVM is proposed. C1 features are extracted from the face images, their dimensionality is reduced with PCA+LDA, and an SVM performs the classification. Experimental results on the JAFFE and Extended Cohn-Kanade (CK+) facial expression databases show that the algorithm achieves a high recognition rate and is an effective method for facial expression recognition.

20.
Principal Component Analysis (PCA) is perhaps the most prominent learning tool for dimensionality reduction in pattern recognition and computer vision. However, the ℓ2-norm employed by standard PCA is not robust to outliers. In this paper, we propose a kernel PCA method for fast and robust PCA, which we call Euler-PCA (e-PCA). In particular, our algorithm utilizes a robust dissimilarity measure based on the Euler representation of complex numbers. We show that Euler-PCA retains PCA's desirable properties while suppressing outliers. Moreover, we formulate Euler-PCA in an incremental learning framework which allows for efficient computation. In our experiments, we apply Euler-PCA to three different computer vision applications, for which our method performs comparably with other state-of-the-art approaches.
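The Euler representation underlying e-PCA maps each (normalized) feature onto the complex unit circle, so grossly corrupted values stay bounded before PCA is applied. A minimal sketch, assuming features scaled to [0, 1] and a specific `alpha` value; both assumptions, as is the toy outlier data, and the paper's incremental formulation is not reproduced:

```python
import numpy as np

def euler_pca(X, dim, alpha=1.9):
    """Euler-PCA sketch: map each feature x to the unit circle,
    z = exp(i * alpha * pi * x) / sqrt(2), then run (complex) PCA via SVD.
    The bounded representation damps the influence of outlier values."""
    Z = np.exp(1j * alpha * np.pi * X) / np.sqrt(2)
    mu = Z.mean(0)
    _, _, Vt = np.linalg.svd(Z - mu, full_matrices=False)
    W = Vt[:dim].conj().T            # principal subspace (complex)
    return (Z - mu) @ W, mu, W

rng = np.random.default_rng(0)
X = rng.uniform(size=(40, 16))
X[0, :4] = 100.0                     # gross outlier values stay bounded
Y, mu, W = euler_pca(X, dim=3)
```

Note that the outlier row cannot dominate the decomposition: after the Euler map every entry has magnitude 1/sqrt(2), regardless of how large the raw value was.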
