Similar Articles
20 similar records found.
1.
CuBICA, an improved method for independent component analysis (ICA) based on the diagonalization of cumulant tensors, is proposed. It builds on Comon's algorithm but takes third- and fourth-order cumulant tensors into account simultaneously. The underlying contrast function is mathematically much simpler and has a more intuitive interpretation, and is therefore easier to optimize and approximate. A comparison with Comon's algorithm and three other ICA algorithms on different data sets demonstrates its performance.
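As a quick numerical illustration of the higher-order statistics that such cumulant-based contrasts rely on: for a centered scalar signal, the third and fourth cumulants reduce to simple moment expressions and vanish for Gaussian data. A minimal sketch on synthetic data (not from the paper):

```python
import numpy as np

def cumulants(x):
    """Third and fourth cumulants of a 1-D signal.

    After centering: k3 = E[x^3], k4 = E[x^4] - 3*E[x^2]^2.
    Both vanish for Gaussian data, which is what cumulant-based
    ICA contrasts exploit to measure non-Gaussianity.
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    m2 = np.mean(x**2)
    k3 = np.mean(x**3)
    k4 = np.mean(x**4) - 3.0 * m2**2
    return k3, k4

rng = np.random.default_rng(0)
g = rng.standard_normal(200_000)   # Gaussian: k3 ~ 0, k4 ~ 0
u = rng.uniform(-1, 1, 200_000)    # uniform: k4 < 0 (sub-Gaussian)
print(cumulants(g), cumulants(u))
```

CuBICA optimizes a contrast built from whole tensors of such cumulants across channels; the scalar case above only shows why they serve as a non-Gaussianity measure.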

2.
Quadratic optimization for simultaneous matrix diagonalization
Simultaneous diagonalization of a set of matrices is a technique that has numerous applications in statistical signal processing and multivariate statistics. Although objective functions in a least-squares sense can be easily formulated, their minimization is not trivial, because constraints and fourth-order terms are usually involved. Most known optimization algorithms are, therefore, subject to certain restrictions on the class of problems: orthogonal transformations, sets of symmetric, Hermitian, or positive definite matrices, to name a few. In this paper, we present a new algorithm called QDIAG that splits the overall optimization problem into a sequence of simpler second-order subproblems. There are no restrictions imposed on the transformation matrix, which may be nonorthogonal, indefinite, or even rectangular, and there are no restrictions regarding the symmetry and definiteness of the matrices to be diagonalized, except for one of them. We apply the new method to second-order blind source separation and show that the algorithm converges fast and reliably. It allows for an implementation with a complexity independent of the number of matrices and is, therefore, particularly suitable for problems dealing with large sets of matrices.
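The special case of exactly two symmetric matrices, one of them positive definite, admits a closed-form simultaneous diagonalizer, which helps build intuition for what algorithms like QDIAG optimize over larger sets. A minimal sketch with synthetic matrices (whiten with respect to one matrix, then eigendecompose the other); this is not the QDIAG algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))                  # unknown mixing
C1 = A @ np.diag([1.0, 2.0, 3.0, 4.0]) @ A.T     # positive definite
C2 = A @ np.diag([4.0, 1.0, 3.0, 2.0]) @ A.T     # same congruence

# Step 1: whiten with respect to C1 so that it becomes the identity.
s, U = np.linalg.eigh(C1)
P = U / np.sqrt(s)                               # P.T @ C1 @ P = I

# Step 2: an orthogonal eigenbasis of the whitened C2 finishes the
# job, since orthogonal transforms leave the identity invariant.
_, V = np.linalg.eigh(P.T @ C2 @ P)
W = P @ V                                        # joint diagonalizer

def offdiag(M):
    return np.abs(M - np.diag(np.diag(M))).max()

T1, T2 = W.T @ C1 @ W, W.T @ C2 @ W
print(offdiag(T1), offdiag(T2))                  # both near zero
```

With more than two matrices an exact common diagonalizer generally does not exist, which is why approximate methods such as QDIAG minimize the residual off-diagonal energy instead.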

3.
A methodology based on the signal separation technique of extended independent component analysis (ICA) is devised to analyse saccade-related electroencephalogram (EEG) waveforms. The methodology enables saccade-related components to be successfully extracted from the EEG mixtures and the brain regions responsible for their generation to be identified.

4.
The alignment of electromagnetic leakage traces and the selection of effective points are important research directions in information security. To address the problem of excessive trace misalignment, a new alignment method, double-peak correlation alignment, is proposed; it effectively suppresses excessive trace shifting while achieving precise alignment. The selection of effective points is realized via independent component analysis (ICA): unknown source signals are recovered from the electromagnetic leakage traces, and these source signals serve as feature points for classification analysis. Four methods, ICA, principal component analysis (PCA), PCA-ICA, and ICA-PCA, were used to reduce the dimensionality of the data, and a support vector machine (SVM) was used to classify and compare the reduced data. The results show that in the range of 10 to 100 dimensions, PCA-ICA classifies best, followed by ICA, with ICA-PCA performing worst; in the range of 100 to 900 dimensions, the classification performance of PCA and ICA-PCA increases almost linearly with the number of dimensions.

5.
A method is presented for the processing of temporal image sequences to enhance a desired process and suppress an undesired (interfering) process and random noise. Furthermore, the processed information is contained in a single frame which is easily interpreted. The method consists of collecting information about the desired and interfering processes from the frames of the given image sequence. The information is in the form of vectors that characterize the temporal properties of the processes. Matrices are formed by performing outer product expansions on these vectors and an eigenvector matrix is found which will simultaneously diagonalize these matrices. By calculating the inner product of a selected eigenvector from this matrix with the image sequence, an enhanced image of the desired process is obtained. A parameter can be adjusted which will increase the amount of suppression for either random noise or the interfering process. At one limit setting of this parameter, a matched filter for the desired process results, while at the other extreme, very high attenuation of the interfering process will occur. Simulations which demonstrate the effectiveness of this technique are presented along with results obtained by processing a radiographic temporal image sequence.

6.
Independent component analysis (ICA), an efficient higher order statistics (HOS) based blind source separation technique, has been successfully applied in various fields. In this paper, we provide an overview of the applications of ICA in multiple-input multiple-output (MIMO) wireless communication systems, and introduce some of the important issues surrounding them. First, we present an ICA based blind equalization scheme for MIMO orthogonal frequency division multiplexing (OFDM) systems, with linear precoding for ambiguity elimination. Second, we discuss three peak-to-average power ratio (PAPR) reduction schemes, which do not introduce any spectral overhead. Third, we investigate the application of ICA to blind compensation for inphase/quadrature (I/Q) imbalance in MIMO OFDM systems. Finally, we present an ICA based semi-blind layer space-frequency equalization (LSFE) structure for single-carrier (SC) MIMO systems. Simulation results show that the ICA based equalization approach provides a much better performance than the subspace method, with significant PAPR reduction. The ICA based I/Q compensation approach outperforms not only the previous compensation methods, but also the case with perfect channel state information (CSI) and no I/Q imbalance, due to additional frequency diversity obtained. The ICA based semi-blind LSFE receiver outperforms its OFDM counterpart significantly with a training overhead of only 0.05%.

7.
Signal Processing, 1998, 64(3): 301-313
A number of neural learning rules have been recently proposed for independent component analysis (ICA). The rules are usually derived from information-theoretic criteria such as maximum entropy or minimum mutual information. In this paper, we show that, in fact, ICA can be performed by very simple Hebbian or anti-Hebbian learning rules, which may have only weak relations to such information-theoretical quantities. Rather surprisingly, practically any nonlinear function can be used in the learning rule, provided only that the sign of the Hebbian/anti-Hebbian term is chosen correctly. In addition to the Hebbian-like mechanism, the weight vector is here constrained to have unit norm, and the data is preprocessed by prewhitening, or sphering. These results imply that one can choose the nonlinearity so as to optimize desired statistical or numerical criteria.
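A minimal one-unit sketch of such a rule on synthetic, prewhitened data: a batch Hebbian update with the cubic nonlinearity and a unit-norm constraint climbs the fourth moment of the output. The mixing matrix, the Laplacian (super-Gaussian) sources, and the positive Hebbian sign are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
S = rng.laplace(size=(2, n))                    # super-Gaussian sources
X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ S      # observed mixtures

# Prewhitening (sphering), as the paper assumes.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = (E / np.sqrt(d)).T @ X                      # Cov(Z) = I

# Batch Hebbian rule with g(y) = y^3 and a unit-norm constraint:
# w <- normalize(w + eta * E[z * g(w.z)]).  On whitened data this
# climbs E[(w.z)^4], i.e. it maximizes kurtosis; the + sign is the
# correct choice for super-Gaussian sources.
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(200):
    y = w @ Z
    w = w + 0.5 * (Z @ y**3) / n
    w /= np.linalg.norm(w)

# The extracted component should match one source up to sign/scale.
y = w @ Z
corr = max(abs(np.corrcoef(y, S[0])[0, 1]),
           abs(np.corrcoef(y, S[1])[0, 1]))
print(corr)
```

For sub-Gaussian sources the sign of the Hebbian term must be flipped, which is exactly the sign condition the abstract emphasizes.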

8.
姚彦茹, 吕威. 信息技术 (Information Technology), 2007, 31(6): 46-49, 54
A blind signal sorting algorithm based on a parallel processing structure is introduced, and independent component analysis is used to study the separation of source signals from mixtures of mutually independent, unknown sources. The radar signal sorting algorithm studied, based on fast independent component analysis with negentropy maximization, overcomes the limitations of traditional radar signal sorting methods, which fail to sort correctly when signals are complex or parameter errors are present.

9.
The author presents a method of independent component analysis which assesses the most probable number of source sequences from a larger number of observed sequences and estimates the unknown source sequences and mixing matrix. The estimation of the number of true sources is regarded as a model-order estimation problem and is tackled under a Bayesian paradigm. The method is shown to give good results on both synthetic and real data.

10.
Independent component approach to the analysis of EEG and MEG recordings
Multichannel recordings of the electromagnetic fields emerging from neural currents in the brain generate large amounts of data. Suitable feature extraction methods are, therefore, useful to facilitate the representation and interpretation of the data. Recently developed independent component analysis (ICA) has been shown to be an efficient tool for artifact identification and extraction from electroencephalographic (EEG) and magnetoencephalographic (MEG) recordings. In addition, ICA has been applied to the analysis of brain signals evoked by sensory stimuli. This paper reviews our recent results in this field.

11.
Accurate force prediction from surface electromyography (EMG) forms an important methodological challenge in biomechanics and kinesiology. In a previous study (Staudenmann et al., 2006), we illustrated force estimates based on analyses lent from multivariate statistics. In particular, we showed the advantages of principal component analysis (PCA) on monopolar high-density EMG (HD-EMG) over conventional electrode configurations. In the present study, we further improve force estimates by exploiting the correlation structure of the HD-EMG via independent component analysis (ICA). HD-EMG from the triceps brachii muscle and the extension force of the elbow were measured in 11 subjects. The root mean square difference (RMSD) and correlation coefficients between predicted and measured force were determined. Relative to using the monopolar EMG data, PCA yielded a 40% reduction in RMSD. ICA yielded a significant further reduction of up to 13% RMSD. Since ICA improved the PCA-based estimates, the independent structure of EMG signals appears to contain relevant additional information for the prediction of muscle force from surface HD-EMG.

12.
We apply a recently developed multivariate statistical data analysis technique, so-called blind source separation (BSS) by independent component analysis, to process magnetoencephalogram (MEG) recordings of near-dc fields. The extraction of near-dc fields from MEG recordings has great relevance for medical applications, since slowly varying dc phenomena have been found, e.g., in cerebral anoxia and spreading depression in animals. Comparing several BSS approaches, it turns out that an algorithm based on temporal decorrelation successfully extracted a dc component that was induced in the auditory cortex by the presentation of music. The task is challenging because of the limited amount of available data and the corruption by outliers, which makes it an interesting real-world testbed for studying the robustness of ICA methods.
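The temporal-decorrelation family of BSS algorithms referred to here can be sketched with the classic AMUSE recipe: whiten the data, then eigendecompose a symmetrized time-lagged covariance. The two synthetic sources and the lag below are illustrative assumptions, not the paper's data:

```python
import numpy as np

n = 20_000
t = np.arange(n)
# Two sources with distinct spectra (hence distinct autocorrelations).
s1 = np.sin(2 * np.pi * 0.010 * t)
s2 = np.sign(np.sin(2 * np.pi * 0.037 * t))
S = np.vstack([s1, s2])
X = np.array([[1.0, 0.7], [0.5, 1.0]]) @ S      # observed mixtures

# Step 1: whiten.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = (E / np.sqrt(d)).T @ X

# Step 2: eigenvectors of a symmetrized lagged covariance unmix,
# provided the sources' autocorrelations differ at this lag.
tau = 5
C = Z[:, :-tau] @ Z[:, tau:].T / (n - tau)
C = (C + C.T) / 2
_, V = np.linalg.eigh(C)
Y = V.T @ Z                                     # recovered sources

corr1 = max(abs(np.corrcoef(Y[0], s1)[0, 1]),
            abs(np.corrcoef(Y[1], s1)[0, 1]))
corr2 = max(abs(np.corrcoef(Y[0], s2)[0, 1]),
            abs(np.corrcoef(Y[1], s2)[0, 1]))
print(corr1, corr2)
```

Unlike cumulant-based ICA, this second-order approach needs no non-Gaussianity, only temporally structured sources, which suits slowly varying near-dc fields.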

13.
In hyperspectral image analysis, principal components analysis (PCA) and the maximum noise fraction (MNF) are the most commonly used techniques for dimensionality reduction (DR), referred to as PCA-DR and MNF-DR, respectively. The criteria used by PCA-DR and MNF-DR are data variance and signal-to-noise ratio (SNR), which are designed to measure data second-order statistics. This paper presents an independent component analysis (ICA) approach to DR, called ICA-DR, which uses mutual information as a criterion to measure data statistical independency beyond second-order statistics. As a result, the ICA-DR can capture information that cannot be retained or preserved by second-order statistics-based DR techniques. In order for the ICA-DR to perform effectively, the virtual dimensionality (VD) is introduced to estimate the number of dimensions that need to be retained, as opposed to the energy percentage that has been used by PCA-DR and MNF-DR to determine the energies contributed by signal sources and noise. Since there is no prioritization among components generated by the ICA-DR, due to the use of random initial projection vectors, we further develop criteria and algorithms to measure the significance of the information contained in each ICA-generated component for component prioritization. Finally, a comparative study and analysis is conducted among the three DR techniques, PCA-DR, MNF-DR, and ICA-DR, in two applications, endmember extraction and data compression, where the proposed ICA-DR is shown to provide advantages over PCA-DR and MNF-DR.

14.
A band selection method for hyperspectral images using independent component analysis
A band selection method for hyperspectral images based on independent component analysis (ICA), suited to target detection, is proposed. First, virtual dimensionality (VD) estimation determines the number of important independent components; the independent components generated by FastICA are ranked, and the top-ranked ones are taken as the important components. The bands are then ranked by their average contribution to these important components. Finally, a spectral similarity measure removes redundant bands from the ranked list, ensuring that the final band subset retains most of the target information. Target detection experiments on two real hyperspectral images acquired by AVIRIS show that the method outperforms two other band selection methods based on second-order statistics: the selected bands account for only 12% and 3% of all bands, respectively, while the detection rates of the adaptive cosine estimator (ACE) and adaptive matched filter (AMF) detectors improve by 30% and 15% over using all bands.

15.
(Continued from the previous issue.) With the 61T screen mesh, range analysis was used to examine the variation of resistor thickness; the order of influence of the factors on thickness is: speed > pressure > curing time > curing temperature. With the 120T screen mesh, range analysis was applied to the sheet resistance; according to the principles of range analysis and the tabulated data, the order of influence of the factors on resistance is: speed > pressure > curing temperature > curing time.
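The range analysis (极差分析) used above ranks factors by the spread of the mean response across each factor's levels. A minimal sketch on a hypothetical two-level orthogonal array (the data are illustrative, not the paper's measurements):

```python
import numpy as np

# Hypothetical L4(2^3) orthogonal array: three factors at two levels.
# Rows = experimental runs, columns = factor level per run.
levels = np.array([
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])
thickness = np.array([10.2, 11.8, 14.1, 15.3])   # measured response

def factor_range(col):
    """Range R = spread between the mean responses at each level."""
    means = [thickness[levels[:, col] == lv].mean() for lv in (0, 1)]
    return max(means) - min(means)

R = [factor_range(j) for j in range(3)]
# A larger range means a stronger influence on the response, so
# sorting by R gives the primary-to-secondary factor ordering.
ranking = sorted(range(3), key=lambda j: -R[j])
print(R, ranking)
```

The orthogonality of the array is what lets each factor's level means be compared fairly despite the other factors also changing between runs.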

16.
This article describes how to control the thickness of embedded resistors prepared by screen printing. It focuses on four factors: pressure, speed, curing temperature, and curing time, and uses an optimized experimental design to identify the main factors affecting the thickness of the resistive film layer. The experiments establish the order of influence of each factor on film thickness and determine the optimal parameter settings.

17.
In this letter, the new concept of the Relative Principal Component (RPC) and the method of RPC Analysis (RPCA) are put forward, and related concepts such as the Relative Transform (RT) and Rotundity Scatter (RS) are introduced. The new method overcomes some disadvantages of classical Principal Component Analysis (PCA) when the data are rotundity-scattered. The RPCs selected by RPCA are more representative, and their geometric significance is more notable, so the new algorithm has wide applicability. Its performance and effectiveness are demonstrated through the geometrical interpretation proposed.

18.
Adaptive Principal component EXtraction (APEX) and applications
The authors describe a neural network model (APEX) for multiple principal component extraction. All the synaptic weights of the model are trained with the normalized Hebbian learning rule. The network structure features a hierarchical set of lateral connections among the output units which serve the purpose of weight orthogonalization. This structure also allows the size of the model to grow or shrink without the need for retraining the old units. The exponential convergence of the network is formally proved, and there is a significant performance improvement over previous methods. By establishing an important connection with the recursive least squares algorithm, the authors provide the optimal size for the learning step-size parameter, which leads to a significant improvement in convergence speed. This is in contrast with previous neural PCA models, which lack such numerical advantages. The APEX algorithm is also parallelizable, allowing the concurrent extraction of multiple principal components. Furthermore, APEX is shown to be applicable to the constrained PCA problem, where the signal variance is maximized under external orthogonality constraints. The authors then study various principal component analysis (PCA) applications that might benefit from the adaptive solution offered by APEX, in particular applications in spectral estimation, signal detection, and image compression and filtering; other application domains are also briefly outlined.
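The single-unit ancestor of APEX is Oja's normalized Hebbian rule, which extracts the first principal component; APEX then stacks such units with lateral connections to extract further components. A minimal sketch of the single-unit rule on synthetic data (the covariance and step size are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic data with a dominant principal direction.
C = np.array([[3.0, 1.0], [1.0, 2.0]])           # target covariance
X = np.linalg.cholesky(C) @ rng.standard_normal((2, 20_000))

# Oja's rule: dw = eta * y * (x - y*w) with y = w.x.  The -y*w term
# is the implicit normalization that keeps |w| near 1; APEX uses
# lateral connections to orthogonalize further units against this one.
w = rng.standard_normal(2)
eta = 0.01
for x in X.T:
    y = w @ x
    w += eta * y * (x - y * w)

# Compare against the leading eigenvector of the sample covariance.
_, vecs = np.linalg.eigh(np.cov(X))
v1 = vecs[:, -1]
alignment = abs(w @ v1) / np.linalg.norm(w)
print(alignment)
```

The abstract's point about the step size is visible here: eta trades convergence speed against steady-state jitter, and the RLS connection gives a principled choice instead of hand tuning.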

19.
In this paper, we survey and compare different algorithms that, given an overcomplete dictionary of elementary functions, solve the problem of simultaneous sparse signal approximation with a common sparsity profile induced by an ℓp-ℓq mixed norm. Such a problem is also known in the statistical learning community as the group lasso problem. We have gathered and detailed different algorithmic results concerning these two equivalent approximation problems, and have enriched the discussion by providing relations between several algorithms. Experimental comparisons of the detailed algorithms have also been carried out. The main lesson learned from these experiments is that, depending on the performance measure, greedy approaches and iterative reweighted algorithms are the most efficient algorithms in terms of computational complexity, sparsity recovery, or mean-square error.
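Proximal and iterative-reweighted algorithms for the ℓ2-ℓ1 instance of such mixed norms hinge on a group-wise shrinkage step, block soft-thresholding, which zeroes whole groups at once and thereby enforces the common sparsity profile. A minimal sketch (rows as groups; the matrix and threshold are illustrative):

```python
import numpy as np

def block_soft_threshold(W, lam):
    """Proximal operator of lam * (sum of row-wise Euclidean norms).

    Each row (group) is shrunk toward zero by lam in norm, and zeroed
    entirely when its norm falls below lam; this is the mechanism that
    yields a sparsity profile shared across all columns.
    """
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return scale * W

W = np.array([[3.0, 4.0],    # row norm 5: shrunk to norm 4
              [0.3, 0.4]])   # row norm 0.5 < lam: zeroed
P = block_soft_threshold(W, lam=1.0)
print(P)
```

Embedded in a proximal-gradient loop, this operator is applied after each gradient step on the data-fit term, which is how many of the surveyed iterative algorithms proceed.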

20.