1.
This paper is concerned with the solution of linear systems with coefficient matrices which are Vandermonde-like matrices modified by adding low-rank corrections. Hereafter we refer to these matrices as modified Vandermonde-like matrices. The solution of modified Vandermonde-like linear systems arises in approximation theory both when we use Remez algorithms for finding minimax approximations and when we consider least squares problems with constraints. Our approach relies on the computation of an inverse QR factorization. More specifically, we show that some classical orthogonalization schemes for m × n, m ≥ n, Vandermonde-like matrices can be extended to compute efficiently an inverse QR factorization of modified Vandermonde-like matrices. The resulting algorithm has a cost of O(mn) arithmetical operations. Moreover it requires O(m) storage since the matrices Q and R are not stored. Received: January 1997 / Accepted: November 1997
2.
Multimedia Tools and Applications - The feature-based image registration method has better performance in terms of robustness to intensity variance, but its accuracy of the feature-based...
3.
The problem of low-rank matrix factorization with missing data has attracted significant attention in fields related to computer vision. The previous model mainly minimizes the total error of the recovered low-rank matrix on the observed entries. It may produce an optimal solution with little physical meaning. This paper gives a theoretical analysis of the sensitivities of the original model and proposes a modified constrained model and iterative methods for solving the constrained problem. We show that solutions of the original model can be arbitrarily far from each other. Two kinds of sufficient conditions for this catastrophic phenomenon are given. In the general case, we also give a lower bound on the error between an ε-optimal solution that is practically obtained in computation and a theoretically optimal solution. A constrained model on the missing entries is considered for this missing data problem. We propose a two-step projection method for solving the constrained problem. We also modify the method with a successive alternating technique. The proposed algorithm, named SALS, is easy to implement and converges very fast even for large matrices. Numerical experiments on simulated data and real examples are given to illustrate the behavior of SALS.
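For orientation, the alternating least squares idea behind methods of this kind can be sketched in Python as follows. This is a generic, illustrative implementation that fits the factors only on the observed entries; the function name, defaults, and regularization are assumptions, and it is not the authors' SALS algorithm.

import numpy as np

def als_missing(X, mask, rank, n_iter=100, reg=1e-3):
    # Generic alternating least squares on observed entries only;
    # a simplified stand-in, not the constrained SALS method of the paper.
    m, n = X.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    I = reg * np.eye(rank)
    for _ in range(n_iter):
        for i in range(m):                      # refit row i of U
            obs = mask[i]                       # observed columns in row i
            Vo = V[obs]
            U[i] = np.linalg.solve(Vo.T @ Vo + I, Vo.T @ X[i, obs])
        for j in range(n):                      # refit row j of V
            obs = mask[:, j]                    # observed rows in column j
            Uo = U[obs]
            V[j] = np.linalg.solve(Uo.T @ Uo + I, Uo.T @ X[obs, j])
    return U, V                                 # X is approximated by U @ V.T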
4.
A general procedure to construct ADI methods for multidimensional problems was originated by Beam and Warming using the method of approximate factorization. In this paper, we extend the method of approximate factorization to solve a viscous wave equation. The method can be combined with any implicit linear multistep method for the time integration of the wave equation. The stability of the factored schemes and their underlying schemes is analyzed based on a discrete Fourier analysis and the energy method. Convergence proofs are presented and numerical results supporting our analysis are provided.
5.
In this article, we present a new structured wavelet algorithm to solve the Ornstein-Zernike integral equation for simple liquids. This algorithm is based on the discrete wavelet transform of radial distribution functions and on different low-rank approximations of the obtained convolution matrices. The fundamental properties of wavelet bases, such as the interpolation properties and orthogonality, are employed to improve the convergence and speed of the algorithm. In order to solve the integral equation we have applied a combined scheme in which the coarse part of the solution is calculated using wavelets and a Newton-Raphson algorithm, while the fine part is solved by direct iteration. Tests have indicated that the proposed procedure is more effective than the conventional method based on hybrid algorithms.
6.
We describe the design, implementation and experimental evaluation of new algorithms for computing the approximate factorization of multivariate polynomials with complex coefficients that contain numerical noise. Our algorithms are based on a generalization of the differential forms introduced by W. Ruppert and S. Gao to many variables, and use singular value decomposition or structured total least squares approximation and Gauss–Newton optimization to numerically compute the approximate multivariate factors. We demonstrate on a large set of benchmark polynomials that our algorithms efficiently yield approximate factorizations within the coefficient noise even when the relative error in the input is substantial (10^-3).
7.
The successive projection algorithm (SPA) can quickly solve a nonnegative matrix factorization problem under a separability assumption. Even if noise is added to the problem, SPA is robust as long as the perturbations caused by the noise are small. In particular, robustness against noise should be high when handling problems arising from real applications. The preconditioner proposed by Gillis and Vavasis (SIAM J Optim 25(1):677–698, 2015) makes it possible to enhance the noise robustness of SPA, but at an additional computational cost. The construction of the preconditioner contains a step to compute the top-k truncated singular value decomposition of an input matrix. It is known that this decomposition provides the best rank-k approximation to the input matrix, that is, the matrix with the smallest approximation error among all matrices of rank at most k. This step is an obstacle to an efficient implementation of the preconditioned SPA. To address the cost issue, we propose a modification of the algorithm for constructing the preconditioner. Whereas the original algorithm uses the best rank-k approximation, our modification uses an alternative. Ideally, this alternative should have high approximation accuracy and low computational cost. To ensure this, our modification employs a rank-k approximation produced by an SPA-based algorithm. We analyze the accuracy of the approximation and evaluate the computational cost of the algorithm. We then present an empirical study revealing the actual performance of the SPA-based rank-k approximation algorithm and the modified preconditioned SPA.
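As background, the basic (unpreconditioned) successive projection algorithm is short enough to sketch. The Python fragment below is a textbook-style illustration under the separability assumption; it is not the preconditioned variant analyzed in the abstract, and the function name is only illustrative.

import numpy as np

def spa(M, k):
    # Basic SPA: repeatedly pick the column with the largest residual norm
    # and project the remaining columns onto its orthogonal complement.
    R = np.array(M, dtype=float)
    selected = []
    for _ in range(k):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))
        selected.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(u, u @ R)              # remove the chosen direction
    return selected                             # indices of the extracted columns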
8.
A new pseudospectral technique for integrating the incompressible Navier-Stokes equations with one nonperiodic boundary in a Cartesian or cylindrical coordinate system is presented. The algorithm makes use of a Chebyshev collocation technique in the nonperiodic direction. Special attention is paid to the approximate factorization of the discrete Navier-Stokes equations in cylindrical geometry, leading to a fast and robust numerical procedure with spectral accuracy. The new approach is an efficient tool for further investigation of turbulent shear flows and for testing physical hypotheses and alternative algorithms. Classical problems of incompressible fluid flow in an infinite plane channel and in annuli at transitional Reynolds numbers are taken as model problems.
9.
The size of the smallest structured destabilizing perturbation for a linear time-invariant system can be calculated via the structured singular value (μ). The function μ can be bounded above by the solution of a convex optimization problem, and in general there is a gap between μ and the convex bound. This paper gives an alternative characterization of μ which is used to study this gap for low-rank matrices. The low-rank characterization provides an easily computed bound which can potentially be significantly better than the standard convex bound. This is used to find new examples with larger gaps than previously known.
10.
A Schur-type algorithm is presented for the simultaneous triangular factorization of a given (non-degenerate) structured matrix and its inverse. The algorithm takes the displacement generator of a Hermitian, strongly regular matrix R as an input, and computes the displacement generator of the inverse matrix R^{-1} as an output. From these generators we can directly deduce the LD^{-1}L^* (lower-diagonal-upper) decomposition of R, and the UD^{-1}U^* (upper-diagonal-lower) decomposition of R^{-1}. The computational complexity of the algorithm is O(rn^2) operations, where n and r denote the size and the displacement rank of R, respectively. Moreover, this method is especially suited for parallel (systolic array) implementations: using n processors the algorithm can be carried out in O(n) steps.
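For readers who want to experiment with symmetric factorizations of this LDL* form on dense matrices, SciPy exposes a generic routine. The snippet below is only a dense baseline for comparison; it does not exploit displacement structure and is not the Schur-type algorithm of the paper.

import numpy as np
from scipy.linalg import ldl

R = np.array([[4.0, 2.0, 1.0],
              [2.0, 3.0, 0.5],
              [1.0, 0.5, 2.0]])
L, D, perm = ldl(R, lower=True)       # dense factorization R = L D L^T
print(np.allclose(L @ D @ L.T, R))    # True: reconstruction check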
11.
Low-rank representations have received a lot of interest in the application of kernel-based methods. However, these methods assume that the spectrum of the Gaussian or polynomial kernel decays rapidly. This is not always true, and its violation may result in performance degradation. In this paper, we propose an effective technique for learning low-rank Mercer kernels (LMK) with fast-decaying spectrum. What distinguishes our kernels from classical kernels (Gaussian and polynomial kernels) is that the proposed kernels always yield low-rank Gram matrices whose spectrum decays rapidly, regardless of the data distribution. Furthermore, the LMK can control the decay rate. Thus, our kernels can prevent performance degradation while using low-rank approximations. Our algorithm has favorable scalability: it is linear in the number of data points and quadratic in the rank of the Gram matrix. Empirical results demonstrate that the proposed method learns a fast-decaying spectrum and significantly improves performance.
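A common low-rank route for kernel Gram matrices, shown here only for contrast with the LMK construction, is the Nystrom approximation. The sketch below assumes a user-supplied kernel(X, Y) function returning the cross-kernel matrix; that helper and the function name are hypothetical, and this is not the method of the paper.

import numpy as np

def nystrom_factors(X, kernel, m, seed=0):
    # Nystrom approximation of the Gram matrix: K ~ C @ pinv(W) @ C.T,
    # where C and W are kernel blocks built from m landmark points.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    C = kernel(X, X[idx])                 # n x m block against the landmarks
    W = kernel(X[idx], X[idx])            # m x m landmark block
    return C, np.linalg.pinv(W)

# Usage, with some kernel(X, Y) of your choice:
#   C, W_pinv = nystrom_factors(X, kernel, m=50)
#   K_approx = C @ W_pinv @ C.T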
12.
In this paper, the construction of algorithms for factoring linear partial differential operators is studied. New theoretical concepts, the obstacle and the ring of obstacles, are introduced. These are invariants and possess other interesting properties. A theorem on the unique extension of factorizations starting from some point, which is important from the standpoint of constructing such algorithms, is proved. Obstacles in the case of the second and third degrees are found.
14.
A quasi-Newton method for unconstrained minimization is presented, which uses a Cholesky factorization of an approximation to the Hessian matrix. In each step a new row and column of this approximation matrix are determined and its Cholesky factorization is updated. This reduces storage requirements and simplifies the calculation of the search direction. Precautions are taken to keep the approximation matrix positive definite. It is shown that under the usual conditions the method converges superlinearly or even n-step quadratically.
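The bordering update mentioned here (adding one row and column to a matrix whose Cholesky factor is already known) follows a standard linear-algebra identity. The sketch below implements that generic update only; it is not the paper's full quasi-Newton scheme, and the function name is illustrative.

import numpy as np
from scipy.linalg import solve_triangular

def cholesky_append(L, a, alpha):
    # Given A = L @ L.T, return the Cholesky factor of the bordered matrix
    # [[A, a], [a.T, alpha]].  Fails if the extension is not positive definite.
    v = solve_triangular(L, a, lower=True)
    d = alpha - v @ v
    if d <= 0.0:
        raise ValueError("bordered matrix is not positive definite")
    n = L.shape[0]
    L_new = np.zeros((n + 1, n + 1))
    L_new[:n, :n] = L
    L_new[n, :n] = v
    L_new[n, n] = np.sqrt(d)
    return L_new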
15.
A batch process monitoring method using tensor factorization, tensor locality preserving projections (TLPP), is proposed. In many existing vector-based methods for batch process monitoring, such as MPCA and MLPP, a batch of data is represented as a vector in a high-dimensional space. But vectorizing batch data leads to information loss. Essentially, a batch of data is a second-order tensor, that is, a matrix. In this case, tensor factorization may be used to deal with the two-way batch data matrix directly instead of performing a vectorization procedure. Furthermore, the tensor representation has some advantages such as low memory and storage requirements and fewer estimated parameters for the normal operating condition (NOC) model. On the other hand, different from principal component analysis (PCA), which aims at preserving the global Euclidean structure of the data, TLPP aims to preserve the local neighborhood information and to detect the intrinsic manifold structure of the data. Consequently, TLPP may be used to find more meaningful intrinsic information hidden in the observations. The effectiveness and advantages of the TLPP monitoring approach are tested with data from a benchmark fed-batch penicillin fermentation and two industrial fermentation processes, penicillin and cephalosporin, respectively.
16.
Because of the underlying data structure preserved by the manifold regularization term, nonnegative matrix factorization (NMF) with a manifold regularizer demonstrates an advantage over other NMF variants for many data analysis tasks. Currently, the Laplacian regularizer is commonly used as the smooth operator to preserve the locality of the data space. However, with the Laplacian regularizer, coding vectors are biased toward a constant, which leads to a lack of extrapolating power. Thus, the locality of the data space cannot be preserved as well as would be expected. To address this drawback, a novel variant of NMF, namely HsNMF, is proposed, where a Hessian regularization term is incorporated into the traditional NMF framework. Because the Hessian energy favors functions whose values vary linearly along the geodesics of the data manifold, the local structure of the data space is more effectively preserved. Clustering and classification experimental results on real-world image datasets demonstrate that our proposed NMF is superior to NMF variants based on Laplacian embedding.
17.
It is shown that the Lyndon decomposition of a word of n symbols can be computed by an n-processor CRCW PRAM in O(log n) time. Extensions of the basic algorithm convey, within the same time and processor bounds, efficient parallel solutions to problems such as finding the lexicographically minimum or maximum suffix for all prefixes of the input string, and finding the lexicographically least rotation of all prefixes of the input. A. Apostolico's research was supported in part by the French and Italian Ministries of Education, by British Research Council Grant SERC-E76797, by NSF Grants CCR-89-00305 and CCR-9201078, by NIH Library of Medicine Grant R01 LM05118, by AFOSR Grant 89NM682, and by NATO Grant CRG 900293. M. Crochemore's research was supported in part by PRC Mathématiques et Informatique and by NATO Grant CRG 900293.
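For comparison, the classical sequential counterpart is easy to state: Duval's linear-time algorithm computes the same Lyndon decomposition serially. The version below is that standard serial algorithm, not the CRCW PRAM construction of the paper.

def lyndon_factorization(s):
    # Duval's O(n) algorithm: emit the Lyndon factors of s from left to right.
    n, i = len(s), 0
    factors = []
    while i < n:
        j, k = i + 1, i
        while j < n and s[k] <= s[j]:
            k = i if s[k] < s[j] else k + 1   # restart or advance the comparison
            j += 1
        while i <= k:
            factors.append(s[i:i + j - k])    # emit one factor of length j - k
            i += j - k
    return factors

# Example: lyndon_factorization("banana") -> ['b', 'an', 'an', 'a']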
18.
In recent years, nonnegative matrix factorization (NMF) has attracted a significant amount of attention in image processing, text mining, speech processing and related fields. Although NMF has been applied successfully in several applications, its direct application to image processing has a few caveats. For example, NMF costs considerable computational resources when performed on large databases. In this paper, we propose two enhanced NMF algorithms for image processing to save computational costs. One is the modified rank-one residue iteration (MRRI) algorithm, the other is the element-wise residue iteration (ERI) algorithm. Here we combine CAPG (an NMF algorithm proposed by Lin), MRRI and ERI with two-dimensional nonnegative matrix factorization (2DNMF) for image processing. The main difference between NMF and 2DNMF is that the former first aligns images into one-dimensional (1D) vectors and then represents them with a set of 1D bases, while the latter regards images as 2D matrices and represents them with a set of 2D bases. The three combined algorithms are named CAPG-2DNMF, MRRI-2DNMF and ERI-2DNMF. The computational complexity and convergence analyses of the proposed algorithms are also presented in this paper. Three public databases are used to test the three NMF algorithms and the three combinations; the results show the improved performance of our proposed algorithms (MRRI and ERI) over the CAPG algorithm. MRRI and ERI have similar performance. The three combined algorithms achieve better image reconstruction quality and shorter running time than their corresponding 1DNMF algorithms under the same compression ratio. We also perform experiments on a real-captured image database and reach similar conclusions.
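To make the rank-one residue idea concrete, a plain (unmodified) rank-one residue iteration for NMF can be sketched as follows. Variable names and defaults are illustrative, and this generic HALS-style loop is not the MRRI or ERI variant proposed in the paper.

import numpy as np

def nmf_rri(V, rank, n_iter=200, eps=1e-10, seed=0):
    # Plain rank-one residue iteration for V ~ W @ H with W, H >= 0:
    # each rank-one term is refit against the residual left by the others.
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        for t in range(rank):
            R = V - W @ H + np.outer(W[:, t], H[t])   # residual without term t
            W[:, t] = np.maximum(R @ H[t], eps) / max(H[t] @ H[t], eps)
            H[t] = np.maximum(W[:, t] @ R, eps) / max(W[:, t] @ W[:, t], eps)
    return W, H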
19.
Feature selection is one of the most important machine learning procedures, and it has been successfully applied as a preprocessing step before classification and clustering methods. High-dimensional features often appear in big data, and their characteristics hinder data processing. Spectral feature selection algorithms have therefore received increasing attention from researchers. However, most feature selection methods treat these tasks as two separate steps: they learn a similarity matrix from the original feature space (which may include redundancy across all features) and then conduct data clustering. Due to these limitations, they do not achieve good performance on classification and clustering tasks in big data processing applications. To address this problem, we propose an unsupervised feature selection method within a graph learning framework, which reduces the influence of redundant features and simultaneously utilizes a low-rank constraint on the weight matrix. More importantly, we design a new objective function to handle this problem. We evaluate our approach on six benchmark datasets. All empirical classification results show that our new approach outperforms state-of-the-art feature selection approaches.
20.
Low-rank representation (LRR) has attracted much attention recently due to its efficacy in a rich variety of real-world applications. Non-convex regularization has also become widely used in the rank minimization problem. In this paper, we propose a discriminative low-rank representation with Schatten-p norm (DLRR-SPN) to learn a robust and discriminative affinity matrix for image recognition. To this end, we first impose the Schatten-p norm regularization on the representation matrix to learn the global structure of the data. Moreover, an adaptive distance penalty is used to preserve the local neighbor relationships of the data. The objective function is formulated as a Schatten-p norm minimization problem, which can be solved via the alternating direction method of multipliers (ADMM). To enhance the separation ability of the discriminative affinity matrix for semi-supervised recognition problems, the angular information of the principal directions of the low-rank representation is further exploited. Finally, an effective semi-supervised classifier is applied to the learned affinity matrix for the final prediction. Extensive experimental results on image recognition demonstrate the effectiveness of the proposed method and its superiority in performance over related state-of-the-art methods.
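One standard ADMM building block for rank-penalized models of this kind is singular value thresholding, the proximal operator of the Schatten-1 (nuclear) norm. The snippet below shows only that p = 1 special case as orientation; it is not the full DLRR-SPN solver, and the function name is illustrative.

import numpy as np

def singular_value_thresholding(A, tau):
    # Proximal operator of tau * ||.||_* (the Schatten-1 / nuclear norm):
    # shrink each singular value of A toward zero by tau.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt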