Similar Articles
20 similar articles found (search time: 46 ms)
1.
Multiple undersampled images of a scene are often obtained by using a charge-coupled device (CCD) detector array of sensors that are shifted relative to each other by subpixel displacements. This geometry of sensors, where each sensor has a subarray of sensing elements of suitable size, has been popular in the task of attaining spatial resolution enhancement from the acquired low-resolution degraded images that comprise the set of observations. With the objective of improving the performance of the signal processing algorithms in the presence of the ubiquitous perturbation errors of displacements around the ideal subpixel locations (because of imperfections in fabrication), in addition to noisy observations, the errors-in-variables or total least-squares method is used in this paper. A regularized constrained total least-squares (RCTLS) solution to the problem is given, which requires the minimization of a nonconvex and nonlinear cost functional. Simulations indicate that the choice of the regularization parameter significantly influences the quality of the solution. The L-curve method is used to select the theoretically optimum value of the regularization parameter instead of the unsound but expedient trial-and-error approach. The expected superiority of this RCTLS approach over the conventional least-squares theory-based algorithm is substantiated by example. © 2002 John Wiley & Sons, Inc. Int J Imaging Syst Technol 12, 35–42, 2002
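The L-curve selection mentioned in this abstract can be sketched generically for a linear Tikhonov problem: sweep the regularization parameter, trace (residual norm, solution norm) in log-log space, and pick the point of maximum curvature. This is only a minimal sketch of the parameter-selection idea; the paper's RCTLS functional is nonconvex and more involved, and the toy problem below (a Hilbert matrix) is an illustrative assumption.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Standard Tikhonov solution: argmin ||Ax - b||^2 + lam^2 ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

def l_curve_corner(A, b, lams):
    """Pick lam at the corner (maximum curvature) of the log-log L-curve."""
    log_rho = np.array([np.log(np.linalg.norm(A @ tikhonov(A, b, l) - b)) for l in lams])
    log_eta = np.array([np.log(np.linalg.norm(tikhonov(A, b, l))) for l in lams])
    # discrete curvature of the parametric curve (log_rho, log_eta)
    d1r, d1e = np.gradient(log_rho), np.gradient(log_eta)
    d2r, d2e = np.gradient(d1r), np.gradient(d1e)
    curvature = np.abs(d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5
    return lams[int(np.argmax(curvature))]

# ill-conditioned toy problem (Hilbert matrix) with noisy data
n = 10
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(1)
b = A @ x_true + 1e-4 * rng.standard_normal(n)
lam_opt = l_curve_corner(A, b, np.logspace(-6, 0, 50))
x_reg = tikhonov(A, b, lam_opt)
```

Even a roughly chosen corner value stabilizes the solution dramatically compared with the unregularized inverse on such an ill-conditioned system.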

2.
In the analysis of Raman lidar measurements of aerosol extinction, it is necessary to calculate the derivative of the logarithm of the ratio between the atmospheric number density and the range-corrected lidar-received power. The statistical fluctuations of the Raman signal can produce large fluctuations in the derivative and thus in the aerosol extinction profile. To overcome this difficulty we discuss three methods: Tikhonov regularization, a variational method, and the sliding best-fit (SBF). The three methods are applied to profiles taken from the European Aerosol Research Lidar Network lidar database, simulated at the Raman-shifted wavelengths of 387 and 607 nm associated with the emitted signals at 355 and 532 nm. Our results show that the SBF method does not deliver good results for low fluctuation in the profile. However, Tikhonov regularization and the variational method yield very good aerosol extinction coefficient profiles for our examples. For example, at the 532 nm wavelength, the L2 errors of the aerosol extinction coefficient profile obtained with the SBF, Tikhonov, and variational methods with respect to synthetic noisy data are 0.0015 (0.0024), 0.00049 (0.00086), and 0.00048 (0.00082), respectively. Moreover, the L2 errors obtained with the Tikhonov and variational methods with respect to a more realistic noisy profile are 0.0014 (0.0016) and 0.0012 (0.0016), respectively. In both cases the L2 error given in parentheses concerns the second example.
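The core difficulty above — differentiating a noisy profile — can be illustrated with a generic Tikhonov-regularized differentiation sketch (my own formulation for illustration, not EARLINET's actual processing): instead of differencing the noisy data directly, solve for the derivative u that integrates back to the data while penalizing roughness.

```python
import numpy as np

def tikhonov_derivative(y, dx, lam):
    """Estimate u = dy/dx from noisy samples y by minimizing
    ||A u - (y - y[0])||^2 + lam^2 ||D2 u||^2, where A is trapezoid-rule
    cumulative integration and D2 is the second-difference operator."""
    n = len(y)
    A = np.tril(np.ones((n, n))) * dx   # running sums ...
    A[:, 0] -= dx / 2                   # ... corrected to the trapezoid rule
    A -= np.diag(np.full(n, dx / 2))
    D2 = np.diff(np.eye(n), 2, axis=0)  # roughness (curvature) penalty
    return np.linalg.solve(A.T @ A + lam**2 * D2.T @ D2, A.T @ (y - y[0]))

x = np.linspace(0.0, 1.0, 101)
rng = np.random.default_rng(0)
y = x**2 + 1e-3 * rng.standard_normal(x.size)      # noisy samples of x^2
u = tikhonov_derivative(y, x[1] - x[0], lam=0.05)  # should stay close to 2x
```

Direct finite differencing amplifies the noise by 1/dx; the integral formulation trades a little smoothing bias for a far more stable derivative.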

3.
This paper presents a geometric mean scheme (GMS) to determine an optimal regularization factor for Tikhonov regularization technique in the system identification problems of linear elastic continua. The characteristics of non‐linear inverse problems and the role of the regularization are investigated by the singular value decomposition of a sensitivity matrix of responses. It is shown that the regularization results in a solution of a generalized average between the a priori estimates and the a posteriori solution. Based on this observation, the optimal regularization factor is defined as the geometric mean between the maximum singular value and the minimum singular value of the sensitivity matrix of responses. The validity of the GMS is demonstrated through two numerical examples with measurement errors and modelling errors. Copyright © 2001 John Wiley & Sons, Ltd.
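The geometric mean scheme described above reduces to a one-line rule once the sensitivity matrix is available. The sketch below assumes the common convention in which the factor multiplies an identity penalty anchored at the a priori estimate; the paper's exact nonlinear identification loop is not reproduced.

```python
import numpy as np

def gms_factor(S):
    """Geometric mean scheme: regularization factor = geometric mean of the
    largest and smallest singular values of the sensitivity matrix S."""
    sv = np.linalg.svd(S, compute_uv=False)
    return float(np.sqrt(sv.max() * sv.min()))

def regularized_solve(S, r, x_prior, lam):
    """Tikhonov step anchored at the a priori estimate: the solution is a
    generalized average between x_prior and the unregularized solution."""
    n = S.shape[1]
    return np.linalg.solve(S.T @ S + lam * np.eye(n), S.T @ r + lam * x_prior)

rng = np.random.default_rng(0)
S = rng.standard_normal((30, 6))   # illustrative sensitivity matrix
r = rng.standard_normal(30)        # illustrative response residual
x_prior = np.zeros(6)
lam = gms_factor(S)
x = regularized_solve(S, r, x_prior, lam)
```

As the factor grows the solution is pulled toward the a priori estimate; as it shrinks it approaches the ordinary least-squares solution, which is exactly the "generalized average" behaviour the abstract describes.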

4.
Kolgotin A, Müller D. Applied Optics 2008, 47(25): 4472-4490.
We present the theory of inversion with two-dimensional regularization. We use this novel method to retrieve profiles of microphysical properties of atmospheric particles from profiles of optical properties acquired with multiwavelength Raman lidar. To the best of our knowledge, this technique is the first attempt toward an operational inversion algorithm, which is strongly needed in view of multiwavelength Raman lidar networks. The new algorithm has several advantages over inversion with so-called classical one-dimensional regularization. Extensive data postprocessing procedures, which are needed to obtain a sensible physical solution space with the classical approach, are reduced. Data analysis, which strongly depends on the experience of the operator, is put on a more objective basis. Thus, we substantially increase the degree of unsupervised data analysis. First results from simulation studies show that the new methodology in many cases outperforms our old methodology regarding the accuracy of retrieved particle effective radius and number, surface-area, and volume concentrations. The real and imaginary parts of the complex refractive index can be estimated with at least the same accuracy as with our old method of inversion with one-dimensional regularization. However, our results on retrieval accuracy still have to be verified in a much larger simulation study.

5.
We introduce a modified Tikhonov regularization method to include three-dimensional x-ray mammography as a prior in the diffuse optical tomography reconstruction. With simulations we show that the optical image reconstruction resolution and contrast are improved by implementing this x-ray-guided spatial constraint. We suggest an approach to find the optimal regularization parameters. The presented preliminary clinical result indicates the utility of the method.

6.
The number, position, area, and width of the bands in a lifetime distribution give the number of exponentials present in time-resolved data and their time constants, amplitudes, and heterogeneities. The maximum entropy inversion of the Laplace transform (MaxEnt-iLT) provides a lifetime distribution from time-resolved data, which is very helpful in the analysis of the relaxation of complex systems. In some applications both positive and negative values for the lifetime distribution amplitudes are physical, but most studies to date have focused on positive-constrained solutions. In this work, we first discuss optimal conditions to obtain a sign-unrestricted maximum entropy lifetime distribution, i.e., the selection of the entropy function and the regularization value. For the selection of the regularization value we compared four methods: the chi2 criterion and Bayesian inference (already used in sign-restricted MaxEnt-iLT), and the L-curve and the generalized cross-validation methods (not yet used in MaxEnt-iLT to our knowledge). Except for the frequently used chi2 criterion, these methods recommended similar regularization values, providing close to optimum solutions. However, even when an optimal entropy function and regularization value are used, a MaxEnt lifetime distribution will contain noise-induced errors, as well as systematic distortions induced by the entropy maximization (regularization-induced errors). We introduce the concept of the apparent resolution function in MaxEnt, which allows both the noise and regularization-induced errors to be estimated. We show the capability of this newly introduced concept in both synthetic and experimental time-resolved Fourier transform infrared (FT-IR) data from the bacteriorhodopsin photocycle.
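Of the four selection methods compared above, generalized cross-validation (GCV) has a compact closed form for linear problems. The sketch below is the linear-Tikhonov analogue only (MaxEnt is nonlinear); the test problem is an illustrative assumption.

```python
import numpy as np

def gcv_lambda(A, b, lams):
    """Generalized cross-validation: pick lam minimizing
    GCV(lam) = m ||(I - H(lam)) b||^2 / tr(I - H(lam))^2,
    where H(lam) = A (A^T A + lam I)^{-1} A^T is the influence matrix."""
    m, n = A.shape
    scores = []
    for lam in lams:
        H = A @ np.linalg.solve(A.T @ A + lam * np.eye(n), A.T)
        r = (np.eye(m) - H) @ b
        scores.append(m * (r @ r) / np.trace(np.eye(m) - H) ** 2)
    return lams[int(np.argmin(scores))]

# ill-conditioned toy problem with noisy data
n = 10
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(2)
b = A @ x_true + 1e-4 * rng.standard_normal(n)
lams = np.logspace(-6, 0, 40)
lam_opt = gcv_lambda(A, b, lams)
```

Unlike a chi-square target, GCV needs no prior estimate of the noise level, which is one reason it tends to agree with the L-curve and Bayesian choices reported in the abstract.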

7.
Steck T, von Clarmann T. Applied Optics 2001, 40(21): 3559-3571.
To investigate the atmosphere of Earth and to detect changes in its environment, the Environmental Satellite will be launched by the European Space Agency into a polar orbit in October 2001. One of its payload instruments is a Fourier spectrometer, the Michelson Interferometer for Passive Atmospheric Sounding, designed to measure the spectral thermal emission of molecules in the atmosphere in a limb-viewing mode. The goal of this experiment is to operationally derive vertical profiles of pressure and temperature as well as of the trace gases O3, H2O, CH4, N2O, NO2, and HNO3 from spectra on a global scale. A major topic in the analysis of the computational methodology for obtaining the profiles is how available a priori knowledge can be used and how this a priori knowledge affects the corresponding results. Retrieval methods were compared, and it was shown that an optimal estimation formalism can be used in a highly flexible way for this kind of data analysis. Beyond this, diagnostic tools, such as the estimated standard deviation, vertical resolution, or degrees of freedom, have been used to characterize the results. Optimized regularization parameters have been determined, and a large effect of the choice of regularization and discretization on the results was demonstrated. In particular, we show that the optimal estimation formalism can be used to emulate purely smoothing constraints.

8.
Multivariate Curve Resolution (MCR) aims to blindly recover the concentration profiles and the source spectra without any prior supervised calibration step. It is well known that imposing additional constraints such as positivity, closure, and others may improve the quality of the solution. When a physico-chemical model of the process is known, it can also be introduced, constraining the solution even further. In this paper, we apply MCR to ion mobility spectra. Since instrumental models suggest that peaks are of Gaussian shape with a width depending on the instrument resolution, we model each source as a linear superposition of Gaussian peaks of fixed spread. We also prove that this model is able to fit wider peaks departing from a pure Gaussian shape. Instead of introducing nonlinear Gaussian peak fitting, we use a very dense model and rely on a least-squares solver with L1-norm regularization to obtain a sparse solution. This is accomplished via the Least Absolute Shrinkage and Selection Operator (LASSO). The results provide well-resolved concentration profiles and spectra, improving on the basic MCR solution.
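The dense-Gaussian-dictionary LASSO fit described above can be sketched with a plain proximal-gradient (ISTA) solver. The abstract does specify a LASSO formulation, but the grid spacing, peak width, penalty weight, and solver choice below are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(G, y, lam, n_iter=3000):
    """Solve min_c 0.5 ||G c - y||^2 + lam ||c||_1 by ISTA:
    a gradient step on the quadratic term followed by soft thresholding."""
    step = 1.0 / np.linalg.norm(G, 2) ** 2   # 1 / Lipschitz constant
    c = np.zeros(G.shape[1])
    for _ in range(n_iter):
        c = soft_threshold(c - step * (G.T @ (G @ c - y)), lam * step)
    return c

# very dense dictionary: Gaussian peaks of fixed spread on a fine grid
x = np.linspace(0.0, 10.0, 200)
centers = np.linspace(0.0, 10.0, 100)
sigma = 0.3
G = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * sigma**2))

# synthetic "spectrum": two peaks taken from the dictionary itself
y = 1.0 * G[:, 30] + 0.5 * G[:, 70]
c = lasso_ista(G, y, lam=0.05)
```

The L1 penalty drives almost all dictionary coefficients to exactly zero, so the recovered `c` reads directly as a short list of peak positions and amplitudes.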

9.
10.
Wavelength-tuning interferometry can measure surface shapes with discontinuous steps using a synthetic wavelength that is usually larger than the step height. However, measurement resolution decreases for large step heights, since the synthetic wavelength becomes much larger than the source wavelength. The excess-fraction method with piezoelectric-transducer phase shifting is applied to two-dimensional surface shape measurements. Systematic errors caused by nonlinearity in source frequency scanning are fully corrected by a correlation analysis between the observed and calculated interference fringes. Experimental results demonstrate that the determination of the absolute interference order gives the profile of a surface with a step height of 1 mm with an accuracy of 12 nm.

11.
A transient two-dimensional inverse heat-conduction problem is investigated. It consists of determining both the temperature and the heat-flux density in the vicinity of an angle (≤ 180°) when some internal temperatures are known. The problem is solved by using a boundary-element approach with a time- and space-dependent fundamental solution. It uses a time-marching scheme that involves future time steps and a regularization procedure. An exhaustive study of the sensitivity to the leading parameters of the problem is presented, and it is pointed out in particular that the accuracy of the solution is strongly degraded as the corner angle decreases. In order to overcome this difficulty, a localized regularization procedure is suggested.

12.
Daun KJ, Thomson KA, Liu F, Smallwood GJ. Applied Optics 2006, 45(19): 4638-4646.
We present a method based on Tikhonov regularization for solving one-dimensional inverse tomography problems that arise in combustion applications. In this technique, Tikhonov regularization transforms the ill-conditioned set of equations generated by onion-peeling deconvolution into a well-conditioned set that is less susceptible to measurement errors that arise in experimental settings. The performance of this method is compared to that of onion-peeling and Abel three-point deconvolution by solving for a known field variable distribution from projected data contaminated with an artificially generated error. The results show that Tikhonov deconvolution provides a more accurate field distribution than onion-peeling and Abel three-point deconvolution and is more stable than the other two methods as the distance between projected data points decreases.

13.
This paper presents a system identification scheme to determine the geometric shape of an inclusion in a finite body. The proposed algorithm is based on the minimization of the least‐squared errors between the measured displacement field and calculated displacement field by the finite element model. The domain parameterization technique is adopted to manipulate the shape variation of an inclusion. To stabilize the optimization process, a new regularization function defined by the length of the boundary curve of an inclusion is added to the error function. A variable regularization factor scheme is proposed for a consistent regularization effect. The modified Newton method with the active set method is adopted for optimization. Copyright © 1999 John Wiley & Sons, Ltd.

14.
Huang XL, Yung YL, Margolis JS. Applied Optics 2003, 42(12): 2155-2165.
We explore ways in which high-spectral-resolution measurements can aid in the retrieval of atmospheric temperature and gas-concentration profiles from outgoing infrared spectra when optically thin cirrus clouds are present. Simulated outgoing spectra that contain cirrus are fitted with spectra that do not contain cirrus, and the residuals are examined. For those lines with weighting functions that peak near the same altitude as the thin cirrus, unique features are observed in the residuals. These unique features are highly sensitive to the resolution of the instrumental line shape. For thin cirrus these residual features are narrow (≤ 0.1 cm⁻¹), so high spectral resolution is required for unambiguous observation. The magnitudes of these unique features are larger than the noise of modern instruments. The sensitivities of these features to cloud height and cloud optical depth are also discussed. Our sensitivity studies show that, when the errors in the estimation of temperature profiles are not large, the dominant contribution to the residuals is the misinterpretation of cirrus. An analysis that focuses on information content is also presented. An understanding of the magnitude of the effect and of its dependence on spectral resolution as well as on spectral region is important for retrieving spacecraft data and for the design of future infrared instruments for forecasting weather and monitoring greenhouse gases.

15.
In this paper, numerical solutions based on the Trefftz method are investigated for an over-specified boundary value problem contaminated with artificial noise. The main difficulty of the inverse problem is that divergent results occur when the boundary condition on the over-specified boundary is contaminated by artificial random errors; these unreasonable results stem from the ill-posed influence matrix. The accompanying ill-posedness is remedied by using the Tikhonov regularization technique and the linear regularization method, respectively, which regularize the influence matrix. The optimal parameter λ of the Tikhonov technique and of the linear regularization method can be determined by adopting an adaptive error estimation technique. In this way, convergent numerical solutions of the Trefftz method adopting the optimal parameter can be obtained. The accuracy of the numerical solutions is demonstrated through numerical examples, which verify the validity of the adaptive error estimation technique and compare the Tikhonov regularization technique with the linear regularization method.

16.
Yang Yixin, Zhang Yahao, Yang Long. Technical Acoustics 2022, 41(3): 306-312.
Wideband direction-of-arrival (DOA) estimation is an important research direction in sonar array signal processing. This paper proposes a coherent signal-subspace based modified sparse and parameter approach (C-MSPA) to achieve wideband DOA estimation with high accuracy and high spatial resolution. The algorithm uses focusing matrices to project the sample covariance matrix of each subband onto a focusing frequency. After focusing, the covariance-matrix fitting criterion is modified on the basis of frequency-selective Vandermonde decomposition theory, so that the DOA information contained in the reconstructed covariance matrix is strictly confined to the focusing region; a Vandermonde decomposition of the reconstructed covariance matrix then yields the DOA estimates. The proposed algorithm requires no selection of a regularization parameter and avoids the basis-mismatch problem. Simulations and lake-trial data show that the proposed method achieves high spatial resolution and improves DOA estimation accuracy.

17.
Heterodyne submillimeter detection techniques represent an important development in the field of remote sensing of atmospheric composition. The opening up of this wavelength region by new low-noise detectors and multichannel high-resolution spectrometers raises expectations of improved accuracy and vertical resolution of the composition profiles derived from these measurements. Because of the low noise levels of newly developed receivers, special care is required to ensure that fundamental limitations of the components used do not contribute systematic errors exceeding the random errors. Operated in an upward-looking geometry, the sensitivity of the retrieval algorithm to noise and instrumental errors can be rather high, and hence instrumental limitations could induce large uncertainties in the derived atmospheric information. Instrumental uncertainties typical of a passive heterodyne sounder are quantified, and their effects on the accuracy of the derived vertical mixing-ratio profiles are presented.

18.
An iterative procedure for unfolding the effects of the finite resolution of a detector from an observed pulse-height distribution is discussed. The process is demonstrated for a particular detection system, and the convergence and uniqueness properties of the method are discussed empirically. A general expression for the propagated error resulting from errors in the detected pulse-height distribution is derived. Approximations are made in order to evaluate the propagated error for a particular detector; these approximations become better as the resolution of the detector improves. The results indicate that the error rapidly approaches a limit of 1.5 to 3 times the error in the observed distribution. This limit is reached in approximately three iterations.
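A minimal iterative-unfolding sketch in the spirit described above is the van Cittert-style update x ← x + (observed − R·x); the paper's particular scheme and detector response are not specified here, so the Gaussian response model below is an illustrative assumption.

```python
import numpy as np

def response_matrix(n, sigma=2.0):
    """Toy detector response: Gaussian smearing of the true pulse-height
    distribution, with columns normalized to conserve total counts."""
    i = np.arange(n)
    R = np.exp(-0.5 * ((i[:, None] - i[None, :]) / sigma) ** 2)
    return R / R.sum(axis=0)

def unfold(R, observed, n_iter=50):
    """Iterative unfolding: start from the observed distribution and
    repeatedly add back the residual that the response cannot explain."""
    x = observed.copy()
    for _ in range(n_iter):
        x = x + (observed - R @ x)
    return x

n = 60
R = response_matrix(n)
i = np.arange(n)
true = (np.exp(-0.5 * ((i - 25) / 3.0) ** 2)
        + 0.5 * np.exp(-0.5 * ((i - 40) / 3.0) ** 2))
observed = R @ true           # smeared pulse-height distribution
unfolded = unfold(R, observed)
```

Each iteration sharpens the distribution toward the true one while the column normalization keeps the total number of counts fixed, consistent with the convergence behaviour the abstract reports.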

19.
The goal in inverse electrocardiography (ECG) is to reconstruct cardiac electrical sources from body surface measurements and a mathematical model of torso–heart geometry that relates the sources to the measurements. This problem is ill-posed due to the attenuation and smoothing that occur inside the thorax, and small errors in the measurements yield large reconstruction errors. To overcome this ill-posedness, traditional regularization methods such as Tikhonov regularization and truncated singular value decomposition, as well as statistical approaches such as Bayesian maximum a posteriori estimation and Kalman filtering, have been applied. Statistical methods have yielded accurate inverse solutions; however, they require knowledge of a good a priori probability density function or state-transition definition. Minimum relative entropy (MRE) is an approach for inferring a probability density function from a set of constraints and prior information, and may be an alternative to those statistical methods since it operates with simpler prior-information definitions. However, the success of the MRE method also depends on a good choice of prior parameters in the form of upper and lower bound values, the expected uncertainty in the model, and the prior mean. In this paper, we explore the effects of each of these parameters on the solution of the inverse ECG problem and discuss the limitations of the method. Our results show that the prior expected value is the most influential of the three MRE parameters.

20.
The results of spectrophotometric measurements are subject to systematic errors of an instrumental type, which may be partially corrected provided a mathematical model of the instrumental imperfections is identified. It is assumed that this model has the form of an integral, convolution-type equation of the first kind. The correction of the results of measurements, subject to random measurement errors, consists of numerically solving this equation on the basis of these results. A correction algorithm, based on the Tikhonov method of frequency-domain regularization, has been implemented using the DSP 56001 digital signal processor. The results of its application are compared with those obtained by means of PC-MATLAB software for the same synthetic data. It is shown that a considerable gain in computation speed is attained without significant reduction of accuracy.
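The frequency-domain regularized correction described above can be sketched as a Tikhonov-damped inverse filter for a convolution-type distortion. The instrument kernel, noise level, and parameter value below are illustrative assumptions, not the paper's identified model.

```python
import numpy as np

def tikhonov_deconvolve_fft(measured, kernel, lam):
    """Frequency-domain Tikhonov deconvolution: invert the kernel's transfer
    function, damped by lam at frequencies where it is small, so random
    measurement errors are not amplified without bound."""
    H = np.fft.fft(kernel)
    Y = np.fft.fft(measured)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft(X))

n = 256
i = np.arange(n)
# synthetic "true" spectrum: two narrow lines
true = (np.exp(-0.5 * ((i - 100) / 2.0) ** 2)
        + 0.6 * np.exp(-0.5 * ((i - 140) / 2.0) ** 2))

# instrument kernel: Gaussian slit function, centred at index 0 (circular)
d = np.minimum(i, n - i)
kernel = np.exp(-0.5 * (d / 4.0) ** 2)
kernel /= kernel.sum()

rng = np.random.default_rng(0)
measured = np.real(np.fft.ifft(np.fft.fft(true) * np.fft.fft(kernel)))
measured += 0.005 * rng.standard_normal(n)   # random measurement errors
corrected = tikhonov_deconvolve_fft(measured, kernel, lam=1e-3)
```

Because the whole correction is a few FFTs and an elementwise division, it maps naturally onto a fixed-point DSP, which is consistent with the speed gain reported in the abstract.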
