Similar Documents
20 similar documents found (search time: 46 ms)
1.
Objective: The total variation wavelet inpainting model tends to produce staircase artifacts. To remedy this, a weighted second-order total generalized variation (TGV) wavelet inpainting model is proposed. Method: Unlike the total variation model, the new model introduces a second-order derivative term and automatically balances the first- and second-order terms. To exploit the local structure of the image, it further introduces a weight function that preserves edges while strengthening denoising in smooth regions. To solve the model efficiently, the alternating direction method splits it into two submodels, and the corresponding theory and algorithmic derivations are given for each. Results: Compared with a recent total-variation-regularized wavelet inpainting model (average SNR 21.8844, average absolute error 6.8578, average structural similarity 0.8272), the new model restores images better (average SNR 22.3138, average absolute error 6.6261, average structural similarity 0.8318). Conclusion: Second-order TGV regularization alleviates the staircase effect better than total variation regularization. Researchers at home and abroad have obtained some results on this problem. Because the primal-dual algorithm requires small step-size parameters, it runs slowly, so faster algorithms deserve further study. The regularizer also applies to image denoising, segmentation, and zooming.

2.
Algorithms for automatically selecting a scalar or locally varying regularization parameter for total variation models with an \(L^{\tau }\)-data fidelity term, \(\tau \in \{1,2\}\), are presented. The automated selection of the regularization parameter is based on the discrepancy principle, whereby in each iteration a total variation model has to be minimized. In the case of a locally varying parameter, this amounts to solving a multiscale total variation minimization problem. To solve the resulting multiscale total variation model, convergent first- and second-order methods are introduced and analyzed. Numerical experiments for image denoising and image deblurring show the efficiency, the competitiveness, and the performance of the proposed fully automated scalar and locally varying parameter selection algorithms.
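The discrepancy principle the abstract relies on can be illustrated with a small, hedged sketch: a pure-Python 1-D example, assuming Gaussian noise of known level sigma, that bisects a scalar regularization parameter until the residual energy matches the expected noise energy. A quadratic (Tikhonov) smoother stands in for the TV model so the inner subproblem has a direct tridiagonal solve; the function names and parameters are illustrative, not from the paper.

```python
import math
import random

def tikhonov_smooth(f, lam):
    """Solve (I + lam*L) u = f for the 1-D graph Laplacian L (Thomas algorithm).

    A quadratic stand-in for the TV subproblem: the residual ||u - f||
    grows monotonically with lam, which is exactly what bisection needs."""
    n = len(f)
    lower = [0.0] + [-lam] * (n - 1)
    upper = [-lam] * (n - 1) + [0.0]
    diag = [1.0 + lam * ((i > 0) + (i < n - 1)) for i in range(n)]
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = upper[0] / diag[0], f[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i] * cp[i - 1]
        cp[i] = upper[i] / m
        dp[i] = (f[i] - lower[i] * dp[i - 1]) / m
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

def discrepancy_lambda(f, sigma, lo=1e-4, hi=1e4, iters=60):
    """Bisect (on a log scale) for the lam whose residual energy
    matches the expected noise energy n * sigma^2."""
    target = sigma * sigma * len(f)
    for _ in range(iters):
        lam = math.sqrt(lo * hi)
        u = tikhonov_smooth(f, lam)
        res = sum((ui - fi) ** 2 for ui, fi in zip(u, f))
        if res < target:
            lo = lam   # too little smoothing: increase lam
        else:
            hi = lam
    return math.sqrt(lo * hi)

random.seed(0)
clean = [0.0] * 50 + [1.0] * 50
noisy = [x + random.gauss(0, 0.1) for x in clean]
lam = discrepancy_lambda(noisy, 0.1)
```

The same bisection loop works for any denoiser whose residual is monotone in the parameter; the paper replaces the scalar lam with a locally varying one, which this toy does not attempt.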

3.
A convergent iterative regularization procedure based on the square of a dual norm is introduced for image restoration models with general (quadratic or non-quadratic) convex fidelity terms. Iterative regularization methods have previously been employed for image deblurring or denoising in the presence of Gaussian noise, using L^2 (Tadmor et al. in Multiscale Model. Simul. 2:554–579, 2004; Osher et al. in Multiscale Model. Simul. 4:460–489, 2005; Tadmor et al. in Commun. Math. Sci. 6(2):281–307, 2008) and L^1 (He et al. in J. Math. Imaging Vis. 26:167–184, 2005) data fidelity terms, with rigorous convergence results. Recently, Iusem and Resmerita (Set-Valued Var. Anal. 18(1):109–120, 2010) proposed a proximal point method using an inexact Bregman distance for minimizing a convex function defined on a non-reflexive Banach space (e.g. BV(Ω)) that is the dual of a separable Banach space. Based on this method, we investigate several approaches to image restoration, such as image deblurring in the presence of noise and image deblurring via (cartoon + texture) decomposition. We show that the resulting proximal point algorithms stably approximate a true image. For image denoising-deblurring we consider Gaussian, Laplace, and Poisson noise models with the corresponding convex fidelity terms, as in the Bayesian approach. We test the behavior of the proposed algorithms on synthetic and real images in several numerical experiments and compare the results with other state-of-the-art iterative procedures based on total variation penalization, as well as with the corresponding existing one-step gradient descent implementations. The numerical experiments indicate that the iterative procedure yields high-quality reconstructions, superior to those obtained by one-step standard gradient descent, with faster computational time.
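The iterative regularization loop cited above (Tadmor et al., Osher et al.) has a simple "add the residual back" structure that can be sketched in a few lines. This is a hedged toy: a symmetric moving-average smoother plays the role of the variational denoiser (the papers use TV or Bregman-distance proximal steps), and the names are illustrative.

```python
def denoise(f, passes=10):
    # Stand-in for the variational denoiser: a symmetric 3-point
    # moving average (reflecting boundaries), applied several times.
    u = f[:]
    n = len(u)
    for _ in range(passes):
        u = [(u[max(i - 1, 0)] + u[i] + u[min(i + 1, n - 1)]) / 3.0
             for i in range(n)]
    return u

def iterative_regularization(f, steps=5):
    # Osher-style refinement: denoise, add the lost detail (the residual)
    # back to the data, and denoise again. Early iterates are coarse;
    # later ones recover progressively more detail.
    v = [0.0] * len(f)
    iterates = []
    for _ in range(steps):
        u = denoise([fi + vi for fi, vi in zip(f, v)])
        v = [vi + fi - ui for vi, fi, ui in zip(v, f, u)]
        iterates.append(u)
    return iterates

f = [0.0] * 20 + [1.0] * 20
its = iterative_regularization(f)
errs = [sum((a - b) ** 2 for a, b in zip(u, f)) for u in its]
```

With this linear smoother the distance to the data is provably non-increasing over the iterates, which mirrors the monotonicity properties the paper establishes for the Bregman-distance version.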

4.
In Part II of this paper we extend the results obtained in Part I for total variation minimization in image restoration in the following directions. First, we investigate the decomposability of energies on levels, which leads us to introduce the concept of levelable regularization functions (of which TV is the paradigm). We show that convex levelable posterior energies can be minimized exactly using the level-independent cut optimization scheme seen in Part I. Next, we extend this graph cut scheme to the case of non-convex levelable energies. We present convincing restoration results for images corrupted with impulsive noise. We also provide a minimum-cost based algorithm that computes a global minimizer for Markov random fields with convex priors. Lastly, we show that non-levelable models with convex local conditional posterior energies, such as the class of generalized Gaussian models, can be exactly minimized with a generalized coupled simulated annealing.
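The "levelable" property the paper builds on, namely that an energy decomposes as a sum of energies on the level sets, with TV as the paradigm, can be checked directly in a discrete 1-D setting. A small sketch (not from the paper) verifying the coarea-type identity for an integer-valued signal:

```python
def tv(u):
    # discrete 1-D total variation
    return sum(abs(u[i + 1] - u[i]) for i in range(len(u) - 1))

u = [3, 3, 1, 4, 4, 2, 0, 5]

# TV decomposes over levels: summing the TV of every thresholded
# binary signal 1[u > t] recovers the TV of u exactly.
level_sum = sum(tv([1 if x > t else 0 for x in u])
                for t in range(min(u), max(u)))
assert level_sum == tv(u)
```

This exact decomposability is what allows the full energy to be minimized by independent binary (graph cut) problems, one per level.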

5.
Multiplicative noise and blur removal problems have attracted much attention in recent years. In this paper, we propose an efficient minimization method to recover images from blurred inputs corrupted by multiplicative noise. The algorithm uses the logarithm to transform the blurring and multiplicative noise problem into an additive image degradation problem, then employs the l_1-norm in the data-fitting term and total variation in the regularization term. The alternating direction method of multipliers (ADMM) is used to solve the corresponding minimization problem. To guarantee the convergence of the ADMM algorithm, we approximate the associated nonconvex domain of the minimization problem by a convex domain. Experimental results demonstrate that the proposed algorithm outperforms existing methods in terms of speed and peak signal-to-noise ratio.
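The log transform the method hinges on is easy to demonstrate: multiplication becomes addition, so additive-noise machinery (in the paper, the l_1 + TV model) applies in the log domain, and the result is exponentiated at the end. A minimal sketch with made-up values:

```python
import math
import random

random.seed(1)
clean = [2.0, 4.0, 8.0]                                  # positive intensities
noise = [math.exp(random.gauss(0, 0.1)) for _ in clean]  # multiplicative noise
observed = [u * e for u, e in zip(clean, noise)]         # f = u * eta

# In the log domain the degradation is additive: log f = log u + log eta,
# so one can denoise log f with any additive-noise model and then
# exponentiate the estimate.
log_obs = [math.log(f) for f in observed]
for g, u, e in zip(log_obs, clean, noise):
    assert abs(g - (math.log(u) + math.log(e))) < 1e-12
```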

6.
We introduce a general framework for regularization of signals with values in a cyclic structure, such as angles, phases or hue values. These include the total cyclic variation ${TV_{{S}^{1}}}$ , as well as cyclic versions of quadratic regularization, Huber-TV and Mumford-Shah regularity. The key idea is to introduce a convex relaxation of the original non-convex optimization problem. The method handles the periodicity of values in a simple way, is invariant to cyclical shifts and has a number of other useful properties such as lower-semicontinuity. The framework allows general, possibly non-convex data terms. Experimental results are superior to those obtained without special care about wrapping interval end points. Moreover, we propose an equivalent formulation of the total cyclic variation which can be minimized with the same time and memory efficiency as the standard total variation. We show that discretized versions of these regularizers amount to NP-hard optimization problems. Nevertheless, the proposed framework provides optimal or near-optimal solutions in most practical applications.
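The point about "special care about wrapping interval end points" is concrete: on the circle, the distance between angles must be measured modulo 2*pi. A hedged pure-Python sketch of the underlying cyclic metric (not the paper's convex relaxation):

```python
import math

def wrap(d, period=2 * math.pi):
    # signed cyclic difference, mapped into [-period/2, period/2)
    return (d + period / 2) % period - period / 2

def cyclic_tv(theta):
    # total cyclic variation of a 1-D sequence of angles
    return sum(abs(wrap(theta[i + 1] - theta[i]))
               for i in range(len(theta) - 1))

def ordinary_tv(theta):
    return sum(abs(theta[i + 1] - theta[i]) for i in range(len(theta) - 1))

# a smooth phase signal that happens to cross the 0 / 2*pi wrap point
theta = [6.2, 0.1, 0.3]
assert ordinary_tv(theta) > 6.0   # a spurious jump at the wrap point
assert cyclic_tv(theta) < 0.5     # the cyclic metric sees a small step
```

A regularizer built on `ordinary_tv` would flatten the signal to kill the fake jump; one built on the cyclic difference leaves it alone, which is the behavior the framework formalizes.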

7.
To preserve the geometric structure of images effectively, a nonconvex second-order total generalized variation image restoration model is proposed. The model introduces a nonconvex sparse regularization constraint similar to the \(L_0\) norm, which better preserves the structural features of the image. To compute the model efficiently, an iteratively reweighted scheme combined with a primal-dual algorithm is adopted. Numerical experiments show that, compared with a recent second-order total generalized variation method, the proposed method obtains better results.
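The iteratively reweighted scheme mentioned in the abstract can be sketched in its simplest 1-D form: each pass solves a weighted l_1 problem by soft-thresholding, with weights inversely proportional to the current magnitudes, so large entries get small thresholds. This is a hedged illustration of the reweighting idea only, applied to raw coefficients rather than to the second-order TGV operator of the paper; `lam` and `eps` are made-up parameters.

```python
def soft(v, t):
    # soft-thresholding: the prox of the l1 norm
    return max(abs(v) - t, 0.0) * (1 if v >= 0 else -1)

def irl1_denoise(f, lam=0.4, eps=0.01, iters=10):
    # iteratively reweighted l1: larger entries get smaller thresholds,
    # mimicking a nonconvex (log-like) penalty closer to the L0 norm
    x = f[:]
    for _ in range(iters):
        w = [1.0 / (abs(xi) + eps) for xi in x]
        x = [soft(fi, lam * wi) for fi, wi in zip(f, w)]
    return x

f = [0.05, -0.08, 2.0, -1.5, 0.02]
x = irl1_denoise(f)
# small entries are driven to zero, large entries survive nearly unshrunk
```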

8.
We propose a nonlinear multiscale decomposition of signals defined on the vertex set of a general weighted graph. This decomposition is inspired by the hierarchical multiscale (BV, L^2) decomposition of Tadmor, Nezzar, and Vese (Multiscale Model. Simul. 2(4):554–579, 2004). We find the decomposition by iterative regularization using a graph variant of the classical total variation regularization (Rudin et al. in Physica D 60(1–4):259–268, 1992). Using tools from convex analysis, in particular Moreau's identity, we carry out the mathematical study of the proposed method, proving the convergence of the representation and providing an energy decomposition result. The choice of the sequence of scales is also addressed. Our study shows that the initial scale can be related to a discrete version of Meyer's norm (Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations, 2001), which we introduce in the present paper. We propose to use the recent primal-dual algorithm of Chambolle and Pock (J. Math. Imaging Vis. 40:120–145, 2011) to compute both the minimizer of the graph total variation and the corresponding dual norm. By applying the graph model to digital images, we investigate the use of nonlocal methods for the multiscale decomposition task. Since the only assumption needed to apply our method is that the input data live on a graph, we are also able to tackle adaptive multiscale decomposition of irregularly sampled data sets within the same framework. In particular, we provide examples of decompositions of 3-D irregular meshes and point clouds.
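The hierarchical multiscale idea, peeling off a coarse layer and then extracting ever finer detail from the residual as the scale parameter tightens, can be sketched with a toy smoother. Hedged: a moving average stands in for the graph-TV minimization, and the halving schedule mirrors the dyadic scales of Tadmor-Nezzar-Vese; names are illustrative.

```python
def smooth(f, passes):
    # stand-in for one TV-type regularized solve at a given scale
    u = f[:]
    n = len(u)
    for _ in range(passes):
        u = [(u[max(i - 1, 0)] + u[i] + u[min(i + 1, n - 1)]) / 3.0
             for i in range(n)]
    return u

def multiscale(f, levels=4):
    # hierarchical decomposition: extract the coarsest layer first, then
    # ever finer detail layers from the residual (smoothing strength
    # halves at each level, i.e. the scale doubles)
    layers, r = [], f[:]
    passes = 32
    for _ in range(levels):
        u = smooth(r, passes)
        layers.append(u)
        r = [ri - ui for ri, ui in zip(r, u)]
        passes = max(1, passes // 2)
    return layers, r

f = [float(i % 7) for i in range(64)]
layers, r = multiscale(f)
# the layers plus the final residual reconstruct the input exactly
recon = [sum(l[i] for l in layers) + r[i] for i in range(len(f))]
```

The telescoping sum is exact by construction; the paper's contribution is proving convergence and energy decomposition when each layer comes from a graph total variation minimization instead of this toy smoother.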

9.
For solving a class of \(\ell_2\)-\(\ell_0\)-regularized problems, we convexify the nonconvex \(\ell_2\)-\(\ell_0\) term with the help of its biconjugate function. The resulting convex program is given explicitly; it possesses a very simple structure and can be handled by convex optimization tools and standard software. Furthermore, to exploit simultaneously the advantages of convex and nonconvex approximation approaches, we propose a two-phase algorithm in which convex relaxation is used in the first phase, and in the second phase an efficient DCA (Difference of Convex functions Algorithm) based method is run from the solution given by Phase 1. Applications to feature selection in support vector machine learning are presented, with experiments on several synthetic and real-world datasets. Comparative numerical results with standard algorithms show the efficiency and potential of the proposed approaches.
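The DC idea behind Phase 2, splitting a nonconvex penalty into a difference of convex functions and linearizing the concave part at each iteration, fits in a few lines. A hedged 1-D sketch using a capped-l1 penalty min(|x|, 1) as a stand-in (the paper convexifies the l2-l0 term via its biconjugate; this toy only shows the DCA iteration itself):

```python
def soft(v, t):
    return max(abs(v) - t, 0.0) * (1 if v >= 0 else -1)

def dca_capped_l1(a, lam=0.5, iters=20):
    # minimize 0.5*(x - a)^2 + lam*min(|x|, 1) via DCA:
    # write min(|x|,1) = |x| - max(|x|-1, 0) = g - h with g, h convex,
    # then repeatedly linearize h at the current point and solve the
    # remaining convex problem in closed form (soft-thresholding).
    x = a
    for _ in range(iters):
        s = lam * (1 if x > 1 else -1 if x < -1 else 0)  # subgradient of lam*h
        x = soft(a + s, lam)
    return x

# large values escape the shrinkage once |x| > 1 (the penalty is flat there)
assert dca_capped_l1(3.0) == 3.0
# small values behave like plain soft-thresholding
assert dca_capped_l1(0.3) == 0.0
```

Each DCA step decreases the nonconvex objective, which is why starting it from a good convex-relaxation solution (Phase 1) is attractive.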

10.
X-ray computed tomography (CT) plays an important role in cancer diagnosis and radiotherapy. However, the high imaging dose delivered to healthy organs during CT scans is a serious clinical concern. Imaging dose can be reduced by reducing the number of X-ray projections. In this paper, we consider 2D CT reconstruction from a very small number of projections. Several regularization-based reconstruction methods have been proposed for this task, such as total variation (TV) based reconstruction (Sidky and Pan in Phys. Med. Biol. 53:4777, 2008; Sidky et al. in J. X-Ray Sci. Technol. 14(2):119–139, 2006; Jia et al. in Med. Phys. 37:1757, 2010; Choi et al. in Med. Phys. 37:5113, 2010) and a balanced approach with wavelet frame based regularization (Jia et al. in Phys. Med. Biol. 56:3787–3807, 2011). For most existing methods, at least 40 projections are usually needed to obtain a satisfactory reconstruction. To keep the radiation dose as low as possible while increasing the quality of the reconstructed images, one needs to enhance the resolution of the projected image in the Radon domain without increasing the total number of projections. The goal of this paper is to propose a CT reconstruction model with wavelet frame based regularization and Radon domain inpainting. The proposed model simultaneously reconstructs a high-quality image and its corresponding high-resolution measurements in the Radon domain. In addition, we found that the isotropic wavelet frame regularization proposed in Cai et al. (Image restorations: total variation, wavelet frames and beyond, 2011, preprint) is superior to its anisotropic counterpart. Our proposed model, as well as the other models presented in this paper, is solved efficiently by the split Bregman algorithm (Goldstein and Osher in SIAM J. Imaging Sci. 2(2):323–343, 2009; Cai et al. in Multiscale Model. Simul. 8(2):337–369, 2009). Numerical simulations and comparisons are presented at the end.

11.
12.
\(L_1\) regularization is widely used with sparsifying transforms in a variety of applications. In Wasserman et al. (J Sci Comput 65(2):533–552, 2015), the reconstruction of Fourier data by \(L_1\) minimization using the sparsity of edges was proposed as the sparse PA method: the given Fourier data are reconstructed on a uniform grid through convex optimization based on \(L_1\) regularization of the jump function. In this paper, building on that method, we propose a domain decomposition method to further enhance the quality of the sparse PA reconstruction. The main motivation is to limit the global effect of strong edges in \(L_1\) regularization, under which the reconstruction near weak edges does not benefit from the sparse PA method. To this end, we split the given domain into several subdomains and apply \(L_1\) regularization in each subdomain separately. The split function is not necessarily periodic, so we adopt the Fourier continuation method in each subdomain to find Fourier coefficients defined on the subdomain that are consistent with the given global Fourier data. Numerical results show that the proposed domain decomposition method yields sharp reconstructions near both strong and weak edges. The proposed method is suitable when the reconstruction is required only locally.

13.
In this paper, a new total generalized variation (TGV) model for restoring images corrupted by multiplicative noise is proposed; it contains a nonconvex fidelity term and a TGV regularization term. For multiplicative noise removal there exist many models and algorithms, most of which focus on convex approximations so that numerical algorithms with guaranteed convergence can be designed. Unlike these, we handle the proposed nonconvex model with a difference of convex functions algorithm (DCA). We prove that the sequence generated by the DCA converges to a stationary point satisfying the first-order optimality condition. Numerical experiments show that the proposed approach yields a better solution than the gradient projection algorithm applied to the classical multiplicative noise removal models. Finally, we demonstrate the performance of the whole scheme on numerical examples and provide a comparison with other methods; the results show that the proposed algorithm significantly outperforms several previous methods for multiplicative Gamma noise removal.

14.
Data of piecewise smooth images are sometimes acquired as Fourier samples. Standard reconstruction techniques yield the Gibbs phenomenon, causing spurious oscillations at jump discontinuities and an overall reduced rate of convergence to first order away from the jumps. Filtering is an inexpensive way to improve the rate of convergence away from the discontinuities, but it has the adverse side effect of blurring the approximation at the jump locations. On the flip side, high resolution post processing algorithms are often computationally cost prohibitive and also require explicit knowledge of all jump locations. Recent convex optimization algorithms using \(l^1\) regularization exploit the expected sparsity of some features of the image. Wavelets or finite differences are often used to generate the corresponding sparsifying transform and work well for piecewise constant images. They are less useful when there is more variation in the image, however. In this paper we develop a convex optimization algorithm that exploits the sparsity in the edges of the underlying image. We use the polynomial annihilation edge detection method to generate the corresponding sparsifying transform. Our method successfully reduces the Gibbs phenomenon with only minimal blurring at the discontinuities while retaining a high rate of convergence in smooth regions.

15.
To address the compression artifacts that appear in low-bit-rate MPEG image sequences, a regularized post-processing method for MPEG decoding is proposed. First, adaptive quantization yields a quantization interval for the DCT coefficients of the original video, and the reconstructed video sequence is projected onto this interval. Second, since a video is a sequence of still images, two total variation models are obtained, one coupling the temporal dimension and one treating it separately. Finally, the classical primal-dual algorithm is used to solve the resulting convex optimization models, yielding the post-processed decoded video sequence. Experimental results show that the total variation regularizers alleviate compression artifacts to a certain extent and improve the quality of the decoded video.
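The projection step described in the abstract, constraining each reconstructed DCT coefficient to the interval implied by the quantizer, is a simple clip. A hedged sketch with an illustrative step size (in a real decoder the interval comes from the MPEG quantization matrix):

```python
def project_to_quant_interval(c, q):
    # The decoded coefficient c was produced by a quantizer of step q,
    # so the true coefficient must lie in [c - q/2, c + q/2]. Projecting
    # a TV-regularized estimate back into this interval keeps the
    # post-processed video consistent with the bitstream.
    def proj(est):
        return min(max(est, c - q / 2), c + q / 2)
    return proj

proj = project_to_quant_interval(8.0, 2.0)
assert proj(8.4) == 8.4    # inside the interval: unchanged
assert proj(10.5) == 9.0   # above: clipped to c + q/2
assert proj(5.0) == 7.0    # below: clipped to c - q/2
```

Alternating this projection with a total variation smoothing step gives exactly the projected-regularization structure the abstract describes.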

16.
沈马锐, 李金城, 张亚, 邹健. 《计算机应用》 (Journal of Computer Applications), 2020, 40(8): 2358-2364
To address the incomplete reconstructions, blurred edges, and residual noise caused by undersampling in magnetic resonance (MR) image reconstruction, a nonconvex total variation regularized reconstruction model based on the L2 norm is proposed. First, a nonconvex regularizer of the L2 norm is constructed using the Moreau envelope and the minimax concave penalty; it is then applied to the total variation regularizer to build an isotropic nonconvex total variation sparse reconstruction model. The proposed nonconvex regularizer avoids the underestimation of large nonzero elements typical of convex regularizers and reconstructs the edge contours of the target more effectively. At the same time, under certain conditions the overall convexity of the objective function is preserved, so the model can be solved by the alternating direction method of multipliers (ADMM). Simulations reconstructed several MR images under different sampling masks and sampling rates. The results show that, compared with several typical image reconstruction methods, the proposed model performs better: the relative error drops markedly and the peak signal-to-noise ratio (PSNR) improves, by about 4 dB over the classical L1 nonconvex regularized reconstruction model; the reconstructed images also have noticeably better visual quality and effectively preserve the edge details of the original image.
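The claimed advantage over convex regularizers, no underestimation of large nonzero entries, is already visible in the scalar proximal map of the minimax concave penalty ("firm" thresholding). A hedged sketch with illustrative parameters lam=1, gamma=3 (the paper composes the penalty with the total variation operator, not with raw values):

```python
def mcp_prox(v, lam=1.0, gamma=3.0):
    # Firm threshold: the prox of the minimax concave penalty (gamma > 1).
    # Small values are zeroed like soft-thresholding, but large values
    # pass through unshrunk, avoiding the bias of the l1 norm, which
    # shrinks every coefficient by lam.
    a = abs(v)
    s = 1 if v >= 0 else -1
    if a <= lam:
        return 0.0
    if a <= gamma * lam:
        return s * (a - lam) / (1.0 - 1.0 / gamma)
    return v

assert mcp_prox(0.8) == 0.0                 # small: removed as noise
assert mcp_prox(5.0) == 5.0                 # large: untouched (no bias)
assert abs(mcp_prox(2.0) - 1.5) < 1e-12    # transition zone
```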

17.
We consider the problem of restoring images impaired by noise that is simultaneously structured and multiplicative. Our primary motivation is the selective plane illumination microscope, which often suffers from severe inhomogeneities due to light absorption and scattering; this type of degradation also arises in other modalities such as ultrasound imaging. We model the multiplicative noise as a stationary process with known distribution. This leads to a novel convex image restoration model based on a maximum a posteriori estimator. After establishing some analytical properties of the minimizers, we propose a fast optimization method on the GPU. Numerical experiments on 2D fluorescence microscopy images demonstrate the usefulness of the proposed models in practical applications.

18.
This article introduces a class of piecewise-constant image segmentation models that use $L^1$ norms as data fidelity measures. The $L^1$ norms make it possible to segment images with low contrast or with outliers such as impulsive noise. The regions to be segmented are represented as smooth functions rather than by the Heaviside expression of level set functions used in the level set method. To handle both the non-smooth data-fitting and regularization terms, we use a variable splitting scheme to obtain constrained optimization problems and apply an augmented Lagrangian method to solve them. This results in fast and efficient iterative algorithms for piecewise-constant image segmentation. The segmentation framework is extended to vector-valued images as well as to a multi-phase model that handles an arbitrary number of regions. We present comparisons with Chan-Vese models that use $L^2$ fidelity measures to highlight the benefit of the $L^1$ ones.
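Why the $L^1$ fidelity copes with impulsive noise has a one-line explanation: for a piecewise-constant model, the optimal region constant under $L^2$ fidelity is the region mean, while under $L^1$ it is the region median, which ignores outliers. A tiny stdlib sketch (the values are made up):

```python
import statistics

region = [10.0, 10.0, 10.0, 10.0, 200.0]  # one impulsive outlier

# optimal constant under L2 fidelity: the mean, pulled far off by the outlier
c2 = statistics.mean(region)
# optimal constant under L1 fidelity: the median, unaffected by the outlier
c1 = statistics.median(region)

assert c1 == 10.0
assert c2 == 48.0
```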

19.
Feature selection for logistic regression (LR) is still a challenging subject. In this paper, we present a new feature selection method for logistic regression based on a combination of zero-norm and l2-norm regularization. The discontinuity of the zero-norm makes it difficult to find the optimal solution, so we apply a suitable nonconvex approximation of the zero-norm to derive a robust difference of convex functions (DC) program. The DC optimization algorithm (DCA) solves the problem effectively, and the corresponding DCA converges linearly. Compared with traditional methods, numerical experiments on benchmark datasets show that the proposed method reduces the number of input features while maintaining accuracy. Furthermore, as a practical application, the proposed method is used to classify licorice seeds directly from near-infrared spectroscopy data. The simulation results in different spectral regions illustrate that the proposed method achieves classification performance equivalent to traditional logistic regression while suppressing more features. These results show the feasibility and effectiveness of the proposed method.

20.
In this paper we investigate the convergence behavior of a primal-dual splitting method for solving monotone inclusions involving mixtures of composite, Lipschitzian and parallel-sum-type operators proposed by Combettes and Pesquet (in Set-Valued Var. Anal. 20(2):307–330, 2012). First, in the particular case of convex minimization problems, we derive convergence rates for the partial primal-dual gap function associated with a primal-dual pair of optimization problems by making use of conjugate duality techniques. Second, for the general monotone inclusion problem we propose two new schemes that accelerate the sequences of primal and/or dual iterates, provided strong monotonicity assumptions are fulfilled for some of the involved operators. Finally, we apply the theoretical achievements to different types of image restoration problems solved via total variation regularization.
