Similar Literature
20 similar documents found (search time: 156 ms)
1.
Motion deblurring is one of the basic problems in the field of image processing. This paper summarizes the mathematical basis of previous work and presents a deblurring method that improves the estimation of the motion blur kernel and obtains better results than traditional methods. Experiments show that the motion blur kernel loses some important and useful properties during its estimation, which may lead to a poor estimate and increased ringing artifacts. Since the kernel is produced by the motion of the imaging sensor during exposure and traces that motion, this paper enforces the physical meaning of the kernel, such as its continuity and its center, during the iterative process. By adding a post-processing step to the kernel estimation, we remove isolated discrete points and exploit the centralization of the kernel to improve the accuracy of the estimate. Experiments show that this post-processing improves the kernel estimation and yields better results with clear edges.
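The abstract does not spell out the post-processing step; below is a minimal NumPy sketch of one plausible realization, assuming that removing "discrete points" means suppressing isolated kernel entries and that "centralization" means shifting the kernel's center of mass to the center of its support. The function name refine_kernel and the threshold value are hypothetical.

```python
import numpy as np

def refine_kernel(k, thresh_ratio=0.05):
    """Hypothetical post-process for an estimated motion-blur kernel:
    suppress weak isolated entries, then re-center the kernel at its
    center of mass (continuity and centering, as in the abstract)."""
    k = k.copy()
    k[k < thresh_ratio * k.max()] = 0.0          # drop weak, likely spurious entries

    # keep only entries with at least one nonzero 4-neighbor (continuity)
    padded = np.pad(k > 0, 1)
    neighbors = (padded[:-2, 1:-1] | padded[2:, 1:-1] |
                 padded[1:-1, :-2] | padded[1:-1, 2:])
    k[~neighbors] = 0.0

    # shift the center of mass to the geometric center of the support
    ys, xs = np.nonzero(k)
    if len(ys) == 0:
        return k
    w = k[ys, xs]
    cy, cx = np.average(ys, weights=w), np.average(xs, weights=w)
    k = np.roll(k, int(round(k.shape[0] // 2 - cy)), axis=0)
    k = np.roll(k, int(round(k.shape[1] // 2 - cx)), axis=1)

    s = k.sum()
    return k / s if s > 0 else k                  # a blur kernel must sum to 1
```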

2.
In many data stream mining applications, traditional density estimation methods such as kernel density estimation and reduced set density estimation cannot be applied to data streams because of their high computational burden, processing time, and memory requirements. To reduce the time and space complexity, this paper proposes a novel density estimation method for data streams, Dm-KDE, based on the proposed algorithm m-KDE, which designs a KDE estimator with a fixed number of kernel components for a dataset. In this method, Dm-KDE sequence entries are created by m-KDE instead of using all the kernels obtained from other density estimation methods. To further reduce storage, Dm-KDE sequence entries can be merged by computing their KL divergences. Finally, the probability density function over an arbitrary time window, or over the entire stream, can be estimated from the resulting model. In contrast to the state-of-the-art algorithm SOMKE, the distinctive advantage of Dm-KDE is that it achieves the same accuracy with a much smaller fixed number of kernel components, making it suitable for scenarios that demand faster on-line kernel density estimation over data streams. We compare Dm-KDE with SOMKE and M-kernel in terms of density estimation accuracy and running time on various stationary datasets, and also apply Dm-KDE to evolving data streams. Experimental results illustrate the effectiveness of the proposed method.
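The abstract does not give the merge rule; the sketch below shows one plausible reading for one-dimensional Gaussian kernel components, assuming the closed-form KL divergence between Gaussians as the merge criterion and moment matching as the merge itself. All names and the kl_max threshold are hypothetical.

```python
import numpy as np

def kl_gauss(mu1, s1, mu2, s2):
    """Closed-form KL divergence between two 1-D Gaussian kernels."""
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

def merge_components(mus, sigmas, weights, kl_max=0.05):
    """Greedily merge adjacent kernel components whose KL divergence is
    below kl_max, moment-matching to preserve the mixture mean/variance."""
    mus, sigmas, weights = map(list, (mus, sigmas, weights))
    i = 0
    while i < len(mus) - 1:
        if kl_gauss(mus[i], sigmas[i], mus[i+1], sigmas[i+1]) < kl_max:
            w = weights[i] + weights[i+1]
            mu = (weights[i]*mus[i] + weights[i+1]*mus[i+1]) / w
            var = (weights[i]*(sigmas[i]**2 + (mus[i]-mu)**2) +
                   weights[i+1]*(sigmas[i+1]**2 + (mus[i+1]-mu)**2)) / w
            mus[i], sigmas[i], weights[i] = mu, np.sqrt(var), w
            del mus[i+1], sigmas[i+1], weights[i+1]
        else:
            i += 1
    return mus, sigmas, weights
```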

3.
In this paper, we present an adaptive two-step contourlet-wavelet iterative shrinkage/thresholding (TcwIST) algorithm for remote sensing image restoration. The algorithm can handle various linear inverse problems (LIPs), including image deconvolution and reconstruction, and is a new version of the well-known two-step iterative shrinkage/thresholding (TwIST) algorithm. First, we use the split Bregman Rudin-Osher-Fatemi (ROF) model, based on a sparse dictionary, to decompose the image into cartoon and texture parts, represented by wavelets and contourlets, respectively. Second, we use an adaptive method to estimate the regularization parameter and the shrinkage threshold. Finally, we use a line search to choose the step length and a fast method to accelerate convergence. Results show that our method achieves an improvement in signal-to-noise ratio (ISNR) for image restoration together with high convergence speed.
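For orientation, here is the two-step iterative shrinkage/thresholding recursion that TwIST-style algorithms build on, as a minimal NumPy sketch for an l1-regularized linear inverse problem; the paper's adaptive parameter/threshold estimation and its contourlet/wavelet transforms are not reproduced.

```python
import numpy as np

def soft(x, t):
    """Soft-threshold (shrinkage) operator used in IST-type algorithms."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def twist(y, A, At, lam, alpha, beta, n_iter=100):
    """Minimal two-step IST recursion for min ||y - Ax||^2 + lam*||x||_1.
    A and At are callables for the forward operator and its adjoint."""
    x_prev = At(y)
    x = soft(x_prev + At(y - A(x_prev)), lam)    # one plain IST step to start
    for _ in range(n_iter):
        x_ist = soft(x + At(y - A(x)), lam)      # Gamma(x): gradient step + shrink
        x, x_prev = (1 - alpha) * x_prev + (alpha - beta) * x + beta * x_ist, x
    return x

# toy usage: denoising (A = identity) of a sparse signal
y = np.zeros(100); y[[10, 50]] = 5.0; y += 0.1 * np.random.randn(100)
x_hat = twist(y, lambda v: v, lambda v: v, lam=0.3, alpha=1.9, beta=1.0)
```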

4.
This paper presents a novel blind source separation algorithm that integrates probability density function estimation with the fixed-point algorithm. First, a kernel function is constructed from radial basis functions; then a sparse representation of the probability density function of the mixed signals is established, based on the support vector machine recursion method from neural network theory, which yields a closed-form expression for the probability density function. Finally, a new method for estimating the activation function is put forward; combining FastICA with this estimation method gives a new blind source separation algorithm. Simulation results verify that the algorithm successfully separates mixed sub-Gaussian and super-Gaussian source signals with excellent performance.
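The paper estimates the activation function from the data; the sketch below shows the standard one-unit FastICA fixed-point iteration it plugs into, with tanh as the usual stand-in nonlinearity. Names and the whitening assumption (Z has identity covariance) are the textbook ones, not the paper's code.

```python
import numpy as np

def fastica_one_unit(Z, g, g_prime, n_iter=200, tol=1e-6):
    """One-unit FastICA fixed-point iteration on whitened data Z (d x n).
    g is the (possibly estimated) activation function, g_prime its derivative."""
    d, n = Z.shape
    w = np.random.randn(d)
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        wz = w @ Z
        w_new = (Z * g(wz)).mean(axis=1) - g_prime(wz).mean() * w
        w_new /= np.linalg.norm(w_new)
        if np.abs(np.abs(w_new @ w) - 1) < tol:   # converged up to sign
            return w_new
        w = w_new
    return w

# toy data; the paper replaces tanh by its data-driven activation estimate
w = fastica_one_unit(np.random.randn(2, 1000), np.tanh,
                     lambda u: 1 - np.tanh(u)**2)
```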

5.
This paper presents a novel depth estimation method based on feature points. Two points are selected arbitrarily from an object, and their distance in space is assumed to be known. The proposed technique can estimate both depths simultaneously from two images taken before and after the camera moves, even when the motion parameters of the camera are unknown. In addition, this paper analyzes ways to enhance the precision of the estimated depths and presents a feature point image coordinate search algorithm to increase the robustness of the method. The search algorithm automatically finds more accurate image coordinates of the feature points based on their detected coordinates. Experimental results demonstrate the efficiency of the presented method.

6.
Remote sensing image fusion based on Bayesian linear estimation
A new remote sensing image fusion method based on statistical parameter estimation is proposed in this paper. More specifically, Bayesian linear estimation (BLE) is applied to observation models between remote sensing images with different spatial and spectral resolutions. The proposed method estimates only the mean vector and covariance matrix of the high-resolution multispectral (MS) images, instead of assuming a joint distribution between the panchromatic (PAN) image and the low-resolution multispectral image. Furthermore, the proposed method can enhance the spatial resolution of several principal components of the MS images, whereas the traditional Principal Component Analysis (PCA) method enhances only the first principal component. Experimental results with real MS images and a PAN image from Landsat ETM demonstrate that the proposed method outperforms traditional methods based on statistical parameter estimation, the PCA-based method, and the wavelet-based method.
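Bayesian linear estimation here is the linear MMSE estimator, which needs only means and (cross-)covariances rather than a full joint distribution; a minimal sketch with a toy usage follows. The variable names are illustrative, not from the paper.

```python
import numpy as np

def ble(y, mu_x, mu_y, C_xy, C_yy):
    """Bayesian linear (LMMSE) estimator: x_hat = mu_x + C_xy C_yy^{-1} (y - mu_y).
    Only first- and second-order statistics are required."""
    return mu_x + C_xy @ np.linalg.solve(C_yy, y - mu_y)

# toy usage: x is a 3-band MS pixel, y a degraded 2-band observation of it
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 5000))
y = x[:2] + 0.1 * rng.normal(size=(2, 5000))
mu_x, mu_y = x.mean(axis=1), y.mean(axis=1)
C = np.cov(np.vstack([x, y]))                 # joint 5x5 sample covariance
x_hat = ble(y[:, 0], mu_x, mu_y, C[:3, 3:], C[3:, 3:])
```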

7.
In this paper we introduce an image-based virtual exhibition system, designed especially for clothing products. It provides a powerful material substitution function, which is very useful for customized clothing. A novel color substitution algorithm and two texture morphing methods are designed to ensure realistic substitution results. To extend the system to 3D, we reconstruct models from photos, presenting an improved method for modeling the human body: a generic model is deformed with shape details extracted from pictures to generate a new model. Our method begins with model image generation, followed by silhouette extraction and segmentation. It then builds a mapping between the pixels inside every pair of silhouette segments in the model image and in the picture. Our mapping algorithm is based on a slice-space representation that conforms to the natural features of the human body.

8.
Markov random fields (MRFs) can be used for a wide variety of vision problems. In this paper we propose an MRF image segmentation model whose theoretical framework is Bayesian estimation via energy optimization. Graph cuts have emerged as a powerful technique for minimizing the energy functions that arise in low-level vision problems, and the theorem of Ford and Fulkerson states that the min-cut and max-flow problems are equivalent, so the minimum s/t cut can be found by computing a maximum flow from the source s to the sink t. We adopt a new min-cut/max-flow algorithm from the family of augmenting-path algorithms. We propose a parameter estimation method using the expectation-maximization (EM) algorithm, choose a Gaussian mixture model as our image model, and model the density associated with each image segment (or class) as a multivariate Gaussian distribution. Characteristic features describing color, texture, and position are extracted for each pixel. Experimental results illustrate the performance of our method.
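The graph-cut step aside, the GMM parameter estimation by EM can be sketched compactly; below is a minimal NumPy version over per-pixel feature vectors. It is textbook EM, not the paper's implementation.

```python
import numpy as np

def em_gmm(X, K, n_iter=50):
    """Minimal EM for a Gaussian mixture over per-pixel feature vectors X (n x d),
    as used to model the class-conditional densities in segmentation."""
    n, d = X.shape
    rng = np.random.default_rng(0)
    mu = X[rng.choice(n, K, replace=False)]
    cov = np.array([np.cov(X.T) + 1e-6 * np.eye(d)] * K)
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] = p(class k | pixel i)
        r = np.empty((n, K))
        for k in range(K):
            diff = X - mu[k]
            quad = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov[k]), diff)
            r[:, k] = pi[k] * np.exp(-0.5 * quad) / np.sqrt(
                (2 * np.pi) ** d * np.linalg.det(cov[k]))
        r /= r.sum(axis=1, keepdims=True) + 1e-300
        # M-step: re-estimate weights, means, and covariances
        Nk = r.sum(axis=0)
        pi = Nk / n
        mu = (r.T @ X) / Nk[:, None]
        for k in range(K):
            diff = X - mu[k]
            cov[k] = (r[:, k, None] * diff).T @ diff / Nk[k] + 1e-6 * np.eye(d)
    return pi, mu, cov, r
```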

9.
For classifying large data sets, we propose a discriminant kernel that introduces a nonlinear mapping from the joint space of input data and output labels to a discriminant space. Our method differs from traditional ones, which map nonlinearly from the input space to a feature space. The distance induced by our discriminant kernel is Euclidean and Fisher separable, as it is defined from distance vectors in the feature space to distance vectors in the discriminant space. Unlike support vector machines or kernel Fisher discriminant analysis, the classifier does not need to solve a quadratic programming problem or an eigendecomposition problem, so it is especially appropriate for processing large data sets. The classifier can be applied to face recognition, shape comparison, and image classification benchmark data sets. The method is significantly faster than other methods, yet delivers comparable classification accuracy.

10.
In this paper we present a new image zooming algorithm based on surface fitting with an edge constraint. In the surface fitting, we consider not only the relationship between corresponding pixels of the original and enlarged images, but also the neighboring pixels in the enlarged image, according to the local structure of the original image. Furthermore, additional interpolation constraints are used during surface fitting to improve the precision of the super-sampled pixels. Experimental results show that the new method outperforms previous methods based on surface fitting.

11.
To address the limited detail recovery of current sparse-representation-based blind image deconvolution algorithms, this paper proposes a blind deconvolution algorithm based on sparse representation and a gradient prior. Although each image patch can be sparsely represented over a dictionary, the image reconstructed from patches often exhibits artifacts. This paper incorporates gradient-prior and hyper-Laplacian-prior knowledge into the sparse-representation blind deconvolution model and alternately estimates the intermediate sharp image and the blur kernel in an iterative scheme; once the blur kernel is obtained, a hyper-Laplacian non-blind deconvolution algorithm recovers the final sharp image. Experimental results show that, compared with other deblurring algorithms, the proposed algorithm is markedly effective at suppressing ringing.
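The final non-blind step named in the abstract, hyper-Laplacian deconvolution, can be sketched via half-quadratic splitting; the version below approximates the non-convex w-subproblem with a generalized soft-threshold rather than exact root-finding, so it is a simplification of what the paper presumably uses, with illustrative defaults.

```python
import numpy as np

def shrink(v, t, alpha=2/3):
    """Approximate minimizer of t*|w|^alpha + 0.5*(w - v)^2 via a
    generalized soft-threshold (the exact solution needs root-finding)."""
    a = np.abs(v) + 1e-8
    return np.sign(v) * np.maximum(a - t * a ** (alpha - 1.0), 0.0)

def deconv_hyper_laplacian(B, k, lam=2e-3, alpha=2/3, beta_max=256.0):
    """Non-blind deconvolution with a hyper-Laplacian gradient prior via
    half-quadratic splitting; the image subproblem is solved in closed
    form in the Fourier domain (periodic boundaries assumed)."""
    H, W = B.shape
    kh, kw = k.shape
    k_pad = np.zeros((H, W)); k_pad[:kh, :kw] = k
    k_pad = np.roll(k_pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))  # origin at (0,0)
    K = np.fft.fft2(k_pad)
    Dx = np.fft.fft2(np.array([[1.0, -1.0]]), s=(H, W))
    Dy = np.fft.fft2(np.array([[1.0], [-1.0]]), s=(H, W))
    den_d = np.abs(Dx) ** 2 + np.abs(Dy) ** 2
    KtB = np.conj(K) * np.fft.fft2(B)
    I, beta = B.astype(float), 1.0
    while beta < beta_max:
        FI = np.fft.fft2(I)
        gx = np.real(np.fft.ifft2(Dx * FI))      # horizontal gradients
        gy = np.real(np.fft.ifft2(Dy * FI))      # vertical gradients
        wx, wy = shrink(gx, lam / beta, alpha), shrink(gy, lam / beta, alpha)
        num = KtB + beta * (np.conj(Dx) * np.fft.fft2(wx) +
                            np.conj(Dy) * np.fft.fft2(wy))
        I = np.real(np.fft.ifft2(num / (np.abs(K) ** 2 + beta * den_d)))
        beta *= 2.0
    return I
```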

12.
To address the inaccuracy of the normalized-sparsity-prior regularization method when estimating complex blur kernels, this paper introduces image preprocessing and proposes a new blind image deblurring method. The method divides blind deblurring into three steps: first, the image is preprocessed with a bilateral filter and a shock filter, which reduces noise and sharpens edges, benefiting kernel estimation; second, the blur kernel is estimated from the preprocessed image with the normalized-sparsity-prior regularization method; third, the image is non-blindly deconvolved with TV regularization using the estimated kernel. The kernel-estimation model and the non-blind deconvolution model are solved with the fast iterative shrinkage-thresholding algorithm and a fast total-variation image restoration algorithm, respectively. Experimental results show that, for a single blurred image, the method estimates an accurate blur kernel, is robust to noise, speeds up restoration, and produces good restoration quality.
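Of the preprocessing steps named, the shock filter is easy to sketch; below is the classic Osher-Rudin-style update I ← I − sign(ΔI)·|∇I|·dt in NumPy, with periodic boundaries via np.roll. Iteration count and step size are illustrative.

```python
import numpy as np

def shock_filter(img, n_iter=10, dt=0.1):
    """Minimal shock filter: sharpens edges by moving intensity against
    the Laplacian's sign, the edge-enhancing preprocessing named above."""
    I = img.astype(float).copy()
    for _ in range(n_iter):
        Iy, Ix = np.gradient(I)
        lap = (np.roll(I, 1, 0) + np.roll(I, -1, 0) +
               np.roll(I, 1, 1) + np.roll(I, -1, 1) - 4 * I)
        I -= np.sign(lap) * np.hypot(Ix, Iy) * dt
    return I
```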

13.
In image deblurring, salient edge structures play an important role in estimating the blur kernel. This paper proposes a blur-kernel estimation algorithm based on a deep encoder-decoder. First, the encoder-decoder is trained on a constructed training set so that it adaptively extracts the salient edge structures of a blurred image. Next, combining the salient edge structures with the blurred image, the blur kernel is estimated with L2-norm regularization. Finally, the sharp image is estimated using a hyper-Laplacian prior and the estimated kernel. Unlike traditional methods, the proposed method needs no multi-scale iterative framework. Experimental results show that the algorithm obtains good salient edge structures and sharp images while reducing computation time.
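The L2-regularized kernel estimation admits a closed-form Fourier-domain solution in the standard formulation sketched below; the paper's exact data term (e.g., which gradients it matches) is not specified, so this formulation, the function name, and the defaults are assumptions.

```python
import numpy as np

def estimate_kernel_l2(S, B, gamma=1e-2, ksize=31):
    """Closed-form L2-regularized kernel estimate:
    argmin_k ||grad(S) * k - grad(B)||^2 + gamma*||k||^2, where S is the
    salient-edge image and B the blurred image."""
    num, den = 0.0, gamma
    for ax in (0, 1):
        Fs = np.fft.fft2(np.gradient(S, axis=ax))
        Fb = np.fft.fft2(np.gradient(B, axis=ax))
        num = num + np.conj(Fs) * Fb
        den = den + np.abs(Fs) ** 2
    k = np.fft.fftshift(np.real(np.fft.ifft2(num / den)))
    c = np.array(k.shape) // 2                    # crop a ksize x ksize window
    k = k[c[0]-ksize//2:c[0]+ksize//2+1, c[1]-ksize//2:c[1]+ksize//2+1]
    k = np.maximum(k, 0)                          # kernels are non-negative
    return k / k.sum() if k.sum() > 0 else k
```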

14.
Objective: Blind image restoration is one of the common and important problems in image processing, with great research value and wide applications. Camera shake, defocus, and environmental noise all cause image blur. Because blind restoration must solve for the blur kernel and the sharp image simultaneously, the problem is ill-posed and hard to solve. Existing blind restoration methods fall into two classes. The first estimates the latent image and the blur kernel jointly via maximum a posteriori (MAP) estimation, but with inappropriate priors and initialization such coupled methods often converge to the trivial solution, so the restoration is unsatisfactory. The second estimates the blur kernel via variational Bayes, usually by maximizing the edge probability of a strong-edge image; the estimated kernel is robust, but the method places high demands on strong edges in the latent image and is costly and difficult to implement. Weighing the strengths and weaknesses of both, this paper proposes a deblurring method based on learning high-order differential equations. Method: Drawing on the respective advantages of traditional iterative evolution and network learning, the features learned by a network (a guidance image, convolution filters, and sparsity measures) are embedded into the evolution of a high-order differential equation, yielding a learnable differential-equation evolution that models the image evolution process. Specifically, a norm constraint first produces a coarse strong-edge guidance image; then the learned convolution filters and sparsity functions act on the current latent image to produce a better gradient-descent direction, which serves as one step of the differential-equation evolution and yields a refined strong-edge image. Finally, the refined strong-edge image is used to estimate the blur kernel. The method controls kernel estimation effectively through prior knowledge and training data, and thus produces clearer blind restoration results. Result: At the image-modeling level, non-blind restoration experiments verify that the proposed differential-equation evolution is feasible. In quantitative comparisons with other blind restoration methods on several benchmark image databases, the method reaches a peak signal-to-noise ratio of 30.30 and a structural similarity of 0.91, with an error rate as low as 1.24, better than the compared methods. Although it is not the fastest in running time, it consumes far less time than the method of comparable performance. Deblurring results on various types of blurred images also confirm its effectiveness. Conclusion: The proposed learnable high-order differential-equation deblurring method estimates the blur kernel effectively and thus recovers sharper images. Experiments show that the method is flexible across scenes and deblurs images adaptively.
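One evolution step, as described (learned filters and sparsity functions acting on the current latent image, plus a strong-edge guidance term), might look like the sketch below; the learned quantities are taken as given inputs, and all names, the guidance weight mu, and the step size eta are hypothetical.

```python
import numpy as np
from scipy.signal import convolve2d

def evolve_step(I, guide, filters, rho_prime, eta=0.1, mu=0.5):
    """One sketched step of a learnable differential-equation evolution:
    learned filters f_i and the sparsity derivative rho' supply a descent
    direction, and the strong-edge guidance image pulls I toward it."""
    grad = mu * (I - guide)                       # guidance (fidelity) term
    for f in filters:
        resp = convolve2d(I, f, mode='same', boundary='symm')
        # adjoint of convolution = convolution with the flipped filter
        grad += convolve2d(rho_prime(resp), np.flip(f),
                           mode='same', boundary='symm')
    return I - eta * grad
```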

15.
Most state-of-the-art blind image deconvolution methods rely on the Bayesian paradigm to model the deblurring problem and estimate both the blur kernel and latent image. It is customary to model the image in the filter space, where it is supposed to be sparse, and utilize convenient priors to account for this sparsity. In this paper, we propose the use of the spike-and-slab prior together with an efficient variational Expectation Maximization (EM) inference scheme to estimate the blur in the image. The spike-and-slab prior, which constitutes the gold standard in sparse machine learning, selectively shrinks irrelevant variables while mildly regularizing the relevant ones. The proposed variational Expectation Maximization algorithm is more efficient than usual Markov Chain Monte Carlo (MCMC) inference and, also, proves to be more accurate than the standard mean-field variational approximation. Additionally, all the prior model parameters are estimated by the proposed scheme. After blur estimation, a non-blind restoration method is used to obtain the actual estimation of the sharp image. We investigate the behavior of the prior in the experimental section together with a series of experiments with synthetically generated and real blurred images that validate the method's performance in comparison with state-of-the-art blind deconvolution techniques.
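To see why the spike-and-slab prior selectively shrinks, consider the scalar posterior mean under Gaussian noise, which has a closed form; the sketch below is that textbook computation, not the paper's variational EM, and the parameter values are illustrative.

```python
import numpy as np

def spike_slab_shrink(y, pi=0.1, sigma_slab=1.0, sigma_noise=0.1):
    """Posterior mean of x under y = x + noise with the spike-and-slab prior
    p(x) = (1-pi)*delta(x) + pi*N(0, sigma_slab^2): irrelevant coefficients
    are shrunk toward 0, relevant ones only mildly regularized."""
    v = sigma_slab**2 + sigma_noise**2
    # marginal likelihood of y under the slab vs. the spike component
    slab = pi * np.exp(-y**2 / (2 * v)) / np.sqrt(2 * np.pi * v)
    spike = (1 - pi) * np.exp(-y**2 / (2 * sigma_noise**2)) / np.sqrt(
        2 * np.pi * sigma_noise**2)
    q = slab / (slab + spike + 1e-300)            # P(coefficient is relevant | y)
    # conditional posterior mean given the slab is the usual Wiener shrinkage
    return q * (sigma_slab**2 / v) * y
```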

16.
余孝源  谢巍  陈定权  周延 《控制与决策》2020,35(7):1667-1673
The traditional dark channel prior has been applied successfully to single-image deblurring, but when a blurred image contains significant noise, the dark channel prior fails to help kernel estimation. Since fractional-order calculus suppresses signal noise effectively while enhancing the low-frequency components of a signal, this paper combines fractional-order calculus with the dark channel prior of blurred images and proposes a motion-blur kernel estimation method based on an improved dark channel prior. First, a kernel-estimation model for motion-blurred images is built by combining maximum a posteriori estimation with a fractional-order dark channel prior. Second, the non-convexity of the model is handled with the half-quadratic splitting method. Finally, following a coarse-to-fine strategy, the blur kernel is estimated accurately within a multi-scale iterative framework, and the sharp image is then recovered with a non-blind deblurring method. Experimental results show that, on blurred images with and without significant noise, the proposed algorithm requires longer computation time but obtains more accurate blur kernels, reduces image noise and ringing artifacts, and improves the quality of the estimated sharp image; it also applies to different types of blurred images.
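The fractional-order modification aside, the plain dark channel that the prior builds on is simple to compute; a minimal sketch (using scipy.ndimage for the local minimum filter) follows. The patch size is the conventional default, not the paper's setting.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel: per-pixel minimum over color channels, followed by a
    local minimum filter. Sharp natural images have many near-zero entries;
    blur inflates them, which is what makes the prior discriminative."""
    m = img.min(axis=2) if img.ndim == 3 else img
    return minimum_filter(m, size=patch)
```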

17.
Based on the fact that a good restored image tends to be sharp rather than blurred, this paper proposes an effective blind image deblurring method based on multiple priors. Current state-of-the-art deblurring methods restore certain scenes poorly, leaving residual blur and unclear contours and details. To solve these problems, multiple priors, including the dark channel prior, an intensity prior, and a gradient prior, are combined and weighted, providing more prior information for contours and details during restoration. These priors are placed in a MAP framework, the blur kernel is estimated through iteration, and a non-blind restoration method then recovers the original image. Across a variety of natural scenes, the proposed method improves contours and details compared with current state-of-the-art methods.

18.
Full-image kernel estimation strategies are usually affected by smooth and fine-scale background regions, and they are time-consuming for large images. Since not all pixels in a blurred image are informative, and one usually wants to restore objects of interest in the foreground rather than the background, we propose a novel concept, "SalientPatch", to denote informative regions for better blur kernel estimation without user guidance, by computing three cues (objectness probability, structure richness, and local contrast). Although these cues are not new, integrating them so that they complement each other in motion blur restoration is innovative. Experiments demonstrate that our SalientPatch-based deblurring algorithm can significantly speed up kernel estimation while guaranteeing high-quality recovery for large blurry images.
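Two of the three cues can be sketched with simple statistics; the code below scores overlapping patches by gradient richness and local contrast (the objectness cue would require an external detector). The multiplicative combination is an assumption, not the paper's formula.

```python
import numpy as np

def score_patches(img, psize=64, stride=32):
    """Score overlapping patches by structure richness (mean gradient
    magnitude) and local contrast (intensity std); higher scores mark
    more informative regions for kernel estimation."""
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)
    scores = []
    for i in range(0, img.shape[0] - psize + 1, stride):
        for j in range(0, img.shape[1] - psize + 1, stride):
            richness = grad[i:i+psize, j:j+psize].mean()
            contrast = img[i:i+psize, j:j+psize].std()
            scores.append(((i, j), richness * contrast))
    return sorted(scores, key=lambda s: -s[1])    # best candidates first
```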

19.
Motion blur is a common problem in digital photography. In dim light, a long exposure is needed to acquire a satisfactory photograph, and if the camera shakes during the exposure, motion blur is captured. Image deblurring has become a crucial image-processing challenge because of the popularity of handheld cameras. Traditional motion deblurring methods assume that the blur degradation is shift-invariant, so the deblurring problem reduces to a deconvolution problem. Edge-specific motion deblurring sharpens the strong edges of the image and then uses them to estimate the blur kernel; however, this also enhances noise and narrow edges, which cause ambiguity and ringing artifacts. We propose a hybrid single-image motion deblurring algorithm to solve these problems. First, we separate the blurred image into strong-edge parts and smooth parts. We apply an improved patch-based sharpening method to enhance the strong edges for kernel estimation, while for the smooth parts we use a bilateral filter to remove narrow edges and noise, avoiding ringing artifacts. Experimental results show that the proposed method deblurs a variety of images efficiently and produces quality comparable to other state-of-the-art techniques.
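The bilateral filter applied to the smooth part is standard; below is a compact shift-based NumPy implementation, a sketch rather than the paper's code (boundaries wrap via np.roll, and the sigma defaults are illustrative).

```python
import numpy as np

def bilateral(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Plain bilateral filter: a spatial Gaussian weight times a range
    (intensity-difference) Gaussian weight, O(radius^2) shifts per image."""
    I = img.astype(float)
    out = np.zeros_like(I)
    weight = np.zeros_like(I)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(I, dy, axis=0), dx, axis=1)
            w = np.exp(-(dy*dy + dx*dx) / (2 * sigma_s**2)
                       - (shifted - I)**2 / (2 * sigma_r**2))
            out += w * shifted
            weight += w
    return out / weight
```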

20.
In this paper we propose a space-variant blur estimation and effective denoising/deconvolution method for combining a long-exposure blurry image with a short-exposure noisy one. The blur in the long-exposure shot is mainly caused by camera shake or object motion, and the noise in the underexposed image is introduced by the gain applied to the sensor when the ISO is set to a high value. Because of the space-variant degradation, the image pair is divided into overlapping patches for processing. The main idea of the deconvolution algorithm is to incorporate a combination of prior image models into a spatially varying deblurring/denoising framework applied to each patch. The method employs kernel and parameter estimation to choose between denoising and deblurring for each patch. Experiments on both synthetic and real images validate the proposed approach.
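The overlapping-patch pipeline needs a seam-free recombination step; one common choice, sketched below under the assumption of windowed averaging, blends per-patch results with a raised-cosine window. The function name and its defaults are hypothetical.

```python
import numpy as np

def blend_patches(shape, patches, psize=128, stride=64):
    """Recombine per-patch restorations into a full image with a
    raised-cosine window so space-variant results blend smoothly in the
    overlaps. `patches` maps top-left corners (i, j) to psize x psize arrays."""
    win = np.outer(np.hanning(psize), np.hanning(psize)) + 1e-8
    acc = np.zeros(shape)
    norm = np.zeros(shape)
    for (i, j), p in patches.items():
        acc[i:i+psize, j:j+psize] += win * p
        norm[i:i+psize, j:j+psize] += win
    return acc / np.maximum(norm, 1e-8)
```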
