Similar Documents
20 similar documents found (search time: 15 ms).
1.
At present, block-transform coding is probably the most popular approach to image compression. In this approach, the compressed images are decoded using only the transmitted transform data. We formulate image decoding as an image recovery problem: the decoded image is reconstructed using not only the transmitted data but also the prior knowledge that images before compression do not display between-block discontinuities. A spatially adaptive image recovery algorithm is proposed based on the theory of projections onto convex sets. Apart from the data constraint set, this algorithm uses a new constraint set that enforces between-block smoothness. The novelty of this set is that it captures both the local statistical properties of the image and human perceptual characteristics. A simplified spatially adaptive recovery algorithm is also proposed, along with an analysis of its computational complexity. Numerical experiments demonstrate that the proposed algorithms outperform both the JPEG deblocking recommendation and our previous projection-based image decoding approach.
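The alternating-projection idea described in this abstract can be sketched as follows. This is only a minimal illustration, not the paper's adaptive algorithm: it assumes a single uniform quantization step `q` for all DCT coefficients and uses plain boundary averaging as the smoothness projection.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def blockwise_dct(x, C):
    h, w = x.shape
    b = x.reshape(h // 8, 8, w // 8, 8).transpose(0, 2, 1, 3)
    return C @ b @ C.T            # per-block 2-D DCT via batched matmul

def blockwise_idct(d, C):
    hb, wb = d.shape[:2]
    b = C.T @ d @ C
    return b.transpose(0, 2, 1, 3).reshape(hb * 8, wb * 8)

def pocs_deblock(decoded, q=16.0, n_iter=10):
    """Alternate two convex projections: (P1) smooth across 8x8 block
    seams; (P2) clip DCT coefficients back into the quantization bins
    defined by the transmitted data (the data constraint set)."""
    C = dct_matrix(8)
    x = decoded.astype(float).copy()
    h, w = x.shape
    q_idx = np.round(blockwise_dct(x, C) / q)   # transmitted indices
    for _ in range(n_iter):
        for j in range(8, w, 8):                # P1: vertical seams
            m = 0.5 * (x[:, j - 1] + x[:, j])
            x[:, j - 1] = x[:, j] = m
        for i in range(8, h, 8):                # P1: horizontal seams
            m = 0.5 * (x[i - 1, :] + x[i, :])
            x[i - 1, :] = x[i, :] = m
        d = blockwise_dct(x, C)                 # P2: data consistency
        d = np.clip(d, (q_idx - 0.5) * q, (q_idx + 0.5) * q)
        x = blockwise_idct(d, C)
    return x
```

Because the data projection is applied last, the output stays consistent with the transmitted quantization bins at every iteration.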

2.
A blind/no-reference (NR) method is proposed in this paper for image quality assessment (IQA) of images compressed in the discrete cosine transform (DCT) domain. When an image is measured by structural similarity (SSIM), two statistics, the mean intensity and the variance of the image, are used as features. However, these parameters of the original copy are unavailable in NR applications, so SSIM is not directly applicable. To extend SSIM to this setting, we fit the quantization noise in the spatial domain with a Gaussian model and estimate the noise distribution directly from the compressed version. Owing to this reformulation, the revised SSIM does not require the original image as a reference. Heavy compression always zeroes some DCT coefficients, which must be compensated for accurate parameter estimation. By studying the quantization process, a machine-learning-based algorithm is proposed to estimate the quantization noise while taking image content into consideration. Compared with state-of-the-art algorithms, the proposed IQA method is more intuitive and efficient. Experimental results verify that the proposed algorithm, given no reference image, achieves efficacy comparable to some full-reference (FR) methods such as SSIM, which require the reference image.
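For reference, the SSIM ingredients named in this abstract (means, variances, covariance) combine as follows. This is a simplified single-window sketch of the standard full-reference SSIM formula, not the paper's no-reference estimator:

```python
import numpy as np

def global_ssim(x, y, L=255.0):
    """Single-window SSIM over whole images (the usual definition applies
    it in sliding windows); L is the dynamic range of the pixel values."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # standard stabilizers
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```

The NR method in the paper replaces the reference-image statistics with values estimated from the Gaussian quantization-noise model.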

3.
We propose a high-resolution image reconstruction algorithm that accounts for inaccurate subpixel registration. A regularized iterative reconstruction algorithm is adopted to overcome the ill-posedness resulting from inaccurate subpixel registration. In particular, we use multichannel image reconstruction algorithms suitable for multiframe applications. Since the registration error in each low-resolution image has a different pattern, the regularization parameters are determined adaptively for each channel, and we propose two methods for estimating the regularization parameter automatically. The proposed algorithms are robust against registration error noise and require no prior information about the original image or the registration error process. The information needed to determine the regularization parameter and to reconstruct the image is updated at each iteration step based on the partially reconstructed image. Experimental results indicate that the proposed algorithms outperform conventional approaches in terms of both objective measurements and visual evaluation.

4.
卢琪 《信息技术》2009,33(8):53-56,60
Digital watermarking has become a new research focus in signal processing. This paper first introduces digital image watermarking and the relevant background on the DCT algorithm, and then proposes a new digital watermark encryption algorithm that generates the watermark information according to the size of the image. Watermark embedding and extraction were simulated on two images in Matlab, verifying the effectiveness of the watermarking algorithm.

5.
In a spatially adaptive subsampling scheme, the subsampling lattice is adapted to the local spatial frequency content of an image sequence. In this paper, we use rate-distortion theory to show that spatially adaptive subsampling gives a better performance than subsampling with a fixed sampling lattice. A new algorithm that optimally assigns sampling lattices to different parts of the image is presented. The proposed spatially adaptive subsampling method can be applied within a motion-compensated coding scheme as well. Experiments show an increased performance over fixed lattice subsampling.

6.
In this paper, a sampling-adaptive block compressed sensing with smoothed projected Landweber based on edge detection (SA-BCS-SPL-ED) image reconstruction algorithm is presented. The algorithm takes full advantage of the characteristics of block compressed sensing by assigning each block a sampling rate according to its texture complexity. Block complexity is measured by the variance of the texture gradient: blocks with large variance receive high sampling rates and blocks with small variance receive low ones. To avoid over-sampling and under-sampling, maximum and minimum sampling rates are imposed on each block, and an iterative procedure makes the actual sampling rate of the whole image approximately equal to the preset value. For the directional transforms, the discrete cosine transform (DCT), dual-tree discrete wavelet transform (DDWT), discrete wavelet transform (DWT), and contourlet transform (CT) are used in the experiments. Experimental results show that, compared with block compressed sensing with smoothed projected Landweber (BCS-SPL), the proposed algorithm performs much better at the same sampling rate on simple-texture images and even on complicated-texture images. Moreover, SA-BCS-SPL-ED-DDWT works well for most images, while SA-BCS-SPL-ED-CT tends to be better only for images with more complicated textures.
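The variance-based rate assignment described above might be sketched like this. The block size, rate bounds, and fixed-point normalization below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def block_sampling_rates(img, block=16, target=0.3, r_min=0.05, r_max=0.7):
    """Assign each block a sampling rate proportional to the variance of
    its texture gradient, clipped to [r_min, r_max], then rescale so the
    mean rate over the image approximately equals the target rate."""
    gy, gx = np.gradient(img.astype(float))
    g = np.hypot(gx, gy)                      # texture-gradient magnitude
    h, w = img.shape
    v = (g.reshape(h // block, block, w // block, block)
          .transpose(0, 2, 1, 3)
          .reshape(-1, block * block)
          .var(axis=1))                       # per-block gradient variance
    s = target
    for _ in range(30):                       # fixed-point normalization
        r = np.clip(v / (v.mean() + 1e-12) * s, r_min, r_max)
        s *= target / max(r.mean(), 1e-12)
    r = np.clip(v / (v.mean() + 1e-12) * s, r_min, r_max)
    return r.reshape(h // block, w // block)
```

Clipping enforces the maximum and minimum sampling rates, and the rescaling loop plays the role of the iteration that drives the overall rate toward the preset value.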

7.
Spatially adaptive wavelet-based multiscale image restoration
In this paper, we present a new spatially adaptive approach to the restoration of noisy blurred images, which is particularly effective at producing sharp deconvolution while suppressing the noise in the flat regions of an image. This is accomplished through a multiscale Kalman smoothing filter applied to a prefiltered observed image in the discrete, separable, 2-D wavelet domain. The prefiltering step involves constrained least-squares filtering based on optimal choices for the regularization parameter. This leads to a reduction in the support of the required state vectors of the multiscale restoration filter in the wavelet domain and improvement in the computational efficiency of the multiscale filter. The proposed method has the benefit that the majority of the regularization, or noise suppression, of the restoration is accomplished by the efficient multiscale filtering of wavelet detail coefficients ordered on quadtrees. Not only does this lead to potential parallel implementation schemes, but it permits adaptivity to the local edge information in the image. In particular, this method changes filter parameters depending on scale, local signal-to-noise ratio (SNR), and orientation. Because the wavelet detail coefficients are a manifestation of the multiscale edge information in an image, this algorithm may be viewed as an "edge-adaptive" multiscale restoration approach.

8.
In this paper, we propose a novel learning-based image restoration scheme for compressed images by suppressing compression artifacts and recovering high frequency (HF) components based upon the priors learnt from a training set of natural images. The JPEG compression process is simulated by a degradation model, represented by the signal attenuation and the Gaussian noise addition process. Based on the degradation model, the input image is locally filtered to remove Gaussian noise. Subsequently, the learning-based restoration algorithm reproduces the HF component to handle the attenuation process. Specifically, a Markov-chain based mapping strategy is employed to generate the HF primitives based on the learnt codebook. Finally, a quantization constraint algorithm regularizes the reconstructed image coefficients within a reasonable range, to prevent possible over-smoothing and thus ameliorate the image quality. Experimental results have demonstrated that the proposed scheme can reproduce higher quality images in terms of both objective and subjective quality.

9.
Spatially adaptive multiplicative noise image denoising technique
A new image denoising technique in the wavelet transform domain for multiplicative noise is presented. Unlike most existing techniques, this approach requires no prior modeling of either the image or the noise statistics. It uses the variance of the detail wavelet coefficients to decide whether to smooth or to preserve them. The approach exploits the wavelet transform's property of generating three detail subimages, each providing specific information with a certain feature directivity, which allows information from the different detail subimages to be combined to direct the filtering operation. The algorithm uses a hypothesis test based on the F-distribution to decide whether detail wavelet coefficients are due to image-related features or to noise. The effectiveness of the proposed technique is tested with orthogonal as well as biorthogonal mother wavelets in order to study the behavior of the smoothing process under different wavelet types.
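A toy version of the variance-test idea might look as follows. It uses a one-level Haar transform, a robust MAD noise estimate, and a fixed critical value standing in for the F-distribution quantile, so it only illustrates the decision rule, not the paper's method:

```python
import numpy as np

def haar2(x):
    """One level of the 2-D orthonormal Haar transform."""
    s = np.sqrt(2.0)
    lo = (x[:, 0::2] + x[:, 1::2]) / s
    hi = (x[:, 0::2] - x[:, 1::2]) / s
    ll, lh = (lo[0::2] + lo[1::2]) / s, (lo[0::2] - lo[1::2]) / s
    hl, hh = (hi[0::2] + hi[1::2]) / s, (hi[0::2] - hi[1::2]) / s
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2."""
    s = np.sqrt(2.0)
    lo = np.empty((2 * ll.shape[0], ll.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2], lo[1::2] = (ll + lh) / s, (ll - lh) / s
    hi[0::2], hi[1::2] = (hl + hh) / s, (hl - hh) / s
    x = np.empty((lo.shape[0], 2 * lo.shape[1]))
    x[:, 0::2], x[:, 1::2] = (lo + hi) / s, (lo - hi) / s
    return x

def denoise(img, crit=2.0, win=4):
    """Zero detail windows whose variance ratio to the estimated noise
    variance falls below the critical value `crit` (a stand-in for the
    F-test threshold)."""
    ll, lh, hl, hh = haar2(img)
    sigma2 = (np.median(np.abs(hh)) / 0.6745) ** 2  # robust noise estimate
    for d in (lh, hl, hh):                          # three detail subimages
        h, w = d.shape
        for i in range(0, h, win):
            for j in range(0, w, win):
                blk = d[i:i + win, j:j + win]
                if blk.var() / max(sigma2, 1e-12) < crit:
                    blk[:] = 0.0                    # treat as noise
    return ihaar2(ll, lh, hl, hh)
```

With `crit=0` nothing is zeroed and the transform round-trips exactly; larger `crit` smooths more aggressively.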

10.
This paper proposes an image authentication scheme that detects illegal modifications for image vector quantization (VQ). In the proposed scheme, the index table is divided into non-overlapping index blocks, and the authentication data is generated using a pseudo-random number sequence. The scheme can adaptively determine both the size of the authentication data and the number of indices in each index block; the selected indices are then used to embed the secret data, producing the embedded image. To authenticate a given VQ-compressed image, two sets of authentication data are needed for the tamper detection process: one is regenerated from the pseudo-random number sequence, and the other is extracted from the compressed image. The experimental results demonstrate that the proposed scheme achieves acceptable quality of the embedded image while maintaining good detection accuracy.
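As a rough illustration of the pseudo-random authentication idea (not the paper's adaptive scheme), one could hide one keyed pseudo-random bit in the least significant bit of each selected VQ index and regenerate the same sequence at the verifier; the keying and LSB embedding here are hypothetical:

```python
import numpy as np

def auth_bits(key, n):
    """Keyed pseudo-random authentication sequence (hypothetical keying)."""
    return np.random.default_rng(key).integers(0, 2, size=n)

def embed(indices, key):
    """Hide one authentication bit in the LSB of each VQ index."""
    return (indices & ~1) | auth_bits(key, indices.size)

def tamper_map(indices, key):
    """True wherever the extracted LSB disagrees with the regenerated bit,
    i.e. where the index was modified after embedding."""
    return (indices & 1) != auth_bits(key, indices.size)
```

A verifier sharing `key` flags exactly the indices whose embedded bits no longer match the regenerated sequence.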

11.
Moving object segmentation in DCT-based compressed video
A block-based automatic segmentation algorithm has been developed for detecting and tracking moving objects in DCT-based compressed video. The proposed algorithm segments moving objects at block resolution using the stochastic behaviour of the image blocks in the DCT domain.

12.
High-resolution images are often desired but made impossible by hardware limitations. For the high-resolution model proposed by Bose and Boo (see Int. J. Imaging Syst. Technol., vol. 9, pp. 294-304, 1998), an iterative wavelet-based algorithm has been shown to outperform the traditional least-squares method when the resolution ratio M is two or four. In this paper, we discuss the minimally supported biorthogonal wavelet system that arises from the mathematical model of Bose and Boo and propose a wavelet-based algorithm for an arbitrary resolution ratio M ≥ 2. The numerical results indicate that the algorithm based on our biorthogonal wavelet system performs better in high-resolution image reconstruction than both the wavelet-based algorithm in the literature and the commonly used least-squares method.

13.
Sidelobe artifacts are a common problem in image reconstruction from finite-extent Fourier data. Conventional shift-invariant windows reduce sidelobe artifacts only at the expense of worsened mainlobe resolution. Spatially variant apodization (SVA) was previously introduced as a means of reducing sidelobe artifacts, while preserving mainlobe resolution. Although the algorithm has been shown to be effective in synthetic aperture radar (SAR), it is heuristically motivated and it has received somewhat limited analysis. In this paper, we show that SVA is a version of minimum-variance spectral estimation (MVSE). We then present a complete development of the four types of two-dimensional SVA for image reconstruction from partial Fourier data. We provide simulation results for various real-valued and complex-valued targets and point out some of the limitations of SVA. Performance measures are presented to help further evaluate the effectiveness of SVA.

14.
We propose a robust, object-based approach to high-resolution image reconstruction from video using the projections onto convex sets (POCS) framework. The proposed method employs a validity map and/or a segmentation map. The validity map disables projections based on observations with inaccurate motion information for robust reconstruction in the presence of motion estimation errors; while the segmentation map enables object-based processing where more accurate motion models can be utilized to improve the quality of the reconstructed image. Procedures for the computation of the validity map and segmentation map are presented. Experimental results demonstrate the improvement in image quality that can be achieved by the proposed methods.

15.
陈善学  胡灿  屈龙瑶 《电讯技术》2016,56(7):717-723
To address the problem that existing compressed-sensing reconstruction algorithms for hyperspectral images exploit the spatial-spectral characteristics insufficiently, yielding reconstructions of limited quality, a variable-sampling-rate block compressed sensing scheme combined with optimized inter-band prediction is proposed. At the encoder, the bands of the hyperspectral image are divided by band clustering into reference bands and ordinary bands, and the different bands are sampled separately with block compressed sensing at different precisions. At the decoder, the reference bands are reconstructed directly with the sparsity adaptive matching pursuit (SAMP) algorithm, while for the ordinary bands a new model combining optimized inter-band prediction with SAMP is designed: the reconstructed reference bands bidirectionally predict each ordinary band, the prediction is compressively projected, the residual between the measured and predicted projections of the ordinary band is computed, and SAMP reconstructs this residual to correct the prediction. Experiments show that, compared with similar algorithms, the proposed method fully exploits the spatial-spectral characteristics of hyperspectral images, effectively improves the quality of the reconstructed images, has low encoding complexity, and is easy to implement in hardware.

16.
Saliency detection in the compressed domain for adaptive image retargeting
Saliency detection plays an important role in many image processing applications, such as region-of-interest extraction and image resizing. Existing saliency detection models are built in the uncompressed domain. Since most images on the Internet are stored in a compressed format such as JPEG (joint photographic experts group), we propose a novel saliency detection model in the compressed domain. The intensity, color, and texture features of the image are extracted from the discrete cosine transform (DCT) coefficients in the JPEG bit-stream. The saliency value of each DCT block is obtained from a Hausdorff distance calculation and feature map fusion. Based on the proposed saliency detection model, we further design an adaptive image retargeting algorithm in the compressed domain. The retargeting algorithm resizes images with a multi-operator scheme comprising block-based seam carving and image scaling, and a new definition of texture homogeneity is given to determine the number of block-based seams to remove. Thanks to the accurate saliency information derived directly from the compressed domain, the proposed image retargeting algorithm effectively preserves the visually important regions of images and efficiently removes the less crucial regions, significantly outperforming the relevant state-of-the-art algorithms, as demonstrated by in-depth analysis in extensive experiments.

17.
管春  陶勃宇 《电讯技术》2017,57(9):981-985
To address the loss of image detail and the staircase effect caused by a total variation (TV) regularizer with fixed parameters in sparse image reconstruction, an image sparse reconstruction algorithm with an adaptive second-order total generalized variation (TGV) constraint is proposed. The algorithm uses a second-order TGV model to balance the first- and second-order derivatives of the image, and adaptively updates the weighting coefficients at each iteration from the current reconstruction and the corresponding tensor function. Compared with the TV regularization model and the fixed-parameter TGV model, the algorithm better preserves image contours and details and improves the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) of the reconstructed image.

18.
We consider the estimation of the unknown parameters in the problem of reconstructing a high-resolution image from multiple undersampled, shifted, degraded frames with subpixel displacement errors. We derive mathematical expressions for the iterative calculation of the maximum-likelihood estimate of the unknown parameters given the observed low-resolution images. These iterative procedures require the manipulation of block semi-circulant (BSC) matrices, that is, block matrices with circulant blocks, and we show how such matrices can be manipulated easily in order to calculate the unknown parameters. Finally, the proposed method is tested on real and synthetic images.

19.
3-D reconstruction of building models from a single high-resolution SAR image
A method for building extraction and 3-D reconstruction from a single high-resolution SAR image is proposed. First, the types of electromagnetic scattering produced by buildings in high-resolution SAR images are analyzed, backscattering computation methods are given for the different types of scattering regions, and on this basis a method is presented for simulating SAR images of building feature regions from a 3-D CAD model of the building. Second, a method is given for determining the position and orientation of the building footprint from the building's double-bounce scattering structure, and an iterative matching procedure based on differences between distribution density functions is proposed to invert the building height: the simulated backscattering coefficients are used to partition the building's scattering regions, the distribution-density-function differences between feature regions are computed, and the test height of the simulated image achieving the maximum matching score is taken as the inverted building height. Finally, building extraction and 3-D reconstruction experiments on two real airborne high-resolution SAR images with different roof types give satisfactory results, verifying the feasibility and effectiveness of the proposed method.

20.
An anti-interference reconstruction algorithm for compressed-sensing-based image compression
When images compressed by traditional transform coding are transmitted over a wireless channel, Gaussian random interference corrupts important transform coefficients and causes loss of local content in the reconstructed image. Exploiting the property of compressed sensing (CS) that all signal components are equally important, this paper analyzes the feasibility of discarding corrupted CS components to resist interference and proposes a CS-based anti-interference reconstruction algorithm for image compression. Assuming the positions of the CS components corresponding to the bits corrupted by Gaussian random interference are known, the algorithm forms a new CS signal and reconstruction matrix from the remaining components and then applies iterative thresholding reconstruction. Simulation results show that the algorithm reconstructs images accurately at low bit error rates (BER), and at high BER yields reconstructed images with no loss of content and only a small drop in overall quality. The algorithm thus overcomes the weak interference resistance of transform-based compression and of iterative thresholding reconstruction, offering a feasible solution for resisting Gaussian random interference in wireless image transmission.
