Similar Documents
20 similar documents retrieved.
1.
Reversible integer wavelet transforms are increasingly popular in lossless image compression, as evidenced by their use in the recently developed JPEG2000 image coding standard. In this paper, a projection-based technique is presented for decreasing the first-order entropy of transform coefficients and improving the lossless compression performance of reversible integer wavelet transforms. The projection technique is developed and used to predict a wavelet transform coefficient as a linear combination of other wavelet transform coefficients. It yields optimal fixed prediction steps for lifting-based wavelet transforms and unifies many wavelet-based lossless image compression results found in the literature. Additionally, the projection technique is used in an adaptive prediction scheme that varies the final prediction step of the lifting-based transform based on a modeling context. Compared to current fixed and adaptive lifting-based transforms, the projection technique produces improved reversible integer wavelet transforms with superior lossless compression performance. It also provides a generalized framework that explains and unifies many previous results in wavelet-based lossless image compression.
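To make the lifting formulation concrete, here is a minimal sketch of the standard reversible 5/3 (LeGall) lifting step used in JPEG2000's integer wavelet transform, in which each detail coefficient is obtained by predicting an odd sample as a fixed linear combination of its even neighbours. It illustrates the kind of fixed prediction step the projection technique optimizes, not the paper's projection method itself; the boundary extension and the even-length-input restriction are simplifying assumptions.

```python
import numpy as np

def lifting_53_forward(x):
    """One level of the reversible integer 5/3 lifting transform (1D).

    Each odd sample is predicted as the average of its even neighbours
    (a fixed linear combination); the rounded residual is the detail band.
    Assumes an even-length input.
    """
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict step: detail = odd - floor((left_even + right_even) / 2)
    right = np.append(even[1:], even[-1])          # symmetric edge extension
    detail = odd - ((even + right) >> 1)
    # Update step: approx = even + floor((left_detail + detail + 2) / 4)
    left = np.insert(detail[:-1], 0, detail[0])
    approx = even + ((left + detail + 2) >> 2)
    return approx, detail

def lifting_53_inverse(approx, detail):
    """Invert the lifting steps exactly (lossless reconstruction)."""
    left = np.insert(detail[:-1], 0, detail[0])
    even = approx - ((left + detail + 2) >> 2)
    right = np.append(even[1:], even[-1])
    odd = detail + ((even + right) >> 1)
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

signal = np.array([12, 15, 14, 13, 90, 92, 91, 89])
a, d = lifting_53_forward(signal)
assert np.array_equal(lifting_53_inverse(a, d), signal)   # perfectly reversible
```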

2.
In this paper, an adaptive predictive multiplicative autoregressive (APMAR) method is proposed for lossless medical image coding. In the proposed method, the adaptive predictor is used to improve the prediction accuracy of encoded image blocks. Each block is first adaptively predicted by one of the seven predictors of the JPEG lossless mode or a local mean predictor. It is clear that the prediction accuracy of an adaptive predictor is better than that of a fixed predictor. Then the residual values are processed by the MAR model with Huffman coding. Comparisons with other methods [MAR, SMAR, adaptive JPEG (AJPEG)] on a series of test images show that our method is suitable for reversible medical image compression.
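As a rough illustration of per-block predictor selection, the sketch below lists the seven JPEG lossless-mode predictors plus a simple local-mean predictor and picks, for each block, the one with the smallest total absolute residual. The block size, the boundary handling, and the selection criterion are illustrative assumptions; the MAR residual modelling and Huffman coding stages of APMAR are not shown.

```python
import numpy as np

# The seven JPEG lossless-mode predictors, expressed with the usual
# causal neighbours: a = left, b = above, c = above-left.
JPEG_PREDICTORS = [
    lambda a, b, c: a,
    lambda a, b, c: b,
    lambda a, b, c: c,
    lambda a, b, c: a + b - c,
    lambda a, b, c: a + (b - c) // 2,
    lambda a, b, c: b + (a - c) // 2,
    lambda a, b, c: (a + b) // 2,
]

def block_residuals(block, predictor):
    """Prediction residuals inside a block; the first row/column fall back
    to the current sample itself (an illustrative boundary choice)."""
    h, w = block.shape
    res = np.zeros_like(block)
    for i in range(h):
        for j in range(w):
            a = block[i, j - 1] if j > 0 else block[i, j]
            b = block[i - 1, j] if i > 0 else block[i, j]
            c = block[i - 1, j - 1] if i > 0 and j > 0 else block[i, j]
            res[i, j] = block[i, j] - predictor(a, b, c)
    return res

def select_predictor(block):
    """Pick, per block, the predictor (7 JPEG modes + local mean) whose
    residuals have the smallest total magnitude."""
    candidates = JPEG_PREDICTORS + [lambda a, b, c, m=int(block.mean()): m]
    costs = [np.abs(block_residuals(block, p)).sum() for p in candidates]
    best = int(np.argmin(costs))
    return best, block_residuals(block, candidates[best])

block = np.array([[100, 102, 104], [101, 103, 105], [102, 104, 106]], dtype=np.int64)
idx, residuals = select_predictor(block)
print("chosen predictor:", idx, "residual energy:", np.abs(residuals).sum())
```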

3.
A new segmentation-based lossless compression method is proposed for colour images. The method exploits the correlation existing among the three colour planes by treating each pixel as a vector of three components, and performing region growing and difference operations using the vectors. The method performs better than the JPEG standard by an average of 0.68 bit/pixel on a 12-image database.

4.
In this paper, we address issues concerning bilevel image compression using JPEG2000. While JPEG2000 is designed to compress both bilevel and continuous tone image data using a single unified framework, there exist significant limitations with respect to its use in the lossless compression of bilevel imagery. In particular, substantial degradation in image quality at low resolutions severely limits the resolution scalable features of the JPEG2000 code-stream. We examine these effects and present two efficient methods to improve resolution scalability for bilevel imagery in JPEG2000. By analyzing the sequence of rounding operations performed in the JPEG2000 lossless compression pathway, we introduce a simple pixel assignment scheme that improves image quality for commonly occurring types of bilevel imagery. Additionally, we develop a more general strategy based on the JPIP protocol, which enables efficient interactive access of compressed bilevel imagery. It may be noted that both proposed methods are fully compliant with Part 1 of the JPEG2000 standard.

5.
An Image Compression Method Based on a Context Model of Wavelet Coefficients
A novel image compression method based on a context model of wavelet coefficients is proposed. The method forms coding contexts by quantizing a linear prediction of the current coefficient and then applies adaptive arithmetic coding. It also exploits the multiresolution property of the wavelet transform to compress the image in a resolution-progressive manner, providing resolution scalability. Experimental results show that the lossless compression ratio of the method is higher than that of SPIHT and of EBCOT as used in JPEG2000, that its compression ratio at each resolution is also higher than EBCOT's, and that its compression time is shorter than EBCOT's.
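A minimal sketch of the context-formation idea described above: each wavelet coefficient is predicted as a linear combination of its causal neighbours, the prediction is quantized into a small context index, and a per-context adaptive model is updated. The neighbour weights, quantization thresholds, and the plain count-based model are illustrative assumptions; a real coder would drive an adaptive arithmetic coder with these contexts.

```python
import numpy as np

def context_index(pred_value, thresholds=(1, 3, 7, 15)):
    """Quantize the magnitude of a linear prediction into a small
    number of contexts (thresholds are illustrative)."""
    mag = abs(pred_value)
    for k, t in enumerate(thresholds):
        if mag <= t:
            return k
    return len(thresholds)

def code_subband(coeffs, weights=(0.5, 0.5, -0.25)):
    """Walk a subband in raster order, predict each coefficient from its
    causal neighbours (left, above, above-left), quantize the prediction
    into a context, and keep adaptive per-context symbol counts.  A real
    implementation would pass (context, residual) to an arithmetic coder."""
    h, w = coeffs.shape
    counts = {}          # context -> {residual symbol: count}, the adaptive model
    for i in range(h):
        for j in range(w):
            left  = coeffs[i, j - 1] if j > 0 else 0
            above = coeffs[i - 1, j] if i > 0 else 0
            diag  = coeffs[i - 1, j - 1] if i > 0 and j > 0 else 0
            pred = float(weights[0] * left + weights[1] * above + weights[2] * diag)
            ctx = context_index(pred)
            residual = int(coeffs[i, j]) - round(pred)
            counts.setdefault(ctx, {}).setdefault(residual, 0)
            counts[ctx][residual] += 1            # adapt the model
    return counts

subband = np.random.default_rng(0).integers(-8, 9, size=(8, 8))
model = code_subband(subband)
print({ctx: sum(c.values()) for ctx, c in model.items()})
```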

6.
This paper introduces a lossless color filter array (CFA) image compression scheme capable of handling high dynamic range (HDR) representation. The proposed pipeline consists of a series of pre-processing operations followed by a JPEG XR encoding module. A deinterleaving step separates the CFA image into sub-images of a single color channel, and each sub-image is processed by a proposed weighted template matching prediction. The utilized JPEG XR codec allows the compression of HDR data at low computational cost. Extensive experimentation is performed using sample test HDR images to validate performance, and the proposed pipeline outperforms existing lossless CFA compression solutions in terms of compression efficiency.
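A minimal sketch of the deinterleaving step, assuming an RGGB Bayer layout and keeping the two green phases as separate sub-images; the weighted template-matching prediction and the JPEG XR encoding stage are not shown.

```python
import numpy as np

def deinterleave_rggb(cfa):
    """Split a single-channel Bayer CFA image (assumed RGGB layout) into
    four quarter-size sub-images, one per colour sample phase."""
    return {
        "R":  cfa[0::2, 0::2],   # red samples
        "G1": cfa[0::2, 1::2],   # green samples on red rows
        "G2": cfa[1::2, 0::2],   # green samples on blue rows
        "B":  cfa[1::2, 1::2],   # blue samples
    }

cfa = np.random.default_rng(1).integers(0, 1 << 12, size=(8, 8), dtype=np.uint16)
subs = deinterleave_rggb(cfa)
assert sum(s.size for s in subs.values()) == cfa.size   # lossless rearrangement
print({k: v.shape for k, v in subs.items()})
```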

7.
For some classes of signals, particularly those dominated by low-frequency components such as seismic data, first- and higher-order differences between adjacent signal samples are generally smaller than the signal samples themselves. In this paper, an evaluation of the differencing approach for the lossless compression of several classes of seismic signals is presented. Three different approaches employing derivatives are developed and applied. The performance of the presented techniques and of the adaptive linear predictor is evaluated and compared for the lossless compression of different seismic signal classes. The proposed differentiator approach yields residual energy comparable to that obtained with the linear predictor technique. The two main advantages of the differentiation method are: (1) the coefficients are fixed integers which do not have to be encoded; and (2) greatly reduced computational complexity relative to existing algorithms. These advantages are particularly attractive for real-time processing, and they have been confirmed experimentally by compressing different seismic signals. Sample results, including the compression ratio, i.e., the ratio of the number of bits per sample without compression to those with compression using arithmetically encoded residues, are also given.
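A minimal sketch of the differencing idea: repeated integer first differences shrink residual magnitudes for low-frequency-dominated signals while remaining exactly invertible. The synthetic sinusoid standing in for seismic data and the orders tried are illustrative; the arithmetic coding of the residues is not shown.

```python
import numpy as np

def forward_difference(x, order):
    """Apply `order` passes of integer first differencing.
    The first sample of each pass is kept as-is so the operation is invertible."""
    x = np.asarray(x, dtype=np.int64)
    for _ in range(order):
        x = np.concatenate(([x[0]], np.diff(x)))
    return x

def inverse_difference(d, order):
    """Undo the differencing by repeated cumulative summation."""
    d = np.asarray(d, dtype=np.int64)
    for _ in range(order):
        d = np.cumsum(d)
    return d

# A smooth, low-frequency dominated test signal (stand-in for seismic data).
t = np.arange(512)
signal = np.round(1000 * np.sin(2 * np.pi * t / 128)).astype(np.int64)

for order in (1, 2, 3):
    resid = forward_difference(signal, order)
    assert np.array_equal(inverse_difference(resid, order), signal)  # lossless
    print(f"order {order}: mean |residual| = {np.abs(resid).mean():.1f} "
          f"vs mean |sample| = {np.abs(signal).mean():.1f}")
```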

8.
A novel perceptually lossless coder is presented for the compression of medical images. Built on the JPEG 2000 coding framework, the heart of the proposed coder is a visual pruning function, embedded with an advanced human vision model to identify and remove visually insignificant/irrelevant information. The proposed coder offers the advantages of simplicity and modularity with bit-stream compliance. Current results have shown superior compression ratio gains over those of its information-lossless counterparts without any visible distortion. In addition, a case study involving 31 medical experts has shown that no perceivable difference of statistical significance exists between the original images and the images compressed by the proposed coder.

9.
To address the low compression ratio of prediction-based lossless hyperspectral image compression algorithms, this paper combines a clustering algorithm with prediction-based hyperspectral compression and proposes a lossless hyperspectral image compression algorithm based on K-means clustering and conventional recursive least squares. First, the spectral vectors of the hyperspectral image are clustered with K-means to increase the similarity among spectral vectors within each cluster. Then, each cluster is predicted separately with conventional recursive least squares to remove the spatial and inter-spectral redundancy of the hyperspectral image. Finally, the prediction-error image is arithmetic-coded to complete the compression. Simulation experiments on the AVIRIS 2006 hyperspectral data show that the proposed algorithm achieves compression ratios of 4.63, 2.82, and 4.77 on 16-bit calibrated, 16-bit uncalibrated, and 12-bit uncalibrated images, respectively, outperforming previously reported algorithms of the same type.
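A minimal sketch of the first stage only: grouping spectral vectors with K-means so that each cluster can be predicted separately. The synthetic data cube, the number of clusters, and the plain NumPy K-means are illustrative assumptions; the recursive-least-squares prediction and arithmetic coding stages are not shown.

```python
import numpy as np

def kmeans(vectors, k, iters=20, seed=0):
    """Plain K-means on spectral vectors (rows).  Returns cluster labels."""
    rng = np.random.default_rng(seed)
    centers = vectors[rng.choice(len(vectors), k, replace=False)].astype(float)
    labels = np.zeros(len(vectors), dtype=int)
    for _ in range(iters):
        # Assign each spectral vector to its nearest centre.
        dists = np.linalg.norm(vectors[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centres; keep a centre in place if its cluster emptied.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = vectors[labels == c].mean(axis=0)
    return labels

# Synthetic hyperspectral cube: rows x cols x bands, each pixel a spectral vector.
rows, cols, bands = 16, 16, 32
rng = np.random.default_rng(42)
cube = rng.normal(size=(rows, cols, bands)) + rng.integers(0, 3, (rows, cols, 1)) * 5.0
spectra = cube.reshape(-1, bands)

labels = kmeans(spectra, k=3)
# Each cluster would then be handed to its own recursive-least-squares predictor.
print("cluster sizes:", np.bincount(labels, minlength=3))
```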

10.
A lossless compression scheme for Bayer color filter array images.
In most digital cameras, Bayer color filter array (CFA) images are captured and demosaicing is generally carried out before compression. Recently, it was found that compression-first schemes outperform the conventional demosaicing-first schemes in terms of output image quality. An efficient prediction-based lossless compression scheme for Bayer CFA images is proposed in this paper. It exploits a context matching technique to rank the neighboring pixels when predicting a pixel, an adaptive color difference estimation scheme to remove the color spectral redundancy when handling red and blue samples, and an adaptive codeword generation technique to adjust the divisor of Rice code for encoding the prediction residues. Simulation results show that the proposed compression scheme can achieve a better compression performance than conventional lossless CFA image coding schemes.
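A minimal sketch of the residue-coding step: signed prediction residues are mapped to non-negative integers and Rice-coded with a divisor adapted to recently coded magnitudes. The zig-zag mapping, the sliding-window parameter estimate, and the sample residues are common conventions used here for illustration, not necessarily the paper's exact codeword-generation rule.

```python
def zigzag(e):
    """Map a signed prediction residue to a non-negative integer."""
    return 2 * e if e >= 0 else -2 * e - 1

def rice_parameter(recent):
    """Pick the Rice parameter k so that 2**k is close to the mean magnitude
    of recently coded residues (a common adaptive estimate)."""
    mean = sum(abs(e) for e in recent) / max(len(recent), 1)
    k = 0
    while (1 << k) < mean:
        k += 1
    return k

def rice_encode(value, k):
    """Rice code: unary quotient (q ones + '0') followed by k remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0"

residues = [0, -1, 2, 1, -3, 0, 5, -2, 1, 0]
bitstream, history = [], []
for e in residues:
    k = rice_parameter(history[-8:])            # adapt k to a sliding window
    bitstream.append(rice_encode(zigzag(e), k))
    history.append(e)
print(" ".join(bitstream))
```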

11.
Chen Junbo, Chen Yaguang. Video Engineering (《电视技术》), 2004, (4): 69-71, 74
This paper discusses the core algorithms of JPEG2000, presents the hardware design of an image compression and coding system, and organizes the compressed code-stream according to the JPEG2000 standard on a DSP platform. Experimental results show that medical images compressed according to the JPEG2000 standard achieve the desired compression performance and meet the specific requirements of medical image processing.

12.
Wavelet-based lossless compression of coronary angiographic images
The final diagnosis in coronary angiography has to be performed on a large set of original images. Therefore, lossless compression schemes play a key role in medical database management and telediagnosis applications. This paper proposes a wavelet-based compression scheme that is able to operate in the lossless mode. The quantization module implements a new way of coding the wavelet coefficients that is more effective than classical zerotree coding. The experimental results obtained on a set of 20 angiograms show that the algorithm outperforms the embedded zerotree coder, combined with the integer wavelet transform, by 0.38 bpp, the set partitioning coder by 0.21 bpp, and the lossless JPEG coder by 0.71 bpp. The scheme is a good candidate for radiological applications such as teleradiology and picture archiving and communication systems (PACS).

13.
Revisiting the JPEG-LS prediction scheme
The authors investigate the prediction scheme of JPEG-LS, the latest JPEG standard for lossless/near lossless image compression. They show that it is not sufficient to consider only horizontal and vertical edges in constructing predictive values. As a result, they propose an additional diagonal edge detection scheme to achieve better prediction accuracy and hence provide potential for further improvement. Experiments show that, in terms of mean-square-error values, the proposed scheme outperforms the existing JPEG-LS prediction for all images tested, while the complexity of the overall algorithm is maintained at a similar level.
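A minimal sketch contrasting the standard JPEG-LS median edge detector (MED) with a simple added diagonal test; the MED rule is the standard one, while the diagonal-edge condition, threshold, and fallback predictor here are illustrative assumptions rather than the authors' scheme.

```python
def med_predict(a, b, c):
    """Standard JPEG-LS median edge detector: a = left, b = above, c = above-left."""
    if c >= max(a, b):
        return min(a, b)        # horizontal or vertical edge: pick the lower neighbour
    if c <= min(a, b):
        return max(a, b)
    return a + b - c            # smooth area: planar prediction

def med_with_diagonal(a, b, c, d, threshold=8):
    """Illustrative extension: if the 45-degree diagonal (through c) looks much
    stronger than the horizontal/vertical gradients, lean on the other diagonal
    neighbour d = above-right instead of the plain MED value."""
    if abs(a - b) < threshold and abs(c - d) >= 2 * threshold:
        return (a + b + d) // 3        # assumed diagonal-edge fallback
    return med_predict(a, b, c)

# Neighbourhood with a strong diagonal edge: bright upper-left, dark lower-right.
a, b, c, d = 40, 40, 200, 35
print("MED prediction:         ", med_predict(a, b, c))
print("With diagonal handling: ", med_with_diagonal(a, b, c, d))
```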

14.
In this paper, we present a comprehensive approach for investigating JPEG compressed test images suspected of being tampered with, either by splicing or by copy-move forgery (CMF). In JPEG compression, the image plane is divided into non-overlapping blocks of size 8 × 8 pixels. A unified approach based on block-processing of the JPEG image is proposed to identify whether the image is authentic or forged and subsequently to localize the tampered region in forged images. In the initial step, a doubly stochastic model (DSM) of block-wise quantized discrete cosine transform (DCT) coefficients is exploited to segregate authentic and forged JPEG images from a standard dataset (CASIA). The scheme is capable of identifying both types of forged images, spliced as well as copy-moved. Once the presence of tampering is detected, the next step is to localize the forged region according to the type of forgery. In the case of spliced JPEG images, the tampered region is localized using block-wise correlation maps of dequantized DCT coefficients and their recompressed versions at different quality factors. The scheme is able to identify the spliced region in images tampered by pasting an uncompressed or JPEG image patch onto a JPEG image or vice versa, with all possible combinations of quality factors. Alternatively, in the case of copy-move forgery, the duplicated regions are identified using highly localized phase congruency features of each block. Experimental results are presented to consolidate the theoretical concept of the proposed technique, and the performance is compared with existing state-of-the-art methods.

15.
Lossless compression techniques are essential in archival and communication of medical images. Here, a new segmentation-based lossless image coding (SLIC) method is proposed, which is based on a simple but efficient region growing procedure. The embedded region growing procedure produces an adaptive scanning pattern for the image with the help of a very-few-bits-needed discontinuity index map. Along with this scanning pattern, an error image data part with a very small dynamic range is generated. Both the error image data and the discontinuity index map data parts are then encoded by the Joint Bi-level Image Experts Group (JBIG) method. The SLIC method resulted in, on the average, lossless compression to about 1.6 b/pixel from 8 b, and to about 2.9 b/pixel from 10 b, with a database of ten high-resolution digitized chest and breast images. In comparison with direct coding by JBIG, Joint Photographic Experts Group (JPEG), hierarchical interpolation (HINT), and two-dimensional Burg prediction plus Huffman error coding methods, the SLIC method performed better by 4% to 28% on the database used.

16.
In this paper, the problem of progressive lossless image coding is addressed. A nonlinear decomposition for progressive lossless compression is presented. The decomposition into subbands is called rank-order polynomial decomposition (ROPD) according to the polynomial prediction models used. The decomposition method presented here is a further development and generalization of the morphological subband decomposition (MSD) introduced earlier by the same research group. It is shown that ROPD provides similar or slightly better results than the compared coding schemes, such as the codec based on set partitioning in hierarchical trees (SPIHT) and the codec based on wavelet/trellis-coded quantization (WTCQ). The proposed method substantially outperforms standard JPEG. The proposed lossless compression scheme has the functionality of a completely embedded bit stream, which allows for data browsing. It is shown that ROPD has a better lossless rate than MSD and also a much better browsing quality when only a part of the bit stream is decompressed. Finally, the possibility of hybrid lossy/lossless compression is demonstrated using ultrasound images. As with other compression algorithms, considerable gain can be obtained if only the regions of interest are compressed losslessly.

17.
We propose a novel symmetry-based technique for scalable lossless compression of 3D medical image data. The proposed method employs the 2D integer wavelet transform to decorrelate the data and an intraband prediction method to reduce the energy of the sub-bands by exploiting the anatomical symmetries typically present in structural medical images. A modified version of the embedded block coder with optimized truncation (EBCOT), tailored according to the characteristics of the data, encodes the residual data generated after prediction to provide resolution and quality scalability. Performance evaluations on a wide range of real 3D medical images show an average improvement of 15% in lossless compression ratios when compared to other state-of-the-art lossless compression methods that also provide resolution and quality scalability, including 3D-JPEG2000, JPEG2000, and H.264/AVC intra-coding.

18.
High dynamic range (HDR) images require a higher number of bits per color channel than traditional images, which poses problems for storage and transmission. Color space quantization has been extensively studied to achieve compact bit encodings for each pixel, but it still yields prohibitively large files. This paper explores the possibility of further compressing HDR images quantized in color space. The compression schemes presented in this paper extend existing lossless image compression standards to encode HDR images. They separate HDR images in their bit-encoding formats into images in the grayscale or RGB domain, which can be directly compressed by existing lossless compression standards such as JPEG, JPEG 2000 and JPEG-LS. The efficacy of the compression schemes is illustrated by presenting extensive results of encoding a series of synthetic and natural HDR images. Significant bit savings of up to 53% are observed when comparing with the original HDR formats and the HD Photo compressed versions. This is beneficial to the storage and transmission of HDR images.
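A minimal sketch of the separation idea, assuming the common 32-bit RGBE pixel encoding (an assumption; the paper covers HDR bit encodings in general): the encoded HDR image is split into four 8-bit planes that an existing lossless codec such as JPEG-LS or JPEG 2000 could compress directly, and the split itself is trivially reversible.

```python
import numpy as np

def split_rgbe_planes(rgbe):
    """Split a (H, W, 4) RGBE-encoded HDR image into four 8-bit planes.
    Each plane is an ordinary grayscale image for a standard lossless codec."""
    return [rgbe[:, :, i].copy() for i in range(4)]

def merge_rgbe_planes(planes):
    """Reassemble the RGBE image from its planes."""
    return np.stack(planes, axis=2)

rgbe = np.random.default_rng(7).integers(0, 256, size=(4, 4, 4), dtype=np.uint8)
planes = split_rgbe_planes(rgbe)
assert np.array_equal(merge_rgbe_planes(planes), rgbe)   # splitting is lossless
print([p.shape for p in planes])
```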

19.
Wu Jiaji, Wu Chengke, Wu Zhensen. Acta Electronica Sinica (《电子学报》), 2006, 34(10): 1828-1832
Region-of-interest (ROI) coding is an important technique introduced in JPEG2000; however, the JPEG2000 algorithm cannot simultaneously support arbitrarily shaped ROIs and arbitrary scaling factors. This paper proposes a 3D volumetric image compression algorithm based on arbitrarily shaped ROIs and 3D lifting-wavelet zero-block coding. The new algorithm supports coding from lossy to lossless both inside and outside the ROI. A simple method for generating lossless masks for arbitrarily shaped ROIs is presented. Considering the characteristics of the 3D subbands, a modified 3D-SPECK zero-block algorithm is used to encode the transformed coefficients. Several other algorithms supporting arbitrarily shaped ROI coding are also evaluated; experiments show that the proposed algorithm achieves better coding performance.

20.
A simple and adaptive lossless compression algorithm is proposed for remote sensing image compression, combining an integer wavelet transform with the Rice entropy coder. By analyzing the probability distribution of the integer wavelet transform coefficients and the characteristics of the Rice entropy coder, the high-frequency and low-frequency sub-bands are handled separately. High-frequency sub-bands are coded directly by the Rice entropy coder, while low-frequency coefficients are predicted before coding; the role of the predictor is to map the low-frequency coefficients into symbols suitable for entropy coding. Experimental results show that the average Compression Ratio (CR) of the approach is about two, which is close to that of JPEG 2000. The algorithm is simple and easy to implement in hardware. Moreover, it has the merits of adaptability and independent data packets, so it is well suited to spaceborne lossless compression applications.
