Similar Documents
Found 20 similar documents (search time: 46 ms)
1.
The aim of image compression is to reduce the total amount of data required to represent an image, which in turn lowers the demand for transmission bandwidth and storage space. In this work, we propose an image-fusion-based approach that can be exploited to further reduce the file size of a JPEG-compressed image. Before performing JPEG compression, we compute both an intensity image and a subsampled colour representation of the image undergoing compression. Then, as in JPEG compression, discrete cosine transformation, quantisation and entropy coding are applied to these images, and the results are stored in a single image file container. In the decoder, the two images are reconstructed and fused to obtain the decoded image. Our experiments show that the proposed method meets the lower storage and bandwidth requirements by reducing the average bits per pixel of the encoded image below that of the JPEG-compressed image.
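The fusion idea can be sketched as follows: keep a full-resolution intensity (luma) stream and a heavily subsampled colour stream, encode both as ordinary JPEG payloads, and fuse them at decode time. Below is a minimal Python/Pillow sketch of that pipeline, not the authors' implementation; the file names, the 4:1 subsampling factor and the quality setting are assumptions.
```python
from PIL import Image

def encode(path):
    ycbcr = Image.open(path).convert("YCbCr")
    y, cb, cr = ycbcr.split()
    quarter = (y.width // 4, y.height // 4)
    # Full-resolution intensity stream plus an aggressively subsampled
    # colour stream, stored as two ordinary JPEG payloads.
    y.save("luma.jpg", quality=75)
    small = Image.merge("YCbCr", (y.resize(quarter),
                                  cb.resize(quarter), cr.resize(quarter)))
    small.convert("RGB").save("colour.jpg", quality=75)

def decode():
    y = Image.open("luma.jpg").convert("L")
    small = Image.open("colour.jpg").convert("YCbCr").resize(y.size)
    _, cb, cr = small.split()
    # Fusion: the full-resolution luma carries the detail, the upsampled
    # chroma restores the colour.
    return Image.merge("YCbCr", (y, cb, cr)).convert("RGB")
```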

2.
张雅媛  孔令罔 《包装工程》2016,37(13):189-194
Objective: Drawing on the characteristics of human vision, to study a JPEG image compression algorithm based on an improved quantisation table (JPEG-HVS). Methods: A new quantisation table is generated from the luminance contrast sensitivity function (CSF) of the human eye and used in place of the luminance quantisation table recommended by the traditional JPEG standard; simulation experiments on several categories of images were carried out in Matlab 7.0. Compression-quality metrics were computed for each image category to compare the proposed algorithm with the traditional JPEG algorithm and the JPEG region method. Results: The compression ratio achieved by JPEG-HVS is on average 53.56% higher than that of JPEG and 18.75% higher than that of the JPEG region method. The peak signal-to-noise ratio (PSNR) of the three methods varies little, with JPEG highest and JPEG-HVS second; the mean structural similarity (MSSIM) ranks, from high to low, JPEG > JPEG-HVS > JPEG region method. JPEG-HVS needs noticeably less encoding and decoding time than JPEG, and subjective evaluation shows that images reconstructed after JPEG-HVS decompression retain good visual quality. Conclusion: While maintaining compression quality, the proposed JPEG-HVS algorithm achieves a higher compression ratio and faster encoding/decoding than the traditional JPEG algorithm and the JPEG region method, making it better suited to image storage and transmission.
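The core of the approach is replacing the standard luminance quantisation table with one derived from a CSF. The following is a minimal sketch using the Mannos-Sakrison CSF model and an assumed viewing geometry (`samples_per_degree`); the paper's exact formula and parameters may differ.
```python
import numpy as np

def csf(f):
    # Mannos-Sakrison contrast sensitivity model, f in cycles/degree.
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def hvs_quant_table(base_step=10.0, samples_per_degree=32.0):
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    f = np.sqrt(u**2 + v**2) * samples_per_degree / 16.0  # DCT index -> cpd
    w = csf(f) / csf(f).max()         # normalised sensitivity per coefficient
    q = np.clip(np.round(base_step / w), 1, 255)
    q[0, 0] = base_step               # keep the DC step at the base value
    return q.astype(np.uint8)
```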

3.
A novel approach for both lossless and lossy compression of monochrome images using Boolean minimization is proposed. The image is split into bit planes, and each bit plane is divided into windows or blocks of variable size. Each block is transformed into a Boolean switching function in cubical form, treating the pixel values as the output of the function. Compression is performed by minimizing these switching functions with ESPRESSO, a cube-based two-level function minimizer, and encoding the minimized cubes with a code set that satisfies the prefix property. Our lossless compression technique uses linear prediction as a preprocessing step and achieves a compression ratio comparable to that of the JPEG lossless compression technique. Our lossy compression technique reduces the number of bit planes as a preprocessing step, which incurs minimal loss of image information; the remaining bit planes are then compressed with our lossless Boolean-minimization technique. Qualitatively, the original and lossy images cannot be visually distinguished, and the mean square error is kept low. For mean-square-error values close to those of the JPEG lossy compression technique, our method gives a better compression ratio. The compression scheme is relatively slow, while the decompression time is comparable to that of JPEG.
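The preprocessing stages are easy to sketch; the ESPRESSO minimisation itself is an external tool and is omitted here. A minimal numpy sketch of the bit-plane split and the lossy plane-dropping step:
```python
import numpy as np

def bit_planes(img):
    """Split an 8-bit image into 8 binary planes, MSB first."""
    img = np.asarray(img, dtype=np.uint8)
    return [(img >> b) & 1 for b in range(7, -1, -1)]

def drop_planes(img, keep=6):
    """Lossy preprocessing: zero the least significant (8 - keep) planes."""
    mask = 0xFF ^ ((1 << (8 - keep)) - 1)
    return np.asarray(img, dtype=np.uint8) & mask
```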

4.
As one of the most popular digital image manipulations, contrast enhancement (CE) is frequently applied to improve the visual quality of forged images and conceal traces of forgery; it can therefore provide evidence of tampering when the authenticity of digital images is verified. Contrast-enhancement forensics has long drawn significant attention in the image forensics community, and although most approaches obtain effective detection results, existing CE forensic methods perform poorly when detecting enhanced images stored in the JPEG format; detecting contrast adjustments in the presence of JPEG post-processing remains a challenging task. In this paper, we propose a new CE forensic method based on a convolutional neural network (CNN) that is robust to JPEG compression. The proposed network relies on an Xception-based CNN with two preprocessing strategies. First, unlike conventional CNNs, which accept the original image as input, we feed the CNN the grey-level co-occurrence matrix (GLCM) of the image, which contains the CE fingerprints; a constrained convolutional layer then extracts high-frequency details of the GLCM that survive JPEG compression; finally, the output of the constrained convolutional layer becomes the input of Xception, which extracts multiple features for classification. Experimental results show that the proposed detector achieves the best performance for CE forensics under JPEG post-processing compared with existing methods.
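The first preprocessing stage, computing a GLCM to expose CE fingerprints, can be sketched directly; the offset and normalisation choices below are generic assumptions rather than the paper's settings.
```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=256):
    """Joint probability of grey levels co-occurring at offset (dx, dy)."""
    img = np.asarray(img, dtype=np.intp)
    h, w = img.shape
    src = img[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    dst = img[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    m = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(m, (src.ravel(), dst.ravel()), 1.0)  # accumulate pair counts
    return m / m.sum()
```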

5.
Advances in medical imaging systems such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and computed radiography (CR) produce huge volumes of images of the anatomical structures of the human body, and there is a need to compress these images losslessly for storage and communication. The major requirement is that the sequence of compression and decompression operations must not degrade the original quality of the image. In this article, we propose a lossless method for volumetric medical image compression and decompression using an adaptive block-based encoding technique. The algorithm is tested on different sets of colour CT images in Matlab. Digital Imaging and Communications in Medicine (DICOM) images are compressed with the proposed algorithm and stored as DICOM-formatted images; the inverse of the adaptive block-based algorithm reconstructs the original image information losslessly from the compressed DICOM files. We present simulation results for a large set of human colour CT images and a comparative analysis of the proposed methodology against block-based compression and the JPEG2000 lossless image compression technique. The results show that the proposed methodology gives a better compression ratio than block-based coding and is computationally cheaper than JPEG2000 coding. © 2013 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 23, 227–234, 2013

6.
The lossy nature of JPEG compression leaves traces that forensic agents use to identify local tampering in an image. In this paper, an anti-forensic method is proposed to remove the traces left by JPEG compression in both the spatial domain and the discrete cosine transform domain. A novel Least Cuckoo Search algorithm is devised for the proposed anti-forensic compression scheme, and a new fitness function called histogram deviation is formulated for the optimization algorithm. The proposed scheme is evaluated on uncompressed images from the UCID database and compared with existing methods using PSNR, MSE and classification accuracy as measures. The experiments yield promising results, namely an accuracy of 0.97, a PSNR of 44.34 dB and an MSE of 0.1789, which demonstrate the efficacy of the proposed method.
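As an illustration of the fitness term, a histogram-deviation measure of the general kind described can be sketched as an L1 distance between normalised histograms; the paper's exact formulation may differ.
```python
import numpy as np

def histogram_deviation(original, restored, bins=256):
    """Distance between the histograms of the original and the
    anti-forensically restored image (pixel-domain version)."""
    h1, _ = np.histogram(original, bins=bins, range=(0, 255), density=True)
    h2, _ = np.histogram(restored, bins=bins, range=(0, 255), density=True)
    return np.abs(h1 - h2).sum()
```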

7.
Image compression reduces the number of bits required to represent an image, which lowers storage space and transmission cost. Compression techniques are widely used in many applications, especially in the medical field, where hospitals and medical organizations hold large collections of medical image sequences; compressing large images into smaller files considerably reduces the memory they occupy as well as broadcast and transmission costs. Performance is assessed through the compression ratio (CR), mean square error (MSE), bits per pixel (BPP), peak signal-to-noise ratio (PSNR), the memory sizes of the input and compressed images, the minimum memory requirement, and the computational time; the pixels and other contents of the images should vary little during the compression process. This work outlines several compression methods, namely Huffman, fractal, neural network back propagation (NNBP) and neural network radial basis function (NNRBF), applied to medical images such as MR and CT images. Experimental results show that the NNRBF technique achieves a higher CR, BPP and PSNR, with a lower MSE, on CT and MR images than the Huffman, fractal and NNBP techniques.
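The evaluation measures named above have standard definitions, sketched here for reference; the paper's exact conventions (for example the PSNR peak value) are assumed.
```python
import numpy as np

def mse(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.mean((a - b) ** 2)

def psnr(a, b, peak=255.0):
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak**2 / m)

def bpp(compressed_bytes, height, width):
    return 8.0 * compressed_bytes / (height * width)

def compression_ratio(original_bytes, compressed_bytes):
    return original_bytes / compressed_bytes
```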

8.
We present a new image compression method for very-high-quality lossy compression. The method caters for image data in regimes of (a) detector imperfections, which motivate a robust approach based on the median transform, and (b) noise, which is explicitly estimated and separated out, since noise is inherently incompressible. An in-depth assessment is carried out on real data relative to the standard JPEG compression method: comparable visual quality is obtained at 260:1 compression with the new method versus 40:1 with JPEG. The assessment procedure, based on the astronomical images used, is an objective approach for determining very-high-quality visual reconstructions. © 1998 John Wiley & Sons, Inc. Int J Imaging Syst Technol, 9, 38–45, 1998
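A multiscale median transform of the Starck-Murtagh kind that such methods build on can be sketched as repeated median smoothing with stored detail layers; the scale schedule below is an assumption.
```python
import numpy as np
from scipy.ndimage import median_filter

def multiscale_median(img, n_scales=4):
    """Decompose an image into detail layers plus a smooth residual;
    sum(layers) + smooth reconstructs the input exactly."""
    img = np.asarray(img, dtype=np.float64)
    layers, smooth = [], img
    for j in range(n_scales):
        size = 2 * (2 ** j) + 1            # window sizes 3, 5, 9, 17, ...
        smoother = median_filter(smooth, size=size)
        layers.append(smooth - smoother)   # detail at this scale
        smooth = smoother
    return layers, smooth
```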

9.
Many classes of images contain spatial regions that are more important than others, and compression methods capable of delivering higher reconstruction quality for the important parts are attractive in this situation. In medical images, only a small portion of the image may be diagnostically useful, but the cost of a wrong interpretation is high; hence Region Based Coding (RBC) is significant for medical image compression and transmission, and lossless compression schemes with secure transmission play a key role in telemedicine applications that support accurate diagnosis and research. In this paper, we propose lossless scalable RBC for Digital Imaging and Communications in Medicine (DICOM) images based on the Integer Wavelet Transform (IWT), combined with a distortion-limited compression technique for the other regions of the image. The main objective of this work is to reject the noisy background and reconstruct the important image portions losslessly. The compressed image can be accessed and sent over a telemedicine network using a personal digital assistant (PDA) or similar mobile device.
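The reversible transform at the heart of such schemes can be illustrated with 5/3 integer lifting; this 1-D sketch (even-length signals, simple boundary extension) is an assumption about the filter choice, which the abstract does not fix.
```python
import numpy as np

def iwt53_forward(x):
    """Reversible 5/3 lifting on an even-length 1-D integer signal."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    right = np.append(even[1:], even[-1])      # boundary extension
    d = odd - (even + right) // 2              # predict step (high-pass)
    left = np.append(d[0], d[:-1])             # boundary extension
    s = even + (left + d + 2) // 4             # update step (low-pass)
    return s, d

def iwt53_inverse(s, d):
    left = np.append(d[0], d[:-1])
    even = s - (left + d + 2) // 4             # undo update
    right = np.append(even[1:], even[-1])
    odd = d + (even + right) // 2              # undo predict
    x = np.empty(len(s) + len(d), dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x
```
Reversibility can be checked directly: `x = np.arange(8); s, d = iwt53_forward(x); assert (iwt53_inverse(s, d) == x).all()`.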

10.
The basic goal of image compression through vector quantization (VQ) is to reduce the bit rate for transmission or data storage while maintaining acceptable fidelity or image quality. The advantage of VQ image compression is its fast decompression by table lookup. However, a codebook supplied in advance may not handle changing image statistics well, so online codebook generation becomes necessary. Competitive learning neural networks have been used for vector quantization, but their training time can be very long, and the number of output nodes is somewhat arbitrarily fixed before training starts. Our modified approach presents a fast codebook generation procedure that searches for an optimal number of output nodes evolutively. Results on two medical images show that this new approach reduces the training time considerably while maintaining good quality in the recovered images. © 1997 John Wiley & Sons, Inc. Int J Imaging Syst Technol, 8, 413–418, 1997
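An LBG-style sketch of codebook generation with codeword splitting conveys the flavour of growing the number of codewords during training, though the paper's evolutive search for the optimal output-node count differs in detail.
```python
import numpy as np

def train_codebook(blocks, target_size=64, iters=10):
    """blocks: (N, d) array of image vectors; target_size a power of two."""
    blocks = np.asarray(blocks, dtype=np.float64)
    book = blocks.mean(axis=0, keepdims=True)
    while len(book) < target_size:
        book = np.concatenate([book * 0.999, book * 1.001])  # split codewords
        for _ in range(iters):                               # Lloyd refinement
            dist = ((blocks[:, None, :] - book[None]) ** 2).sum(-1)
            idx = dist.argmin(axis=1)
            for k in range(len(book)):
                members = blocks[idx == k]
                if len(members):
                    book[k] = members.mean(axis=0)
    return book
```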

11.
Shahnaz R  Walkup JF  Krile TF 《Applied optics》1999,38(26):5560-5567
The performance of an image compression scheme is affected by the presence of noise, and the achievable compression may be reduced significantly. We investigated the effects of specific signal-dependent-noise (SDN) sources, such as film-grain and speckle noise, on image compression using the JPEG (Joint Photographic Experts Group) standard. To improve compression ratios, noisy images are preprocessed for noise suppression before compression is applied. Two approaches to noise suppression are employed: in one, an estimator designed specifically for the SDN model is used; in the other, the noise is first transformed into signal-independent noise (SIN) and an estimator designed for SIN is then applied. The performances of the two schemes are compared, and the compression results achieved for noiseless, noisy, and restored images are also presented.
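The second route, converting signal-dependent into signal-independent noise, can be illustrated for multiplicative speckle with a homomorphic (log) transform; the simple mean filter standing in for the SIN estimator is an assumption, not the paper's estimator.
```python
import numpy as np
from scipy.ndimage import uniform_filter

def despeckle_homomorphic(img, size=5):
    logged = np.log1p(np.asarray(img, dtype=np.float64))  # SDN -> ~SIN
    smoothed = uniform_filter(logged, size=size)          # SIN estimator
    return np.expm1(smoothed)                             # back to intensity
```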

12.
Face recognition on mobile devices, such as personal digital assistants and cell phones, is a big challenge owing to the limited computational resources available for running verification on the devices themselves. One approach is to transmit the captured face images over the cell-phone connection and run the verification at a remote station; however, limited communication bandwidth may make it necessary to transmit a compressed version of the image. We propose using the image compression standard JPEG2000, a wavelet-based compression engine, to compress the face images to low bit rates suitable for transmission over low-bandwidth communication channels. At the receiver end, the face images are reconstructed with a JPEG2000 decoder and fed into the verification engine. We explore how advanced correlation filters, such as the minimum average correlation energy filter [Appl. Opt. 26, 3633 (1987)] and its variants, perform on face images captured under different illumination conditions and encoded at different bit rates under the JPEG2000 wavelet-encoding standard. We evaluate the performance of these filters using illumination variations from Carnegie Mellon University's Pose, Illumination, and Expression (PIE) face database, and we also demonstrate the tolerance of these filters to noisy versions of images with illumination variations.
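Applying a correlation filter in the frequency domain, and scoring the output plane with the usual peak-to-sidelobe ratio, can be sketched as follows; designing the MACE filter itself requires a training set and is omitted, and the sidelobe-exclusion radius is an assumption.
```python
import numpy as np

def correlate(image, filter_freq):
    """Cross-correlate an image with a filter given in the Fourier domain;
    a sharp peak in the output plane indicates a match."""
    img_freq = np.fft.fft2(image)
    plane = np.fft.ifft2(img_freq * np.conj(filter_freq)).real
    return np.fft.fftshift(plane)

def peak_to_sidelobe(plane, exclude=5):
    peak = plane.max()
    py, px = np.unravel_index(plane.argmax(), plane.shape)
    mask = np.ones_like(plane, dtype=bool)
    mask[max(0, py - exclude):py + exclude + 1,
         max(0, px - exclude):px + exclude + 1] = False
    side = plane[mask]
    return (peak - side.mean()) / side.std()
```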

13.
Research on an implementation framework for the JPEG2000 standard in package printing
This paper introduces the advantages of JPEG2000, the latest still-image compression standard, and designs an implementation framework for applying an image compression system in the package-printing field, thereby providing a useful reference for image storage and transmission.

14.
李定川 《影像技术》2010,22(4):26-31
JPEG2000 is a new still-image compression standard created to meet the needs of continually evolving image compression applications. This paper explains the implementation of the JPEG2000 image coding system, describes the basic algorithms and key techniques it adopts, introduces the features and application scenarios of the new standard, and analyses its performance.

15.
《成像科学杂志》2013,61(3):320-333
Abstract

This paper proposes a new colour image retrieval scheme that uses a Z-scanning technique for content-based image retrieval (CBIR), a popular research topic in recent years. The scheme employs Z-scanning to extract directional intensity features for measuring the similarity between query and database images. In multi-channel images, each colour channel can be processed individually or the channels can be combined into a grey channel Y. To extract Z-scanning features, each channel of every image is divided into several N×N blocks, and in each block F pairs of pixels are scanned along a 'Z' direction to obtain texture features. From each colour channel, an M×M Z-scanning co-occurrence matrix (ZSCM) is obtained, which stores the probability of each relationship between all closest blocks. At the similarity-measure stage, the ZSCMs of the query image and the database images are compared. The experimental results show that the proposed scheme is beneficial for image retrieval when the images contain the same texture or object, and that it achieves better retrieval results and higher efficiency than the colour correlogram (CC) technique on colour texture images. Compared with another technique that uses the motif co-occurrence matrix (MCM) as the similarity feature, the proposed ZSCM also achieves better retrieval results and higher recall and precision values on public image databases.

16.
In BP neural network image compression, the high- and low-frequency components of an image are processed with differing quality, and edge effects arise. To address these problems, the JPEG baseline algorithm is introduced into the BP neural network image compression structure, and the combined system is established. Experiments on the grey-scale Lena image show that processing images with this new structure not only yields a larger compression ratio but also a better peak signal-to-noise ratio. The results demonstrate that this adaptive image processing method is not only feasible but can also reconstruct images efficiently and stably.
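The classical BP-network block autoencoder that such hybrid structures build on can be sketched as a 64-16-64 network trained on 8×8 blocks; the architecture, activation and learning rate are generic assumptions, and the JPEG baseline stage is omitted.
```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 64, 16                      # 64 pixels squeezed to 16 codes
w1 = rng.normal(0, 0.1, (n_in, n_hidden))
w2 = rng.normal(0, 0.1, (n_hidden, n_in))

def train_step(blocks, lr=0.01):
    """One batch of backpropagation on (N, 64) blocks scaled to [0, 1]."""
    global w1, w2
    h = np.tanh(blocks @ w1)                 # encoder: hidden code to transmit
    out = h @ w2                             # decoder: reconstructed block
    err = out - blocks
    g2 = h.T @ err / len(blocks)
    g1 = blocks.T @ ((err @ w2.T) * (1 - h**2)) / len(blocks)
    w1 -= lr * g1
    w2 -= lr * g2
    return np.mean(err**2)                   # reconstruction MSE this batch
```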

17.
This paper modifies the JPEG2000 algorithm: by changing the coding order, a large amount of computation is saved and the running speed is increased. The modified algorithm was tested on target images from the space solar telescope. The results show that, at the same compression ratio, the modification has no noticeable effect on image quality, and that the algorithm meets the compression requirements of the space solar telescope in both running time and compression quality.

18.
《成像科学杂志》2013,61(4):212-224
Abstract

The lossless compression of images is widely used in medical imaging and remote sensing applications, and progressive transmission of images is often desirable because it can reduce the bits transmitted for an image; combining lossless compression with progressive transmission has therefore been intensely researched. The bitplane method (BPM) is the simplest way to implement a lossless progressive image transmission system. In the present paper, a new block-based scheme for lossless progressive image transmission is proposed that reduces the transmission load and improves on the image quality of BPM. The method first performs a quantization operation on the blocks of an image; these blocks are then encoded with fewer bits, and the bits are transmitted phase by phase. The experimental results show that, under the same transmission load, the image quality of this method is better than that of BPM and of the improved BPM in related traditional works. Moreover, during the first phase, the peak signal-to-noise ratio of the present method exceeds that of BPM by as much as 8.85 dB. The method is therefore effective for lossless progressive image transmission.
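The baseline BPM is simple to sketch: bit planes are transmitted MSB first, and the receiver's reconstruction is refined after every phase.
```python
import numpy as np

def transmit_progressively(img):
    img = np.asarray(img, dtype=np.uint8)
    recon = np.zeros_like(img)
    for b in range(7, -1, -1):               # one phase per bit plane
        plane = (img >> b) & 1               # bits sent in this phase
        recon |= plane << b
        yield b, recon.copy()                # receiver's view after phase
```
Each yielded `recon` is what the receiver can display after that phase; the remaining loss is confined to the planes not yet sent.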

19.
《成像科学杂志》2013,61(2):219-231
Abstract

In this paper, a new fractal image compression algorithm is proposed in which the time of the encoding process is considerably reduced. The algorithm exploits a domain-pool reduction approach together with innovative predefined values for the contrast scaling factor S, instead of searching for it. Only domain blocks with entropy greater than a threshold are admitted to the domain pool. The algorithm has been tested on several well-known images and the results compared with state-of-the-art algorithms. The experiments show that the proposed algorithm has considerably lower encoding time than the other algorithms while giving approximately the same quality for the encoded images.
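The domain-pool reduction can be sketched directly: only blocks whose grey-level entropy exceeds a threshold are admitted. The block size and threshold below are illustrative assumptions.
```python
import numpy as np

def block_entropy(block, levels=256):
    hist, _ = np.histogram(block, bins=levels, range=(0, levels))
    p = hist[hist > 0] / hist.sum()
    return -(p * np.log2(p)).sum()           # grey-level entropy in bits

def domain_pool(img, size=16, threshold=4.0):
    """Return top-left coordinates of domain blocks kept in the pool."""
    img = np.asarray(img)
    pool = []
    for y in range(0, img.shape[0] - size + 1, size):
        for x in range(0, img.shape[1] - size + 1, size):
            if block_entropy(img[y:y + size, x:x + size]) > threshold:
                pool.append((y, x))
    return pool
```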

20.
Research on implementing the JPEG2000 standard in package printing
和克智  刘奇龙  赵鸿雁 《包装工程》2006,27(1):79-80,83
The latest image compression standard, JPEG2000, was implemented in Visual C++ to compress packaging and decoration printing images. Conclusions are drawn about the functions realised by the software and about the compression, storage and transmission of decoration images, which is of practical significance for the compressed storage and transmission of such images.
