Similar Documents
Found 20 similar documents (search time: 51 ms)
1.
In order to improve the JPEG compression resistance of current steganography algorithms that resist statistical detection, an adaptive steganography algorithm resisting both JPEG compression and detection, based on dither modulation, is proposed. Using an adaptive dither-modulation algorithm based on the quantization tables, embedding domains that survive JPEG compression are determined separately for spatial-domain images and JPEG images. An embedding cost function is then constructed by a cost-calculation algorithm based on side information. Finally, RS coding is combined with syndrome-trellis codes (STCs) to realize minimum-cost message embedding while improving the correct extraction rate of messages after JPEG compression. Experimental results demonstrate that the algorithm applies to both spatial-domain images and JPEG images. Compared with the current S-UNIWARD steganography, the message extraction error rate of the proposed algorithm after JPEG compression decreases from about 50% to nearly 0; compared with current steganography algorithms resisting JPEG compression and detection, the proposed algorithm not only possesses comparable resistance to JPEG compression but also stronger resistance to detection and higher computational efficiency.
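Dither modulation is a form of quantization index modulation (QIM). Below is a minimal sketch of binary QIM on a single DCT coefficient, assuming the quantization step is known; the paper's adaptive, quantization-table-driven domain selection, side-information cost function, and RS+STC coding are not reproduced here.

```python
import numpy as np

def qim_embed(coeff, bit, step):
    # Quantize onto the lattice selected by the bit:
    # dither 0 for a 0-bit, step/2 for a 1-bit.
    dither = 0.0 if bit == 0 else step / 2.0
    return np.round((coeff - dither) / step) * step + dither

def qim_extract(coeff, step):
    # Decide which dithered lattice the received coefficient is closer to.
    d0 = abs(coeff - qim_embed(coeff, 0, step))
    d1 = abs(coeff - qim_embed(coeff, 1, step))
    return 0 if d0 <= d1 else 1
```

Choosing `step` as a multiple of the JPEG quantization step of that coefficient is what lets the embedded lattice point survive requantization, which is the intuition behind tying the modulation to the quantization tables.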

2.

In this paper, we propose a new no-reference image quality assessment method for JPEG compressed images. In contrast to most existing approaches, the proposed method considers the compression process itself when assessing blocking effects in JPEG compressed images, which exhibit blocking artifacts at high compression ratios. Quantization of the discrete cosine transform (DCT) coefficients is the main mechanism by which the JPEG algorithm trades off image quality against compression ratio: as the compression ratio increases, DCT coefficients are reduced further by quantization, and this coarse quantization causes blocking effects in the compressed image. We propose to use the DCT coefficient values to score image quality in terms of blocking artifacts. An image may contain uniform and non-uniform blocks, associated with low- and high-frequency information respectively. Once an image is compressed using JPEG, inherently non-uniform blocks may become uniform due to quantization, whilst inherently uniform blocks stay uniform. In the proposed method, inherently non-uniform blocks are first distinguished from inherently uniform blocks using a sharpness map. If the DCT coefficients of an inherently non-uniform block are not significant, the original block was coarsely quantized; hence, the DCT coefficients of the inherently non-uniform blocks are used to assess image quality. Experimental results on various image databases show that the proposed blockiness metric correlates well with subjective scores and outperforms existing metrics.
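A toy rendition of the idea, with a gradient-based sharpness map standing in for the paper's sharpness map and purely illustrative thresholds (`sharp_thresh`, `ac_thresh`):

```python
import numpy as np
from scipy.fftpack import dct

def block_dct2(block):
    # 2-D type-II DCT with orthonormal scaling, as in JPEG.
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def blockiness_score(gray, sharp_thresh=8.0, ac_thresh=4.0):
    """Fraction of blocks that 'should' contain detail (their 24x24
    neighbourhood is sharp) yet have almost no AC energy left --
    evidence that coarse quantization flattened them."""
    gray = gray.astype(float)
    gy, gx = np.gradient(gray)
    sharp = np.hypot(gx, gy)
    h, w = gray.shape
    flagged = total = 0
    for y in range(8, h - 15, 8):
        for x in range(8, w - 15, 8):
            if sharp[y-8:y+16, x-8:x+16].mean() > sharp_thresh:
                total += 1
                c = block_dct2(gray[y:y+8, x:x+8])
                if np.abs(c).sum() - abs(c[0, 0]) < ac_thresh:
                    flagged += 1
    return flagged / max(total, 1)   # higher score -> more blocky
```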


3.
This paper describes a new algorithm for adaptive selection of DCT quantization parameters in a JPEG compressor. The quantization parameters are selected by classifying blocks according to the composition of fine details whose contrast exceeds the threshold of visual sensitivity. Fine details are identified by an original search-and-recognition algorithm in the N-CIELAB normalized color space, which allows visual contrast sensitivity to be taken into account. A distortion assessment metric and an optimization criterion for quantizing classified blocks to high visual quality are proposed, and a comparative analysis of test images in terms of compression parameters and quality degradation is presented. The new algorithm is experimentally shown to improve the compression of photorealistic images by 30% on average while preserving high visual quality.
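An illustrative sketch of classification-driven quantization in this spirit; the contrast test below is a crude stand-in for the paper's N-CIELAB fine-detail search-and-recognition algorithm, and the scale factors are invented for illustration:

```python
import numpy as np

def classify_block(block, contrast_thresh=10.0):
    # Crude fine-detail test: a block counts as 'detailed' when its
    # local contrast exceeds a visibility threshold (the paper uses an
    # N-CIELAB search/recognition algorithm, not this heuristic).
    return (block.max() - block.min()) > contrast_thresh

def block_qtable(block, q_base, fine_scale=0.5, coarse_scale=1.5):
    # Detailed blocks get finer quantization to preserve visible
    # detail; smooth blocks tolerate coarser quantization.
    scale = fine_scale if classify_block(block) else coarse_scale
    return np.clip(np.round(q_base * scale), 1, 255)
```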

4.
The double compression introduced by tampering with a JPEG image changes the original compression characteristics of the tampered region, so image tampering can be detected from the inconsistency of compression characteristics across regions. Building on this principle, a JPEG image tampering detection algorithm based on quantization noise is proposed. The algorithm partitions the image under inspection into blocks, computes the quantization noise of each block, and evaluates the probabilities that a block's quantization noise follows a uniform distribution or a Gaussian distribution, thereby detecting the tampered, doubly compressed regions. Experimental results show that the algorithm effectively detects double-compression tampering in JPEG images and can localize the tampered regions.
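A minimal sketch of a per-block quantization-noise statistic, assuming a candidate quantization table is available; the paper's exact probability computation is not reproduced:

```python
import numpy as np
from scipy.fftpack import dct

def quantization_noise(block, qtable):
    """Rounding residual of a block's DCT coefficients against a
    candidate quantization table: near-zero (Gaussian-like) residuals
    suggest the block was already quantized with this table, while
    residuals uniform on [-0.5, 0.5) suggest it was not."""
    c = dct(dct(block.astype(float) - 128.0, axis=0, norm='ortho'),
            axis=1, norm='ortho')
    r = c / qtable
    return r - np.round(r)

def uniformity_statistic(residuals):
    # A U(-0.5, 0.5) variable has variance 1/12 ~= 0.0833; a much
    # smaller variance indicates the 'already quantized' case. The
    # decision threshold between the two cases is a tuning choice.
    return residuals.var()
```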

5.
An Efficient Image Compression Algorithm for Low-Compression-Ratio Environments (total citations: 1; self-citations: 1; others: 0)
JPEG, the international standard for still-image compression, is already in wide use, but as a general-purpose algorithm its implementation is relatively complex and it is unsuitable in some settings. Based on an in-depth analysis of the JPEG compression principle, this paper improves the quantization and coding modules of the JPEG algorithm and proposes a simple compression algorithm based on the discrete cosine transform (DCT). At low compression ratios (below 15) the algorithm compresses very well, yet it is much simpler to implement than JPEG.

6.
A Robust Digital Watermarking Algorithm for Binary Text Images and Its Simulation (total citations: 2; self-citations: 0; others: 2)
Considering that binary text images are rich in texture, offer little hiding capacity, and take only binary values, a new text watermarking algorithm is proposed. The algorithm is based on the DWT and organically combines the wavelet transform, quantization, and encryption; watermark extraction does not require the original binary text image, making it a blind watermarking algorithm. Watermarked binary text images were subjected to attacks common in network transmission, including added Gaussian noise, JPEG compression, and geometric cropping; the experiments show that the algorithm achieves good robustness and imperceptibility.

7.
段新涛  彭涛  李飞飞  王婧娟 《计算机应用》2015,35(11):3198-3202
The double quantization effect in JPEG images provides an important clue for detecting tampering. When a JPEG image is locally tampered with and then resaved in JPEG format, the discrete cosine transform (DCT) coefficients of the untampered (background) region undergo double JPEG compression, whereas the DCT coefficients of the tampered region undergo only a single JPEG compression. Since the alternating-current (AC) DCT coefficients of a JPEG image follow a Laplacian distribution described by a suitable parameter, a recompression probability model for JPEG images is proposed to describe the change in the statistical properties of DCT coefficients before and after recompression. Following the Bayesian criterion, the posterior probability is used to express feature values for blocks exhibiting the double-compression effect and for blocks that underwent only a single compression. A threshold is then set, and classification against this threshold achieves automatic detection and extraction of the tampered regions. Experimental results show that the method detects and extracts tampered regions quickly and accurately, and when the second compression factor is smaller than the first, the detection results improve markedly over blind tampering detection algorithms based on inconsistencies in JPEG blocking artifacts or on JPEG quantization tables.
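In symbols, using the standard formulation from the double-quantization literature (the paper's exact parameterization may differ): each AC coefficient x is modeled as Laplacian, and the Bayesian posterior serves as the per-block feature value, with blocks exceeding a threshold classified as singly compressed, i.e. tampered:

```latex
p(x \mid \lambda) = \frac{\lambda}{2}\, e^{-\lambda \lvert x \rvert},
\qquad
P(\mathrm{single} \mid \mathbf{x}) =
  \frac{p(\mathbf{x} \mid \mathrm{single})\, P(\mathrm{single})}
       {p(\mathbf{x} \mid \mathrm{single})\, P(\mathrm{single}) +
        p(\mathbf{x} \mid \mathrm{double})\, P(\mathrm{double})}
```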

8.
In the JPEG standard, the discrete cosine transform (DCT) at the heart of the lossy compression algorithm is used in many image compression settings; in practice it achieves high compression ratios while keeping the compressed image visually almost identical to the original, and it has therefore been widely adopted. To improve image quality, this paper proposes an improved image compression algorithm based on the two-dimensional DCT, which controls the size of the compressed image array by setting a quantization coefficient and uses a fast DCT algorithm in the compression stage. Simulation results show that the algorithm further improves the peak signal-to-noise ratio (PSNR) and the subjective visual quality of the image.
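A minimal sketch of the quantization stage being described, using the standard JPEG luminance table with a scale factor in the role of the abstract's quantization coefficient (the fast DCT variant is not reproduced; SciPy's DCT is used instead):

```python
import numpy as np
from scipy.fftpack import dct, idct

# JPEG luminance quantization table (Annex K of the JPEG standard).
Q50 = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]], dtype=float)

def compress_block(block, scale=1.0):
    # 'scale' plays the role of the quantization coefficient: larger
    # values mean coarser quantization and a smaller coded array.
    q = np.clip(np.round(Q50 * scale), 1, 255)
    c = dct(dct(block - 128.0, axis=0, norm='ortho'), axis=1, norm='ortho')
    return np.round(c / q), q

def decompress_block(coeffs, q):
    rec = idct(idct(coeffs * q, axis=0, norm='ortho'),
               axis=1, norm='ortho') + 128.0
    return np.clip(rec, 0, 255)

def psnr(a, b):
    # Peak signal-to-noise ratio for 8-bit images.
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```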

9.
A High-Fidelity Compression Algorithm Controlling Image Gray-Level Distortion (total citations: 1; self-citations: 0; others: 1)
To achieve high-fidelity compression of remote sensing images, three improvements are proposed on the basis of the near-lossless compression idea of JPEG-LS, and a visually lossless compression algorithm, LIGE (a high-fidelity compression algorithm controlling image gray-level distortion), is designed and implemented that achieves a higher compression ratio and better reconstruction quality than JPEG-LS. Experimental results show that the algorithm can bound the maximum gray-level error of the image while also controlling the peak signal-to-noise ratio of the reconstructed image, thus effectively controlling image distortion; at a compression ratio of 4 it outperforms the SPIHT algorithm, based on the wavelet transform and embedded zerotree coding, in both processing speed and reconstruction quality. These results provide strong technical support for developing China's future high-resolution satellites, small-satellite communication systems, and satellite-air-ground information networks.
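LIGE builds on JPEG-LS near-lossless coding; below is the standard JPEG-LS residual quantizer that bounds the per-pixel reconstruction error by ±delta (the paper's three improvements are not reproduced):

```python
import numpy as np

def near_lossless_quantize(residual, delta):
    # JPEG-LS-style near-lossless quantization of prediction residuals:
    # guarantees |residual - dequantized| <= delta for every pixel.
    return np.sign(residual) * ((np.abs(residual) + delta) // (2 * delta + 1))

def near_lossless_dequantize(q, delta):
    # Reconstruction maps each index back to the center of its bin.
    return q * (2 * delta + 1)
```

With delta = 0 this reduces to exact lossless coding; increasing delta trades a bounded gray-level error for a higher compression ratio, which is the distortion-control knob the abstract describes.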

10.
This paper presents a novel technique for discovering traces of double JPEG compression. Existing detectors operate only in the scenario where the image under investigation is explicitly available in JPEG format; consequently, when the quantization information of the JPEG file is unknown, their performance degrades dramatically. Our method addresses both forensic scenarios, resulting in a fresh perceptual detection pipeline. We suggest a dimensionality reduction algorithm to visualize the behavior of a large database of various single- and double-compressed images. Based on intuitions from this visualization, three learning strategies are proposed: bottom-up, top-down, and combined top-down/bottom-up. Our tool discriminates single-compressed images from their double-compressed counterparts, estimates the first quantization in double compression, and localizes tampered regions in forgery examinations. Extensive experiments on three databases demonstrate that the results are robust across different quality levels, with an F1-measure improvement of up to 26.32% over the best state-of-the-art approach. An implementation of the algorithms is available from the authors upon request.

11.
To further improve the coding efficiency and reconstructed image quality of the JPEG algorithm, a new transform, the all-phase cosine biorthogonal transform (APCBOT), is proposed to replace the traditional DCT. The new transform derives from the convolution algorithm of discrete cosine sequency filtering (DCSF); while transforming the original image into the frequency domain, it attenuates the high-frequency components accordingly, thereby simplifying the quantization step that follows the transform. Matlab simulations show that the proposed transform has clear advantages over the DCT: not only is quantization simpler (no quantization, or uniform quantization with a single parameter), but both the compression ratio of the JPEG algorithm and the reconstructed image quality are considerably improved.

12.
Image coding and transmission in resource-constrained wireless multimedia sensor networks (WMSNs) require a coding scheme that balances energy consumption, compression ratio, and image quality. This paper models and analyzes the energy consumption of image coding algorithms based on the discrete wavelet transform, and proposes an energy-efficient JPEG 2000 image coding algorithm for WMSNs: subject to network conditions and image-quality constraints, a lookup table is used to select appropriate quantization levels and wavelet decomposition levels so as to reduce energy consumption. A semi-reliable scheme is adopted for image transmission, in which each node decides to forward or drop packets according to its residual energy and the data priority. Simulation results show that the proposed method effectively reduces the computation and communication energy consumption of wireless sensor nodes while guaranteeing the required image quality.

13.
Targeting spliced (composite) JPEG image forgery, a blind detection algorithm for composite images based on quantization distortion is proposed. First, the original quantization matrix is estimated separately for composites stored in JPEG and non-JPEG formats; then the composite image is recompressed with the estimated original quantization matrix, and the quantization distortion between the images before and after recompression is computed; finally, by comparing the magnitude of the quantization distortion across different regions of the composite image, tampered regions are detected and localized automatically. Experimental results show that the algorithm effectively detects composite images stored in both JPEG and non-JPEG formats.
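A sketch of the distortion measurement step, assuming the original quantization matrix has already been estimated (the estimation step itself is not reproduced):

```python
import numpy as np
from scipy.fftpack import dct, idct

def block_quant_distortion(block, qtable):
    """Distortion introduced by requantizing one 8x8 block with a
    candidate quantization table: blocks originally compressed with
    this table requantize almost losslessly, so regions with high
    distortion stand out as likely splices."""
    c = dct(dct(block - 128.0, axis=0, norm='ortho'), axis=1, norm='ortho')
    c_req = np.round(c / qtable) * qtable
    rec = idct(idct(c_req, axis=0, norm='ortho'), axis=1, norm='ortho') + 128.0
    return np.mean((rec - block) ** 2)

def distortion_map(gray, qtable):
    # Per-block quantization distortion over the whole image.
    h, w = gray.shape
    m = np.zeros((h // 8, w // 8))
    for i in range(h // 8):
        for j in range(w // 8):
            blk = gray[8*i:8*i+8, 8*j:8*j+8].astype(float)
            m[i, j] = block_quant_distortion(blk, qtable)
    return m
```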

14.
A Semi-Fragile Watermarking Algorithm Resistant to JPEG Compression (total citations: 1; self-citations: 0; others: 1)
Based on the characteristics of the JPEG compression process, a semi-fragile blind watermarking algorithm that can resist JPEG compression is proposed, together with a theoretical justification. The algorithm exploits the magnitude relationships between the DCT coefficients of non-overlapping image blocks and selects image features as the watermark information to embed. Theoretical analysis and experimental results show that the watermark resists JPEG compression well and provides good tamper-localization capability against malicious attacks such as cropping and content tampering.
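A sketch of one common semi-fragile construction in this spirit, encoding a bit in the order relation of a mid-frequency DCT coefficient pair; the positions `P1`, `P2` and the robustness margin are illustrative choices, not necessarily the paper's exact rule:

```python
import numpy as np
from scipy.fftpack import dct, idct

# Two mid-frequency positions whose relative order tends to survive
# JPEG requantization; the exact positions are an illustrative choice.
P1, P2 = (2, 1), (1, 2)

def embed_bit(block, bit, margin=4.0):
    # bit == 1 is encoded as c[P1] > c[P2]; enforcing a gap of
    # 'margin' makes the ordering survive moderate requantization.
    c = dct(dct(block - 128.0, axis=0, norm='ortho'), axis=1, norm='ortho')
    hi, lo = (P1, P2) if bit == 1 else (P2, P1)
    if c[hi] - c[lo] < margin:
        mid = (c[hi] + c[lo]) / 2.0
        c[hi], c[lo] = mid + margin / 2.0, mid - margin / 2.0
    return idct(idct(c, axis=0, norm='ortho'), axis=1, norm='ortho') + 128.0

def extract_bit(block):
    # Blind extraction: only the coefficient order is needed.
    c = dct(dct(block - 128.0, axis=0, norm='ortho'), axis=1, norm='ortho')
    return 1 if c[P1] >= c[P2] else 0
```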

15.
Recently, medical image compression has become essential for effectively handling large amounts of medical data for storage and communication. Vector quantization (VQ) is a popular image compression technique, and the commonly used VQ model is Linde-Buzo-Gray (LBG), which constructs a locally optimal codebook to compress images. Codebook construction is treated as an optimization problem solved by a bio-inspired algorithm. This article proposes a VQ codebook construction approach called the L2-LBG method, utilizing the Lion optimization algorithm (LOA) and the Lempel-Ziv-Markov chain algorithm (LZMA). Once LOA has constructed the codebook, LZMA is applied to compress the index table and further increase the compression performance. A set of experiments was carried out on benchmark medical images, with a comparative analysis against Cuckoo Search-based LBG (CS-LBG), Firefly-based LBG (FF-LBG), and JPEG2000. The compression efficiency of the presented model was validated in terms of compression ratio (CR), compression factor (CF), bit rate, and peak signal-to-noise ratio (PSNR). The proposed L2-LBG method obtained a higher CR of 0.3425375 and a PSNR of 52.62459 compared with the CS-LBG, FF-LBG, and JPEG2000 methods. The experimental values reveal that the L2-LBG process yields effective compression performance with a better-quality reconstructed image.
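For reference, the LBG core that the bio-inspired search replaces: a plain generalized Lloyd iteration over training vectors (the LOA and LZMA stages are not reproduced):

```python
import numpy as np

def lbg_codebook(vectors, k, iters=20, eps=1e-6, seed=0):
    """Plain LBG / generalized Lloyd iteration: assign each training
    vector to its nearest codeword, then move each codeword to the
    centroid of its cell, until the codebook stops changing."""
    rng = np.random.default_rng(seed)
    code = vectors[rng.choice(len(vectors), k, replace=False)].astype(float)
    for _ in range(iters):
        # Nearest-codeword assignment (squared Euclidean distance).
        d = ((vectors[:, None, :] - code[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        new = code.copy()
        for j in range(k):
            members = vectors[labels == j]
            if len(members):
                new[j] = members.mean(axis=0)
        if np.abs(new - code).max() < eps:
            break
        code = new
    return code
```

LBG converges only to a local optimum, which is exactly why the paper swaps this inner search for a Lion-optimization step.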

16.
黄胜  杜呈尘  翦伟 《计算机应用》2015,35(11):3288-3292
To address the failure of dead-zone quantization in image compression to preserve image edge information effectively, a maxima-mapping quantization algorithm for the low-frequency subband is proposed. Within the multi-level subbands formed after the wavelet transform of an image, the importance of each low-frequency-subband coefficient is first determined from the mean of the high-frequency-subband coefficients, across all levels, that map to that coefficient. During quantization, the high-frequency coefficients are quantized with the dead-zone quantization step of JPEG 2000, while each low-frequency coefficient updates its quantization step automatically according to its own importance, effectively preserving the image's edge information. The proposed algorithm selects low-frequency coefficients adaptively when updating the quantization step and, compared with the traditional JPEG 2000 algorithm, speeds up Tier-1 coding in the embedded block coding with optimized truncation (EBCOT) stage. Experimental results confirm the algorithm's advantage in preserving edge information, with a peak signal-to-noise ratio gain of about 0.2 dB over traditional dead-zone quantization.

17.
A compression artifact-reduction algorithm based on support vector regression is proposed. The algorithm belongs to a broad family of reconstruction methods, but the reconstruction model is learned from a set of training samples consisting of original images and their corresponding noise-corrupted versions. As opposed to artifact-reduction methods specific to each type of compression artifact (e.g., blocking, ringing, etc.), we treat all such artifacts as manifestations of the same underlying problem: the quantization of DCT coefficients. In the testing step, the algorithm attempts to undo the effect of quantization by using the relationship between the original and artifact-corrupted images determined during the training step. Experimental results exhibit significant reduction in all types of compression artifacts.

18.
In this paper, a Human Visual System (HVS)-based adaptive quantization scheme is proposed. The proposed algorithm supports both perceptually lossless and lossy compression. It uses a transform-based compression approach built on the wavelet transform and incorporates vision models for compressing both the luminance and chrominance components. The major strengths of the coder are the vision model incorporated for the chrominance components and the optimal way in which scales are distributed among the luminance and chrominance components to achieve higher compression ratios. The perceptual model developed for the color components allows more compression of the color components without causing color degradation. For each image, the visual thresholds are evaluated and an optimal bit allocation is performed such that the quantization error is always less than the visual distortion at the given rate. To validate the strength of the proposed algorithm, the perceptual quality of images reconstructed with the proposed coder is compared with that of images reconstructed with the JPEG2000 standard coder at the same compression ratio. Perceptual quality is evaluated with recent metrics such as the Structural Similarity Index, Visual Information Fidelity, and Visual Signal-to-Noise Ratio. The results reveal that the proposed structure gives excellent improvement in perceptual quality compared with existing schemes, for both lossy and perceptually lossless compression. These advantages make the proposed algorithm a good candidate for replacing the quantizer stage of current image compression standards.

19.
The authors solve the problem of detecting local artificial changes (falsifications) that carry JPEG compression properties [1]. Known methods for detecting such changes [2–4] describe only the properties that distinguish JPEG-compressed images from uncompressed ones. The authors also develop an algorithm for detecting local embeddings with compression properties in images and for determining the shifts of embedded JPEG blocks relative to embedding coordinates that are multiples of eight. A relationship is found between the period of the peaks in the spectrum of the histogram of DCT coefficients and the quality factor of the JPEG compression algorithm. The paper presents numerical results on the quality of true and false embedding detections for the developed algorithm.
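A minimal sketch of the spectral-periodicity measurement the abstract relies on: quantization leaves a comb of peaks in the DCT-coefficient histogram, which appears as a dominant peak in the histogram's magnitude spectrum whose period can be read off (the histogram range and binning below are illustrative):

```python
import numpy as np

def histogram_peak_period(dct_coeffs, hist_range=128):
    """Estimate the dominant period of the comb-like peaks in the
    histogram of dequantized DCT coefficients via the magnitude
    spectrum of the histogram; for a JPEG-compressed source this
    period relates to the quantization step and hence to the
    quality factor."""
    edges = np.arange(-hist_range, hist_range + 1)
    hist, _ = np.histogram(dct_coeffs, bins=edges)
    spec = np.abs(np.fft.rfft(hist - hist.mean()))
    freqs = np.fft.rfftfreq(len(hist), d=1.0)  # cycles per histogram bin
    k = 1 + spec[1:].argmax()                  # skip the DC term
    return 1.0 / freqs[k]                      # period in coefficient units
```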

20.
Objective: Traditional steganography struggles to protect the integrity of secret messages on real social-network channels, where images are usually transmitted through lossy compression channels and covert communication consequently fails. To keep stego images robust after transmission through a compression channel, designing secure and robust covert-communication techniques has practical value. Based on minimizing image information loss, this paper proposes a robust JPEG steganography that combines a lossless carrier with a robustness cost. Method: First, it is shown that constructing a lossless carrier effectively balances steganographic security and robustness: the spatial-domain pixel blocks of the JPEG image before and after the compression channel are differenced to construct the lossless carrier, which determines the robust embedding domain. Second, ±1 operations are applied to the discrete cosine transform (DCT) coefficients and the spatial-domain information loss before and after compressed transmission is computed, yielding a robustness cost that measures each DCT coefficient's resistance to compression; it is verified that this cost better discriminates the robustness of DCT coefficients under low-quality-factor compression channels. Finally, syndrome-trellis codes (STC) are used, together with the lossless carrier and the robustness cost, to embed the secret message. Results: In comparative experiments on the BossBase1.01 image database, relative to traditional JPEG steganography, using the lossless carrier as the embedding domain lowers the average message extraction error rate by 24.97% and raises the rate of correctly extracted images by 21.35%; on top of this, the robustness cost lowers the average extraction error rate by a further 1.05% and raises the correct extraction rate by a further 16.12%, confirming a significant improvement in compression resistance. Compared with three representative existing methods, J-UNIWARD (JPEG universal wavelet relative distortion), JCRISBE (JPEG compression resistant solution with BCH code), and AutoEncoder (autoencoder and adaptive BCH encoding), the proposed method lowers the average extraction error rate by 95.78%, 93.17%, and 87.38% respectively, and its correct image extraction rate is 86.69, 30.74, and 4.13 times theirs. The visual quality approaches that of traditional steganography while good resistance to detection is maintained. Conclusion: The proposed steganography, robust against low-quality-factor JPEG compression, produces intermediate images that retain strong compression resistance and detection resistance after passing through the compression channel, while maintaining high image quality.
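A sketch of the lossless-carrier construction step only, assuming Pillow's JPEG codec at an arbitrary quality factor stands in for the real social-network channel; the robustness cost and the STC embedding are not reproduced:

```python
import io
import numpy as np
from PIL import Image

def lossless_carrier_mask(gray_u8, quality=71, block=8):
    """Mark the 8x8 pixel blocks that pass through a simulated JPEG
    channel unchanged (block difference == 0); embedding only into
    such blocks is the 'lossless carrier' idea. 'gray_u8' is a uint8
    grayscale array; 'quality' is an illustrative channel setting."""
    buf = io.BytesIO()
    Image.fromarray(gray_u8).save(buf, format='JPEG', quality=quality)
    buf.seek(0)
    channel_out = np.asarray(Image.open(buf).convert('L'))
    h, w = gray_u8.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for i in range(h // block):
        for j in range(w // block):
            sl = np.s_[block*i:block*(i+1), block*j:block*(j+1)]
            mask[i, j] = np.array_equal(gray_u8[sl], channel_out[sl])
    return mask
```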
