Similar Documents
20 similar records found.
1.
In this article, an efficient image coding scheme that takes advantage of feature vectors in the wavelet domain is proposed. First, a multi-stage discrete wavelet transform is applied to the image. Then, wavelet feature vectors are extracted from the wavelet-decomposed subimages by collecting the corresponding wavelet coefficients. Finally, the image is coded into a bit-stream by applying vector quantization (VQ) to the extracted wavelet feature vectors. In the encoder, the wavelet feature vectors are encoded with a codebook whose codeword dimension is less than that of the wavelet feature vectors, which greatly improves the efficiency of the coding system. To fully reconstruct the image, however, the indexes received at the decoder are decoded with a codebook whose codeword dimension equals that of the wavelet feature vectors, so the quality of the reconstructed images is well preserved. The proposed scheme achieves good compression efficiency in three ways: (1) it uses the correlation among wavelet coefficients; (2) it places different emphasis on wavelet coefficients at different decomposition levels; and (3) it preserves the most important information of the image by coding the lowest-pass subimage individually. Simulation results show that the proposed scheme outperforms recent VQ-based image coding schemes and wavelet-based image coding techniques, and it is also suitable for very low bit rate image coding. © 2005 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 15, 123–130, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20045
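A rough sketch of this pipeline is given below: the image is decomposed with PyWavelets, the detail subbands are turned into block vectors, and the vectors are quantized against a k-means codebook whose search uses only the first d dimensions while reconstruction uses all of them. The grouping of coefficients, the codebook size, and the reduced dimension d are illustrative stand-ins, not the paper's actual construction.

```python
# Hypothetical sketch: wavelet feature vectors quantized with a codebook whose
# nearest-neighbor search uses fewer dimensions than reconstruction does.
import numpy as np
import pywt
from sklearn.cluster import KMeans

def wavelet_feature_vectors(image, wavelet="haar", level=2, block=4):
    """Decompose the image and turn each detail subband into block vectors."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    vectors = []
    for detail in coeffs[1:]:                      # skip the lowest-pass subimage
        for band in detail:                        # (cH, cV, cD) at this level
            h, w = band.shape
            h, w = h - h % block, w - w % block    # crop to a whole number of blocks
            tiles = band[:h, :w].reshape(h // block, block, w // block, block)
            vectors.append(tiles.transpose(0, 2, 1, 3).reshape(-1, block * block))
    return np.vstack(vectors)

rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))
feats = wavelet_feature_vectors(image)             # full-dimension feature vectors

codebook = KMeans(n_clusters=32, n_init=5, random_state=0).fit(feats).cluster_centers_

d = 8                                              # encoder searches on a reduced dimension
dists = ((feats[:, None, :d] - codebook[None, :, :d]) ** 2).sum(-1)
indices = dists.argmin(axis=1)                     # the transmitted index stream

reconstructed = codebook[indices]                  # decoder uses full-dimension codewords
```

The asymmetry is visible in the last two lines: the index search touches only the first d components of each codeword, while reconstruction uses all of them.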

2.
Wavelet transform coding (WTC) with vector quantization (VQ) has been shown to be efficient for image compression. An adaptive vector quantization coding scheme with the Gold-Washing dynamic codebook-refining mechanism in the wavelet domain, called symmetric wavelet transform-based adaptive vector quantization (SWT-GW-AVQ), is proposed for still-image coding in this article. The experimental results show that the GW codebook-refining mechanism working in the wavelet domain rather than the spatial domain is very efficient, and that the SWT-GW-AVQ coding scheme improves the peak signal-to-noise ratio (PSNR) of the reconstructed images at a lower encoding time. © 2002 Wiley Periodicals, Inc. Int J Imaging Syst Technol 12, 166–174, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.10024

3.
No-reference image quality assessment has developed rapidly in recent years, but no-reference algorithms for assessing the quality of foggy images have rarely been reported. This paper proposes a codebook-based no-reference quality assessment algorithm for foggy images, with the goal of producing quality scores consistent with human subjective perception. Features that reflect foggy-image quality are identified and used to construct a codebook; the codebook then encodes the training images into feature vectors, and these vectors are regressed against the subjective scores of the training images to obtain a quality assessment model for foggy images. The method was tested on a simulated foggy-image database; both the Pearson linear correlation coefficient and the Spearman rank-order correlation coefficient exceed 0.99. Compared with the classical no-reference algorithms NIQE and CORNIA, the proposed method performs better and predicts human subjective perception of foggy images well.
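A minimal sketch of this codebook-and-regression pipeline, with random stand-ins for the fog-sensitive features and subjective scores: patch features are clustered into a codebook, each image becomes a histogram of codeword assignments, and a regressor maps histograms to quality scores. The feature extraction and the regressor choice (SVR) are assumptions, not the paper's exact design.

```python
# Hypothetical sketch of codebook-based no-reference quality assessment.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(1)
train_feats = [rng.standard_normal((200, 6)) for _ in range(20)]  # per-image patch features (stand-in)
mos = rng.uniform(1, 5, size=20)                                  # subjective scores (stand-in)

# build the codebook from all training patch features
codebook = KMeans(n_clusters=16, n_init=5, random_state=0).fit(np.vstack(train_feats))

def encode(feats):
    """Represent an image as the normalized histogram of its codeword labels."""
    labels = codebook.predict(feats)
    return np.bincount(labels, minlength=16) / len(labels)

# regress the encoded training images against their subjective scores
X = np.array([encode(f) for f in train_feats])
model = SVR(kernel="rbf").fit(X, mos)
print(model.predict(encode(train_feats[0])[None, :]))             # predicted quality score
```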

4.
Domingo F, Saloma C. Applied Optics, 1999, 38(17): 3735–3744
We demonstrate an image-compression technique that uses what we believe is a new noniterative codebook generation algorithm for vector quantization. The technique supports rapid decompression and is equally applicable to individual images or to a set of images without the need for interframe processing. Compression with a single-image codebook is tested on (1) ten confocal images of the hindbrain of a mouse embryo, (2) video images of a polystyrene microsphere that is manipulated by a focused laser light, and (3) five fluorescence images of the embryo eye lens taken at different magnifications. The reconstructions are assessed with the normalized mean-squared error and with Linfoot's criteria of fidelity, structural content, and correlation quality. Experimental results with single-image compression show that the technique produces fewer local artifacts than JPEG compression, especially with noisy images. Results with video and confocal image series indicate that single-image codebook generation is sufficient at practical compression ratios for producing acceptable reconstructions for mouse embryo analysis and for viewing optically trapped microspheres. Experiments with the magnified images also reveal that the compression scheme is robust to scaling.
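For reference, the quoted fidelity measures can be computed directly; the sketch below uses one common discrete form of Linfoot's criteria (exact definitions vary slightly across sources), so treat it as illustrative rather than as the authors' implementation.

```python
# Normalized MSE and one common discrete form of Linfoot's criteria.
import numpy as np

def nmse(f, g):
    """Normalized mean-squared error between original f and reconstruction g."""
    return np.sum((f - g) ** 2) / np.sum(f ** 2)

def linfoot(f, g):
    """Return (fidelity, structural content, correlation quality)."""
    fidelity = 1.0 - np.sum((f - g) ** 2) / np.sum(f ** 2)
    structural_content = np.sum(g ** 2) / np.sum(f ** 2)
    correlation_quality = np.sum(f * g) / np.sum(f ** 2)
    return fidelity, structural_content, correlation_quality

f = np.random.default_rng(2).random((32, 32))                 # original image (stand-in)
g = f + 0.01 * np.random.default_rng(3).standard_normal((32, 32))  # noisy reconstruction
print(nmse(f, g), linfoot(f, g))
```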

5.
A new secret image transmission scheme suitable for narrow communication channels is proposed in this article. With it, a set of secret images can be simultaneously and efficiently delivered to the receiver as a small, meaningless data stream. To reduce the volume of the secret images, a codebook is first generated and the secret images are encoded into binary indexes based on the vector quantization (VQ) technique. The compressed message is then embedded into the VQ codebook used in the encoding procedure by an adaptive least-significant-bits (LSB) modification technique. For security, the slightly modified codebook is further encrypted into a meaningless data stream by the AES cryptosystem. Simulation results show that the proposed scheme markedly improves both the visual quality of the secret images extracted at the receiver and the hiding capacity of the cover medium. Experimental data also confirm the feasibility of the proposed scheme for limited-bandwidth environments. © 2007 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 17, 1–9, 2007
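A toy sketch of the embedding step follows: the compressed bit-stream is hidden in the least-significant bits of the codebook entries and can be read back exactly. The adaptive bit-allocation and the AES encryption layer of the paper are omitted, and all sizes are illustrative.

```python
# Hypothetical sketch: hide a bit-stream in the LSBs of VQ codebook entries.
import numpy as np

def embed_lsb(codebook, bits, nbits=1):
    """Overwrite the nbits lowest bits of successive bytes with message bits."""
    flat = codebook.flatten()
    assert len(bits) <= len(flat) * nbits, "message too large for this cover"
    stego = flat.copy()
    for i, b in enumerate(bits):
        j, k = divmod(i, nbits)
        stego[j] = (stego[j] & ~np.uint8(1 << k)) | np.uint8(b << k)
    return stego.reshape(codebook.shape)

def extract_lsb(stego, nbits, count):
    """Read the embedded bits back out of the modified codebook."""
    flat = stego.flatten()
    return [int((flat[i // nbits] >> (i % nbits)) & 1) for i in range(count)]

codebook = np.random.default_rng(4).integers(0, 256, size=(256, 16), dtype=np.uint8)
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(codebook, message)
assert extract_lsb(stego, 1, len(message)) == message   # round-trip is exact
```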

6.
Tsang P, Cheung KW, Poon TC. Applied Optics, 2011, 50(34): H42–H49
We propose a method for compressing a digital color Fresnel hologram based on vector quantization (VQ). The complex color hologram is first separated into three complex holograms, each representing one of the primary colors. Subsequently, each hologram is converted into what we call a real Fresnel hologram and compressed with VQ based on a universal codebook. Experimental evaluation reveals that our scheme attains a compression ratio of over 1600 times while preserving acceptable visual quality in the reconstructed images. Moreover, the decoding process requires no computation and is highly resistant to noise contamination of the compressed data.

7.
Abstract

The nearest neighbor (NN) searching problem has wide applications. In vector quantization (VQ), both the codebook generation phase and the encoding phase (using the codebook just generated) often rely on NN search. A poorly designed search algorithm lets the complexity grow quickly as the vector dimensionality k or the codebook size N increases. In this paper, a fast NN searching method is proposed, which in turn accelerates the LBG codebook generation process for VQ design. The method modifies and improves the LAESA approach: unlike LAESA, it uses k/2 “fixed” points (allocated far from the data) together with the origin as the k/2+1 reference points to reduce the search area. The memory overhead is only linearly proportional to N and k, and the time complexity, including the overhead, is of order O(kN). According to our experiments, the proposed algorithm reduces the time burden while the distortion remains identical to that of the full search.
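The core pruning idea can be sketched with the triangle inequality: precomputed distances from each codeword to a few reference points yield a lower bound on the query-codeword distance, so most codewords are rejected without a full distance computation. The placement of the reference points below is a random stand-in, not the paper's k/2 fixed-point construction.

```python
# Sketch of reference-point pruning in the LAESA spirit (exact full-search result).
import numpy as np

def build_tables(codebook, refs):
    """Precompute d(c, r) for every codeword c and reference point r."""
    return np.linalg.norm(codebook[:, None, :] - refs[None, :, :], axis=2)

def nn_search(query, codebook, refs, tables):
    q_to_ref = np.linalg.norm(refs - query, axis=1)
    best, best_i = np.inf, -1
    for i, c in enumerate(codebook):
        # triangle inequality: max_r |d(c,r) - d(q,r)| <= d(q,c)
        if np.max(np.abs(tables[i] - q_to_ref)) >= best:
            continue                      # pruned: cannot beat the current best
        d = np.linalg.norm(query - c)
        if d < best:
            best, best_i = d, i
    return best_i, best

rng = np.random.default_rng(5)
codebook = rng.standard_normal((1024, 16))
refs = rng.standard_normal((9, 16)) * 10  # stand-ins for the k/2+1 reference points
tables = build_tables(codebook, refs)     # overhead is linear in N and k
print(nn_search(rng.standard_normal(16), codebook, refs, tables))
```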

8.
Abstract

In this paper, a new artifact reduction algorithm for compressed color images using MMRCVQ is proposed. The algorithm extends and modifies vector quantization (VQ) to discover the relationships between uncompressed color images and their deblocked compressed versions, classifying the deblocked compressed blocks into several categories using information from their neighboring blocks. The discovered relationships are stored in two codebooks and used to recover the missing information of compressed color images. To increase the availability of codewords and reduce the memory needed to store them, mean-removed vectors are used to generate the codebooks. The experimental results show that the proposed approach can effectively remove the artifacts caused by high compression and significantly improve perceptual quality. Compared to existing methods, it usually needs much less computing time to recover a compressed color image and delivers much better image quality.

9.
One of the major difficulties arising in vector quantization (VQ) is the high time complexity of encoding. Based on the well-known partial distance search (PDS) method and a special ordering of the codewords in the VQ codebook, two simple and efficient methods are introduced for fast full-search vector quantization to reduce encoding time. Exploiting the “move-to-front” method, which tends to find a small distortion as early as possible, is shown to improve the encoding efficiency of the PDS method. Because energy is compacted in the DCT domain, searching a DCT-domain codebook can be accelerated further. The experimental results show that our fast algorithms significantly reduce the search time of VQ encoding. © 2003 Wiley Periodicals, Inc. Int J Imaging Syst Technol 12, 204–210, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.10030
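A minimal PDS sketch follows: the squared distance is accumulated dimension by dimension and a codeword is abandoned as soon as the running sum exceeds the best distance so far. The move-to-front idea is approximated here by starting each search at the previous winner; the article's exact codeword ordering and DCT-domain variant are not reproduced.

```python
# Sketch of partial distance search (PDS) with a previous-winner-first visit order.
import numpy as np

def pds_encode(vector, codebook, start=0):
    order = list(range(start, len(codebook))) + list(range(start))
    best, best_i = np.inf, -1
    for i in order:
        acc = 0.0
        for x, c in zip(vector, codebook[i]):
            acc += (x - c) ** 2
            if acc >= best:               # early rejection: partial sum already too big
                break
        else:                             # completed all dimensions: new best codeword
            best, best_i = acc, i
    return best_i, best

rng = np.random.default_rng(6)
codebook = rng.standard_normal((256, 16))
blocks = rng.standard_normal((10, 16))
prev = 0
for b in blocks:
    prev, _ = pds_encode(b, codebook, start=prev)  # visit the last winner first
```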

10.
Finite state vector quantization (FSVQ) has been proven to be a high-quality, low-bit-rate coding scheme. An FSVQ achieves the efficiency of a small-codebook (state codebook) VQ while maintaining the quality of a large-codebook (master codebook) VQ. However, the large master codebook becomes the primary limitation of FSVQ once implementation is taken into account: storing the master codebook requires a large amount of memory, and maintaining the state codebooks takes much effort when the master codebook grows too large. The problem can be partially solved by the mean/residual technique (MRVQ), in which the block means and the residual vectors are coded separately. MRVQ has its own drawbacks, though: additional bits are required to code the means, and selecting the state codebooks in the residual domain is difficult. A new hybrid coding scheme called finite state residual vector quantization (FSRVQ) is proposed in this article to combine the advantages of FSVQ and MRVQ. The codewords in FSRVQ are designed with the block means removed, which reduces the codebook size. The block means are predicted from neighboring blocks, which reduces the bit rate, and the predicted means are added back to the residual vectors so that the state codebooks can be generated entirely. Experimental results indicate that FSRVQ uniformly outperforms both ordinary FSVQ and MRVQ. © 1994 John Wiley & Sons Inc
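A hypothetical sketch of the mean-prediction idea: each block's mean is predicted from already-decoded neighbors (so no bits are spent on it) and only the mean-removed residual is vector quantized; the state-codebook machinery of full FSRVQ is left out.

```python
# Sketch: neighbor-predicted block means plus mean-removed residual VQ.
import numpy as np

def predict_mean(means, r, c):
    """Average of the upper and left block means, computable by both sides."""
    neighbors = []
    if r > 0: neighbors.append(means[r - 1, c])
    if c > 0: neighbors.append(means[r, c - 1])
    return float(np.mean(neighbors)) if neighbors else 128.0  # assumed default for the first block

def encode_block(block, means, r, c, residual_codebook):
    m = predict_mean(means, r, c)                 # no bits spent on the mean
    residual = block.flatten() - m
    idx = int(((residual_codebook - residual) ** 2).sum(axis=1).argmin())
    decoded = residual_codebook[idx] + m          # decoder-side reconstruction
    means[r, c] = decoded.mean()                  # state shared by encoder and decoder
    return idx

rng = np.random.default_rng(7)
residual_codebook = rng.standard_normal((64, 16))
image_blocks = rng.integers(0, 256, size=(4, 4, 4, 4)).astype(float)
means = np.zeros((4, 4))
indices = [[encode_block(image_blocks[r, c], means, r, c, residual_codebook)
            for c in range(4)] for r in range(4)]
```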

11.
The Imaging Science Journal, 2013, 61(2): 195–203
Abstract

In this paper, we propose two reversible information hiding schemes based on side-match vector quantisation (SMVQ). In 2006, Chang et al. proposed a data-hiding scheme based on SMVQ coding. To increase both the image quality and the embedding capacity, we improve their method by embedding multiple secret bits into a block and selecting the second codeword from the full codebook. In addition, we propose another reversible information hiding scheme whose output is a pure VQ index table. A weighted bipartite graph models this scheme, and a matching algorithm finds the best solution. The experimental results show higher visual quality than Chang et al.'s scheme.
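For context, the side-match step that SMVQ-based schemes build on can be sketched as follows: the borders of the already-decoded upper and left blocks rank the master codebook, and the top-ranked codewords form a small state codebook for the current block. This is generic SMVQ background, not the paper's hiding scheme.

```python
# Sketch of side-match state-codebook selection (generic SMVQ background).
import numpy as np

def state_codebook(master, upper, left, block=4, size=16):
    """Rank master codewords by side-match distortion against the neighbors."""
    cw = master.reshape(-1, block, block)
    d = ((cw[:, 0, :] - upper[-1, :]) ** 2).sum(axis=1)  # top row vs. upper block's bottom row
    d += ((cw[:, :, 0] - left[:, -1]) ** 2).sum(axis=1)  # left column vs. left block's right column
    return np.argsort(d)[:size]                          # indices forming the state codebook

rng = np.random.default_rng(8)
master = rng.standard_normal((256, 16))
upper = rng.standard_normal((4, 4))                      # already-decoded upper block
left = rng.standard_normal((4, 4))                       # already-decoded left block
print(state_codebook(master, upper, left))
```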

12.
In recent years, microarray technology has gained wide attention, but processing, storing, and transmitting the huge volumes of microarray images it produces remains a major challenge. Image compression techniques are therefore used to reduce the number of bits so that the images can be stored and shared easily. Various techniques have been proposed in the past, with applications in different domains. The current research paper presents a novel image compression technique, optimized Linde–Buzo–Gray (OLBG) with Lempel–Ziv–Markov chain algorithm (LZMA) coding, called OLBG-LZMA, for compressing microarray images without loss of quality. The LBG model is generally used to design a locally optimal codebook for image compression. Codebook construction is treated as an optimization problem and resolved with the Grey Wolf Optimization (GWO) algorithm. Once the codebook is constructed by the LBG-GWO algorithm, LZMA is employed to compress the index table and raise the compression efficiency further. Experiments were performed on a high-resolution Tissue Microarray (TMA) image dataset of 50 prostate tissue samples collected from prostate cancer patients. The compression performance of the proposed coding was compared with recently proposed techniques, and the simulation results show that OLBG-LZMA achieves significant compression performance compared with the others.
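A compact sketch of a plain LBG (generalized Lloyd) pass, the codebook design that the paper then tunes with GWO, followed by LZMA coding of the index table; the GWO search itself is not reproduced here.

```python
# Sketch: plain LBG codebook design, then LZMA compression of the index table.
import lzma
import numpy as np

def lbg(vectors, n_codewords=16, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), n_codewords, replace=False)]
    for _ in range(iters):
        # assignment step: nearest codeword for every training vector
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # update step: each codeword moves to the centroid of its cell
        for k in range(n_codewords):
            members = vectors[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook, labels

vectors = np.random.default_rng(9).random((500, 16))   # training block vectors (stand-in)
codebook, index_table = lbg(vectors)
# the index table would then be entropy coded, e.g. with stdlib LZMA:
compressed = lzma.compress(index_table.astype(np.uint8).tobytes())
print(len(compressed), "bytes")
```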

13.
Abstract

A new scheme that aims to cut down the computational cost of the vector quantization (VQ) encoding procedure is proposed in this paper. The scheme exploits the correlation between the codewords in the codebook and designs three test conditions, based on the concept of integral projection, to filter out impossible codewords.

The experimental results show that the new scheme outperforms the other schemes proposed so far in speeding up the VQ encoding procedure. With a codebook of 1024 codewords, the proposed scheme consumes less than 2 per cent of the execution time of the full search algorithm, an average time reduction of approximately 97.7 per cent. In other words, the proposed scheme indeed provides an effective approach to speeding up the VQ encoding procedure.
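The article's exact test conditions are not reproduced here, but a projection-style rejection test can be sketched from the Cauchy-Schwarz bound |sum(x) - sum(c)| <= sqrt(k) * ||x - c||: a codeword whose coefficient sum is too far from the input's sum cannot be the nearest neighbor, and sorting the codewords by that sum lets the search stop early.

```python
# Sketch of a projection-based rejection test (exact full-search result).
import numpy as np

def search_with_sum_test(x, codebook, sums):
    k = len(x)
    best, best_i = np.inf, -1
    sx = x.sum()
    for i in np.argsort(np.abs(sums - sx)):        # visit closest sums first
        if abs(sums[i] - sx) ** 2 >= k * best:     # squared form of the bound
            break                                  # all later codewords fail too
        d = ((x - codebook[i]) ** 2).sum()
        if d < best:
            best, best_i = d, i
    return best_i, best

rng = np.random.default_rng(10)
codebook = rng.standard_normal((1024, 16))
sums = codebook.sum(axis=1)                        # precomputed once per codebook
print(search_with_sum_test(rng.standard_normal(16), codebook, sums))
```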

14.
This article presents a compression method that encodes a 2D-gel image with hybrid lossless and lossy techniques: areas containing protein spots are encoded losslessly, while the background is encoded lossily. A 2D-gel image usually contains a large background region whose colors are close to white. The VQ codebook-generation approach allocates more codewords to describing the background; consequently, the proposed method depicts the background of the 2D-gel image almost exactly and records the protein spots without any loss, which yields a high compression ratio. Images compressed by this method can be reconstructed nearly losslessly. The experimental results show that the compression ratio is significantly improved with acceptable image quality compared with the JPEG-LS method. © 2006 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 16, 1–8, 2006

15.
Abstract

The protection of digital multimedia property has received significant attention in recent years, and robust watermarking of digital images and video for copyright protection is an important and challenging topic. In this paper, a watermarking method based on vector quantization (VQ) is presented. In the proposed method, the watermark is tied to the codebook, which is permuted by an owner-specific random sequence, and extraction of the watermark does not require the original image. The method exploits the relations among VQ indices to achieve invisibility and robustness against various attacks. The experimental results demonstrate the effectiveness of the proposed method.

16.
Lossy image compression techniques allow arbitrarily high compression rates but at the price of poor image quality. We applied maximum likelihood difference scaling to evaluate image quality of nine images, each compressed via vector quantization to ten different levels, within two different color spaces, RGB and CIE 1976 L*a*b*. In L*a*b* space, images could be compressed on average by 32% more than in RGB space, with little additional loss in quality. Further compression led to marked perceptual changes. Our approach permits a rapid, direct measurement of the consequences of image compression for human observers.

17.
An embedded image coder based on wavelet transform coding and multistage vector quantization (VQ) is proposed in this research. We have examined several critical issues to make the proposed algorithm practically applicable. They include the complexity of embedded VQ, design of the successive vector quantizer, significance evaluation of a vector symbol, and integration of wavelet transform coding and vector quantization. It is shown in experiments that the new method achieves a superior rate-distortion performance. © 1997 John Wiley & Sons, Inc. Int J Imaging Syst Technol, 8, 444–449, 1997

18.
The problem of joint source-channel and multiuser decoding for code division multiple access channels is considered. The block source-channel encoder is defined by a vector quantiser (VQ). The jointly optimum solution to this problem has been considered before, but its extremely high complexity makes it impractical for systems with a medium to large number of users and/or a medium to large VQ codebook. Instead, an optimum linear decoder that minimises the mean-squared error at much lower complexity is introduced. The decoder is soft in the sense that it utilises all the soft information available at the receiver. Analytical and simulation results show that in the low channel signal-to-noise ratio region, the proposed decoder performs almost as well as the jointly optimum decoder and significantly better than tandem approaches that use separate multiuser detection and table-lookup decoding.
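The structure of such a decoder can be stated compactly. The following is the standard linear MMSE (Wiener) solution in generic notation, assuming a zero-mean source vector x and received vector y; it is a textbook form, not the paper's own derivation or symbols.

```latex
% Standard linear MMSE estimator in generic notation (not the paper's symbols):
% the decoder is a single matrix A applied to the received vector y.
\hat{\mathbf{x}} = \mathbf{A}\mathbf{y},
\qquad
\mathbf{A} = \arg\min_{\mathbf{B}} \mathbb{E}\!\left[\lVert \mathbf{x} - \mathbf{B}\mathbf{y} \rVert^{2}\right]
           = \mathbf{C}_{xy}\,\mathbf{C}_{yy}^{-1},
\qquad
\mathbf{C}_{xy} = \mathbb{E}\!\left[\mathbf{x}\mathbf{y}^{\mathsf{T}}\right],\;
\mathbf{C}_{yy} = \mathbb{E}\!\left[\mathbf{y}\mathbf{y}^{\mathsf{T}}\right].
```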

19.
To overcome the degradation of decoded image quality caused by fast fractal image coding, a fast fractal image coding algorithm that hybridizes neural network coding with variance coding is proposed. Exploiting the correspondence between the complexity of an image subblock and its variance, the algorithm selects an appropriate mapping-coding method according to the variance of each block: blocks with relatively small variance are coded by variance coding to increase encoding speed, while blocks with relatively large variance are coded by the neural network to improve coding quality. The algorithm largely corrects the low decoding quality that the structural restrictions of self-affine mappings impose on traditional fractal coding, greatly increasing encoding speed while preserving coding quality. Experimental results show that, compared with the basic fractal coding algorithm, the proposed algorithm achieves a 24-fold speedup, and the decoded image quality is 1.1 dB higher than that of the variance-based fast fractal coding algorithm. The algorithm is also easy to implement in hardware, bringing it close to practical use.
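A sketch of the variance-based routing described above, with a placeholder standing in for the neural-network coder: low-variance blocks take the fast mean/variance path, high-variance blocks take the quality path. The threshold and both coders are illustrative assumptions.

```python
# Sketch: route each block to a cheap or a learned coder by its variance.
import numpy as np

def encode_block(block, threshold=25.0):
    if block.var() < threshold:
        # smooth block: a mean/variance description is enough (fast path)
        return ("var", float(block.mean()), float(block.var()))
    # complex block: hand off to the learned coder (quality path)
    return ("nn", neural_net_code(block))

def neural_net_code(block):
    """Placeholder for the paper's neural-network mapping coder."""
    return float(block.mean())   # stand-in feature, not the real model

rng = np.random.default_rng(11)
blocks = [rng.integers(0, 256, (8, 8)).astype(float) for _ in range(4)]
print([encode_block(b)[0] for b in blocks])
```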

20.
With the massive growth of image data and the rise of cloud computing, which provides cheap storage space and convenient access, more and more users store their data on cloud servers. However, quickly retrieving the desired data while preserving privacy remains a challenge in encrypted image retrieval. Toward this goal, this paper proposes a ciphertext image retrieval method based on SimHash in cloud computing. First, local features are extracted from the images and clustered by K-means. On this basis, a visual-word codebook is introduced to represent the feature information of the images, and the codebook is hashed to the corresponding fingerprint. Finally, the image feature vector is generated by a SimHash searchable-encryption feature algorithm for similarity retrieval. Extensive experiments on two public datasets validate the effectiveness of our method. The proposed method also outperforms a popular searchable-encryption scheme, and the results are competitive with the state of the art.
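A small sketch of the SimHash fingerprinting step, assuming the visual-word histogram is already available: each word is hashed to 64 bits and weighted by its frequency, and the sign of each accumulated bit position forms the fingerprint, so similar histograms yield fingerprints with small Hamming distance. The searchable-encryption layer is not shown.

```python
# Sketch of SimHash fingerprinting over a visual-word histogram.
import hashlib
import numpy as np

def simhash(word_weights, bits=64):
    acc = np.zeros(bits)
    for word, weight in word_weights.items():
        h = int.from_bytes(hashlib.md5(str(word).encode()).digest()[:8], "big")
        for i in range(bits):
            acc[i] += weight if (h >> i) & 1 else -weight   # weighted vote per bit
    return sum(1 << i for i in range(bits) if acc[i] > 0)   # sign of each position

def hamming(a, b):
    return bin(a ^ b).count("1")

hist_a = {3: 10, 7: 4, 12: 1}   # visual-word histogram of image A (stand-in)
hist_b = {3: 9, 7: 5, 12: 2}    # a similar image: small Hamming distance expected
print(hamming(simhash(hist_a), simhash(hist_b)))
```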
