Similar Documents
20 similar documents found (search time: 453 ms).
1.
On entropy-constrained vector quantization using Gaussian mixture models   (Cited by: 2; self-citations: 0; by others: 2)
A flexible and low-complexity entropy-constrained vector quantizer (ECVQ) scheme based on Gaussian mixture models (GMMs), lattice quantization, and arithmetic coding is presented. The source is assumed to have a probability density function given by a GMM. An input vector is first classified to one of the mixture components, and the Karhunen-Loève transform of the selected mixture component is applied to the vector, followed by quantization using a lattice-structured codebook. Finally, the scalar elements of the quantized vector are entropy coded sequentially using a specially designed arithmetic coder. The computational complexity of the proposed scheme is low and independent of the coding rate in both the encoder and the decoder. The proposed scheme therefore serves as a lower-complexity alternative to the GMM-based ECVQ proposed by Gardner, Subramaniam, and Rao [1]. The performance of the proposed scheme is analyzed under a high-rate assumption and quantified for a given GMM. The practical performance of the scheme was evaluated through simulations on both synthetic and speech line spectral frequency (LSF) vectors. For LSF quantization, the proposed scheme performs comparably to [1] at rates relevant for speech coding (20-28 bits per vector) with lower computational complexity.
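For readers who want the flow of the encoder made concrete, below is a minimal numpy sketch of the classify/KLT/lattice front end described in the abstract. It substitutes the simplest possible lattice (coordinate-wise rounding on a scaled Z-lattice) for the paper's lattice codebook and omits the arithmetic-coding stage; the function names (gmm_encode, gmm_decode) and the step parameter are illustrative, not the paper's.

```python
# Illustrative sketch only; not the paper's exact algorithm.
import numpy as np

def gmm_encode(x, weights, means, covs, step=0.1):
    """Classify x to a mixture component, KLT-rotate, and lattice-quantize."""
    def log_score(w, m, c):
        d = x - m
        return np.log(w) - 0.5 * (d @ np.linalg.solve(c, d)
                                  + np.log(np.linalg.det(c)))
    k = int(np.argmax([log_score(w, m, c)
                       for w, m, c in zip(weights, means, covs)]))
    U = np.linalg.eigh(covs[k])[1]             # KLT basis of the chosen component
    y = U.T @ (x - means[k])                   # decorrelated coefficients
    return k, np.round(y / step).astype(int)   # Z-lattice index vector

def gmm_decode(k, idx, means, covs, step=0.1):
    U = np.linalg.eigh(covs[k])[1]
    return means[k] + U @ (idx * step)
```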

2.
Mean-shape vector quantizer for ECG signal compression   (Cited by: 1; self-citations: 0; by others: 1)
A direct-waveform mean-shape vector quantization (MSVQ) is proposed here as an alternative for electrocardiographic (ECG) signal compression. In this method, the single-lead ECG is divided into short segments; the mean value of each segment is quantized as a scalar, and the mean-removed waveshapes are coded through a vector quantizer. An entropy encoder is applied to both the mean and vector codes to further increase compression without degrading the quality of the reconstructed signals. In this paper, the fundamentals of MSVQ are discussed, along with various parameter specifications such as the duration of the signal segments, the wordlength of the mean-value quantization, and the size of the vector codebook. The method is assessed through percent-residual-difference measures on reconstructed signals, and its computational complexity is analyzed with a view to real-time implementation. MSVQ has been found to be an efficient compression method, achieving high compression ratios (CRs) while maintaining a low level of waveform distortion and, consequently, preserving the main clinically interesting features of the ECG signals. CRs in excess of 39 have been achieved, yielding low data rates of about 140 bps. This compression factor makes the technique especially attractive in the area of ambulatory monitoring.
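As a rough illustration of the mean/shape split, here is a minimal sketch under simplifying assumptions: fixed-length segments, a uniform scalar quantizer for the means, and a pre-trained shape codebook. The entropy-coding stage is omitted, and all names (msvq_encode, seg_len, mean_step) are hypothetical.

```python
# Illustrative sketch only; entropy coding and codebook training not shown.
import numpy as np

def msvq_encode(signal, codebook, seg_len=16, mean_step=0.01):
    segs = signal[: len(signal) // seg_len * seg_len].reshape(-1, seg_len)
    means = segs.mean(axis=1)
    mean_codes = np.round(means / mean_step).astype(int)   # scalar-quantized means
    shapes = segs - means[:, None]                         # mean-removed waveshapes
    dists = ((shapes[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    labels = dists.argmin(axis=1)                          # nearest shape codeword
    return mean_codes, labels

def msvq_decode(mean_codes, labels, codebook, mean_step=0.01):
    return (codebook[labels] + mean_codes[:, None] * mean_step).ravel()
```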

3.
We propose a new scheme for designing a vector quantizer for image compression. First, a set of code vectors is generated using the self-organizing feature map algorithm. Then, the set of blocks associated with each code vector is modeled by a cubic surface for better perceptual fidelity of the reconstructed images. Mean-removed vectors from a set of training images are used for the construction of a generic codebook. Further, Huffman coding of the indices generated by the encoder and of the difference-coded mean values of the blocks is used to achieve a better compression ratio. We propose two indices for quantitative assessment of the psychovisual quality (blocking effect) of the reconstructed image. Our experiments on several training and test images demonstrate that the proposed scheme produces reconstructed images of good quality while achieving compression at low bit rates. Index terms: cubic surface fitting, generic codebook, image compression, self-organizing feature map, vector quantization.
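The codebook-generation step can be pictured with a minimal 1-D self-organizing feature map sketch like the one below. The cubic-surface modeling, mean removal, and Huffman stages are not shown, and the schedule constants are illustrative guesses, not the paper's settings.

```python
# Illustrative sketch of SOFM codebook training; constants are assumptions.
import numpy as np

def sofm_codebook(train, n_codes=64, epochs=20, lr0=0.5, seed=0):
    rng = np.random.default_rng(seed)
    codes = train[rng.choice(len(train), n_codes, replace=False)].astype(float)
    radius0 = n_codes / 2.0
    for e in range(epochs):
        lr = lr0 * (1.0 - e / epochs)                        # decaying learning rate
        radius = max(1.0, radius0 * (1.0 - e / epochs))      # shrinking neighborhood
        for x in train:
            w = int(((codes - x) ** 2).sum(axis=1).argmin()) # winning code vector
            d = np.abs(np.arange(n_codes) - w)               # distance on the 1-D map
            h = np.exp(-(d ** 2) / (2 * radius ** 2))        # neighborhood function
            codes += lr * h[:, None] * (x - codes)
    return codes
```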

4.
First, a simple and practical rectangular transform is given, and the rapidly developing vector quantization technique is introduced. We then combine the rectangular transform with vector quantization for image data compression. The combination reduces the dimensionality of the vectors to be coded, so the size of the codebook can reasonably be reduced. The method lowers the computational complexity and speeds up the vector coding process. Experiments on an image processing system show that the method is very effective for image data compression.

5.
Constrained-storage vector quantization with a universal codebook   (Cited by: 1; self-citations: 0; by others: 1)
Many image compression techniques require the quantization of multiple vector sources with significantly different distributions. With vector quantization (VQ), these sources are optimally quantized using separate codebooks, which may collectively require an enormous memory space. Since storage is limited in most applications, a convenient way to gracefully trade between performance and storage is needed. Earlier work addressed this problem by clustering the multiple sources into a small number of source groups, where each group shares a codebook. We propose a new solution based on a size-limited universal codebook that can be viewed as the union of overlapping source codebooks. This framework allows each source codebook to consist of any desired subset of the universal code vectors and provides greater design flexibility which improves the storage-constrained performance. A key feature of this approach is that no two sources need be encoded at the same rate. An additional advantage of the proposed method is its close relation to universal, adaptive, finite-state and classified quantization. Necessary conditions for optimality of the universal codebook and the extracted source codebooks are derived. An iterative design algorithm is introduced to obtain a solution satisfying these conditions. Possible applications of the proposed technique are enumerated, and its effectiveness is illustrated for coding of images using finite-state vector quantization, multistage vector quantization, and tree-structured vector quantization.
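One simple (and deliberately naive) way to picture "source codebooks as overlapping subsets of a universal codebook" is the frequency-based extraction sketched below; the paper's iterative, optimality-condition-driven design is not reproduced here, and extract_source_codebook is a hypothetical name.

```python
# Illustrative sketch: keep the m universal codewords a source uses most often.
import numpy as np

def extract_source_codebook(universal, source_vecs, m):
    d = ((source_vecs[:, None, :] - universal[None, :, :]) ** 2).sum(axis=2)
    counts = np.bincount(d.argmin(axis=1), minlength=len(universal))
    keep = np.sort(np.argsort(counts)[::-1][:m])   # indices into the universal set
    return keep, universal[keep]
```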

6.
The combination of singular value decomposition (SVD) and vector quantization (VQ) is proposed as a compression technique to achieve low bit rate and high quality image coding. Given a codebook consisting of singular vectors, two algorithms, which find the best-fit candidates without involving the complicated SVD computation, are described. Simulation results show that the proposed methods are better than the discrete cosine transform (DCT) in terms of energy compaction, data rate, image quality, and decoding complexity.

7.
田斌, 易克初, 孙民贵. 《电子学报》 (Acta Electronica Sinica), 2000, 28(10): 12-16
This paper proposes a new method of vector compression coding: the line-projection method. An input vector is approximated by its projection onto a line in space, and is encoded by the indices of the two reference points that determine the line together with a scale factor giving the position of the projection point relative to the two reference points. Since the codewords of a vector quantization codebook of size N determine N(N-1)/2 lines, the method can achieve high coding precision with a small codebook. Theoretical analysis and experimental results show that the coding precision of the line-projection method with a codebook of size N is comparable to that of vector quantization with a codebook of size N², and is clearly better than that of a two-stage vector quantizer built from two codebooks of size N, while the computational complexity of both its codebook generation and its encoding is far lower than the latter's. It promises to be a powerful tool for high-precision compression coding of vector signals.
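To make the encoding concrete, here is a minimal brute-force sketch of the line-projection idea: try every pair of codebook points (assumed distinct), project the input onto the line through them, and keep the pair and scale factor with the lowest error. The exhaustive O(N²) search is for clarity only; the paper's codebook design and any fast search are not shown, and the names lp_encode/lp_decode are illustrative.

```python
# Illustrative sketch; exhaustive search over all N(N-1)/2 candidate lines.
import numpy as np
from itertools import combinations

def lp_encode(x, codebook):
    best = (np.inf, 0, 1, 0.0)
    for i, j in combinations(range(len(codebook)), 2):
        a, b = codebook[i], codebook[j]
        d = b - a
        t = float(np.dot(x - a, d) / np.dot(d, d))   # projection position on line
        err = float(np.sum((a + t * d - x) ** 2))
        if err < best[0]:
            best = (err, i, j, t)
    return best[1:]                                  # (i, j, t): two indices + scale

def lp_decode(i, j, t, codebook):
    a, b = codebook[i], codebook[j]
    return a + t * (b - a)
```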

8.
This paper evaluates the performance of an image compression system based on wavelet subband decomposition and vector quantization. The images are decomposed using wavelet filters into a set of subbands with different resolutions corresponding to different frequency bands. The resulting subbands are vector quantized using the Linde-Buzo-Gray (1980) algorithm and various fuzzy algorithms for learning vector quantization (FALVQ). These algorithms perform vector quantization by updating all prototypes of a competitive neural network through an unsupervised learning process. The quality of the multiresolution codebooks designed by these algorithms is measured on reconstructed images from both the training set used for codebook design and a separate testing set.
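The Linde-Buzo-Gray design loop the abstract relies on is easy to state in a few lines; below is a minimal sketch (random initial codebook, squared-error distortion). The wavelet decomposition and the fuzzy LVQ variants are not shown.

```python
# Illustrative sketch of LBG / generalized Lloyd codebook design.
import numpy as np

def lbg(train, n_codes=32, iters=50, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    codes = train[rng.choice(len(train), n_codes, replace=False)].astype(float)
    prev = np.inf
    for _ in range(iters):
        d = ((train[:, None, :] - codes[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)                       # nearest-neighbor partition
        dist = d[np.arange(len(train)), labels].mean()  # mean squared distortion
        if prev - dist < tol:
            break
        prev = dist
        for k in range(n_codes):
            members = train[labels == k]
            if len(members):
                codes[k] = members.mean(axis=0)         # centroid update
    return codes
```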

9.
As linearly constrained vector quantization (LCVQ) is efficient for block-based compression of images that require low complexity decompression, it is a "de facto" standard for three-dimensional (3-D) graphics cards that use texture compression. Motivated by the lack of an efficient algorithm for designing LCVQ codebooks, the generalized Lloyd (1982) algorithm (GLA) for vector quantizer (VQ) codebook improvement and codebook design is extended to a new linearly constrained generalized Lloyd algorithm (LCGLA). This LCGLA improves VQ codebooks that are formed as linear combinations of a reduced set of base codewords. As such, it may find application wherever linearly constrained nearest neighbor (NN) techniques are used, that is, in a wide variety of signal compression and pattern recognition applications that require or assume distributions that are locally linearly constrained. In addition, several examples of linearly constrained codebooks that possess desirable properties such as good sphere packing, low-complexity implementation, fine resolution, and guaranteed convergence are presented. Fast NN search algorithms are discussed. A suggested initialization procedure halves iterations to convergence when, to reduce encoding complexity, the encoder considers the improvement of only a single codebook for each block. Experimental results for image compression show that LCGLA iterations significantly improve the PSNR of standard high-quality lossy 6:1 LCVQ compressed images.
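A minimal sketch of what one linearly constrained Lloyd-style iteration could look like, assuming codewords are fixed linear combinations W @ B of base codewords B: partition as in the GLA, then refit B by occupancy-weighted least squares to the cell centroids. The weighting scheme and the name lcgla_step are assumptions of this sketch, not details taken from the paper.

```python
# Illustrative sketch; occupancy-weighted refit is an assumption of this sketch.
import numpy as np

def lcgla_step(train, W, B):
    codes = W @ B                                   # linearly constrained codebook
    d = ((train[:, None, :] - codes[None, :, :]) ** 2).sum(axis=2)
    labels = d.argmin(axis=1)                       # GLA-style partition
    counts = np.bincount(labels, minlength=len(W)).astype(float)
    cent = np.zeros_like(codes)
    for k in range(len(W)):
        if counts[k]:
            cent[k] = train[labels == k].mean(axis=0)
    sw = np.sqrt(counts)[:, None]                   # weight cells by occupancy
    B_new, *_ = np.linalg.lstsq(W * sw, cent * sw, rcond=None)
    return B_new, labels
```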

10.
A collection of common names and terms in electronic packaging. Below, arranged in alphabetical order, the names and terms associated with the principal package forms currently used for LSIs (including ICs) are collected and explained. These names and terms reference and draw on LSI-packaging materials from 12 semiconductor manufacturers in Japan and 7 semiconductor manufacturers in other countries, as well as from the Electronic Industries Association of Japan...

11.
Using vector quantization for image processing   (Cited by: 1; self-citations: 0; by others: 1)
A review is presented of vector quantization, the mapping of pixel intensity vectors into binary vectors indexing a limited number of possible reproductions, which is a popular image compression algorithm. Compression has traditionally been done with little regard for image processing operations that may precede or follow the compression step. Recent work has used vector quantization both to simplify image processing tasks, such as enhancement, classification, halftoning, and edge detection, and to reduce the computational complexity by performing these tasks simultaneously with the compression. The fundamental ideas of vector quantization are explained, and vector quantization algorithms that perform image processing are surveyed.

12.
An adaptive technique for image sequence coding that is based on vector quantization is described. Each frame in the sequence is first decomposed into a set of vectors. A codebook is generated using the vectors of the first frame as the training sequence, and a label map is created by quantizing the vectors. The vectors of the second frame are then used to generate a new codebook, starting from the first codebook as seeds. The updated codebook is then transmitted. At the same time, the label map is replenished by coding the positions and the new values of the labels that have changed from one frame to the next. The process is repeated for subsequent frames. Experimental results for a test sequence demonstrate that the technique can track the changes and maintain a nearly constant distortion over the entire sequence.
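A minimal sketch of the adaptation loop, assuming Lloyd-style retraining seeded from the previous frame's codebook; the names (retrain, replenish) and the iteration count are illustrative.

```python
# Illustrative sketch of seeded codebook retraining + label-map replenishment.
import numpy as np

def retrain(vectors, seed_codes, iters=5):
    codes = seed_codes.copy()
    for _ in range(iters):
        d = ((vectors[:, None, :] - codes[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for k in range(len(codes)):
            m = vectors[labels == k]
            if len(m):
                codes[k] = m.mean(axis=0)       # centroid update, seeded start
    return codes, labels

def replenish(prev_labels, labels):
    changed = np.nonzero(labels != prev_labels)[0]
    return changed, labels[changed]             # positions + new label values
```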

13.
郑勇, 何宁, 朱维乐. 《信号处理》 (Journal of Signal Processing), 2001, 17(6): 498-505
Based on the ideas of zerotree coding, vector classification, and trellis-coded quantization, this paper proposes a new method that applies trellis-coded vector quantization to wavelet images after spatial vector combination and classification. The method fully exploits the frequency correlation and spatial constraints of the coefficients in the high-frequency subbands; vectors are classified by a joint decision based on the combined-vector energy and the zerotree vectors, so the whole image needs only a single quantization codebook and the classification information occupies few bits. Weighted trellis-coded vector quantization is applied to the important vector class: convolutional coding expands the signal space to increase the Euclidean distance between quantized signals, and the Viterbi algorithm searches for the optimal quantization sequence, yielding a gain of about 0.6 dB over plain vector quantization. The method has moderate encoding complexity and simple decoding, and achieves very good compression results.

14.
李霞, 罗萍, 罗雪晖, 张基宏. 《信号处理》 (Journal of Signal Processing), 2002, 18(5): 434-437
This paper proposes a fuzzy reinforcement-learning codebook design algorithm for image compression coding. The algorithm introduces reinforcement learning into fuzzy competitive-learning vector quantization, and uses the membership degree between the supervision signal of the input training pattern and the class pattern to control the reinforcement signal. Experimental results show that the algorithm is only weakly dependent on the initial codebook and, compared with fuzzy competitive-learning vector quantization and the differential competitive learning algorithm, converges faster and performs better.

15.
Wavelet-based image coding using nonlinear interpolative vector quantization   (Cited by: 1; self-citations: 0; by others: 1)
We propose a reduced-complexity wavelet-based image coding technique. Here, 64-D vectors (for three stages of decomposition) are formed by combining appropriate coefficients from the wavelet subimages; 16-D feature vectors are then extracted from the 64-D vectors, and vector quantization (VQ) is performed on the feature vectors. At the decoder, the 64-D vectors are reconstructed using a nonlinear interpolative technique. The proposed technique has reduced complexity and has the potential to provide superior coding performance when the codebook is generated from training vectors drawn from similar images.
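A minimal sketch of the nonlinear interpolative decoding idea, assuming for illustration that the 16-D features come from a fixed linear projection P of the 64-D vectors: encode in the low-dimensional feature space, but decode with a paired high-dimensional codebook whose entries are the centroids of the 64-D training vectors falling in each feature-space cell. The paper's actual feature extraction and codebook design are not reproduced.

```python
# Illustrative sketch; the linear projection P is an assumption of this sketch.
import numpy as np

def nivq_train(train64, P, feat_codebook):
    feats = train64 @ P.T                      # 16-D features of training vectors
    d = ((feats[:, None, :] - feat_codebook[None, :, :]) ** 2).sum(axis=2)
    labels = d.argmin(axis=1)
    decode_book = np.zeros((len(feat_codebook), train64.shape[1]))
    for k in range(len(feat_codebook)):
        m = train64[labels == k]
        if len(m):
            decode_book[k] = m.mean(axis=0)    # 64-D reconstruction per label
    return decode_book

def nivq_encode(x64, P, feat_codebook):
    f = P @ x64                                # encode in feature space only
    return int(((feat_codebook - f) ** 2).sum(axis=1).argmin())
```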

16.
An encoding technique called multilevel block truncation coding that preserves the spatial details in digital images while achieving a reasonable compression ratio is described. An adaptive quantizer-level allocation scheme is introduced that minimizes the maximum quantization error in each block and substantially reduces the computational complexity of allocating the optimal quantization levels. A 3.2:1 compression can be achieved by multilevel block truncation coding itself. The truncated, or requantized, data are further compressed in a second pass using combined predictive coding, entropy coding, and vector quantization. The second pass can be lossless or lossy. The total compression ratios are about 4.1:1 for lossless second-pass compression and 6.2:1 for lossy second-pass compression. The subjective results of the coding algorithm are quite satisfactory, with no perceived visual degradation.
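For orientation, below is a minimal sketch of block truncation coding reduced to the classic two-level case, with levels chosen to preserve the block mean and variance; the paper's multilevel, minimax level allocation and the second-pass predictive/entropy/vector coding are not shown.

```python
# Illustrative sketch of classic two-level BTC on a single block.
import numpy as np

def btc_encode_block(block):
    m, s = block.mean(), block.std()
    bitmap = block >= m                          # one bit per pixel
    q, n = int(bitmap.sum()), block.size
    if q in (0, n):                              # flat block: one level suffices
        return bitmap, m, m
    lo = m - s * np.sqrt(q / (n - q))            # levels preserving mean/variance
    hi = m + s * np.sqrt((n - q) / q)
    return bitmap, lo, hi

def btc_decode_block(bitmap, lo, hi):
    return np.where(bitmap, hi, lo)
```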

17.
A two-stage adaptive vector quantization scheme for radiographic image sequence coding is introduced. Each frame in the sequence is first decomposed into a set of vectors corresponding to nonoverlapping, spatially contiguous blocks of pixels. A codebook is generated using a training set of vectors drawn from the sequence. Each vector is then encoded by the label of the closest codeword in the codebook, and the label values are stored in a frame label map memory at both ends of the communication channel. The changes occurring in radiographic image sequences fall into two types: those due to body motion and those due to the injected contrast dye material. In the second scheme proposed, encoding is performed in two stages. In the first stage, the labels of corresponding vectors from consecutive frames are compared and the frame label map memory is replenished (updated). This stage is sufficient to track the changes caused by patient motion but not those due to the injected contrast dye material. For the latter changes, the residual error vectors remaining after the first-stage coding are calculated and further encoded with a second codebook, which is updated on a frame-to-frame basis.

18.
王军, 张连海, 屈丹. 《通信技术》 (Communications Technology), 2009, 42(10): 204-206
Immittance spectral frequencies (ISFs) are widely used in wideband speech coding to describe the vocal tract. A transformed classified difference-vector split vector quantization method, based on transformed classified vector quantization and difference split vector quantization, is used to quantize the ISFs. The ISF vector is first classified according to a given codebook, and the difference vectors within each class are then quantized by split vector quantization. Experimental results show that the algorithm meets the transparent-quantization requirement at 37 coding bits per frame, with markedly less codebook storage than the transformed classified split vector quantization method of Stephen So et al.
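The split-VQ building block of the scheme is easy to sketch: partition the vector into subvectors and quantize each against its own codebook. The classification and difference (predictive) stages are omitted here, and all names are illustrative.

```python
# Illustrative sketch of plain split vector quantization.
import numpy as np

def split_vq_encode(x, codebooks, splits):
    labels = []
    for (a, b), cb in zip(splits, codebooks):     # one codebook per subvector
        sub = x[a:b]
        labels.append(int(((cb - sub) ** 2).sum(axis=1).argmin()))
    return labels

def split_vq_decode(labels, codebooks):
    return np.concatenate([cb[l] for cb, l in zip(codebooks, labels)])
```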

19.
A new interband vector quantization of a human-vision-based image representation is presented. The feature-specific vector quantizer (FVQ) is suited for data compression beyond second-order decorrelation. The scheme is derived from statistical investigations of natural images and from the processing principles of biological vision systems. The initial stage of the coding algorithm is a hierarchical, orientation-selective, analytic bandpass decomposition, realized by even- and odd-symmetric filter pairs that are modeled after the simple cells of the visual cortex. The outputs of each even- and odd-symmetric filter pair are interpreted as the real and imaginary parts of an analytic bandpass signal, which is transformed into a local amplitude and a local phase component according to the operation of cortical complex cells. Feature-specific multidimensional vector quantization is realized by combining the amplitude/phase samples of all orientation filters of one resolution layer. The resulting vectors are suited for a classification of the local image features with respect to their intrinsic dimensionality, and enable the exploitation of higher-order statistical dependencies between the subbands. This final step is closely related to the operation of cortical hypercomplex or end-stopped cells. The codebook design is based on statistical as well as psychophysical and neurophysiological considerations, and avoids the common shortcomings of perceptually implausible mathematical error criteria. The resulting perceptual quality of compressed images is superior to that obtained with standard vector quantizers of comparable complexity.
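A minimal 1-D sketch of the even/odd filter-pair front end, assuming Gabor-like kernels: the two filter outputs are treated as the real and imaginary parts of an analytic signal and converted to local amplitude and phase. Kernel size, bandwidth, and center frequency below are illustrative, not the paper's.

```python
# Illustrative sketch; Gabor-like kernels and constants are assumptions.
import numpy as np

def even_odd_pair(n=21, sigma=3.0, freq=0.25):
    t = np.arange(n) - n // 2
    g = np.exp(-t ** 2 / (2 * sigma ** 2))
    return g * np.cos(2 * np.pi * freq * t), g * np.sin(2 * np.pi * freq * t)

def amplitude_phase(signal):
    even, odd = even_odd_pair()
    re = np.convolve(signal, even, mode="same")   # even-symmetric response
    im = np.convolve(signal, odd, mode="same")    # odd-symmetric response
    return np.hypot(re, im), np.arctan2(im, re)   # local amplitude and phase
```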

20.
To reduce the codebook-training time caused by the approximate representation in image feature-vector quantization and by high vector dimensionality, a projected enhanced residual quantization method is proposed. Building on earlier work on enhanced residual quantization, principal component analysis is combined with enhanced residual quantization so that both codebook training and feature quantization are carried out in a low-dimensional vector space for efficiency. During codebook training in the low-dimensional space, a joint optimization method is proposed that accounts for the overall error introduced by both the projection and the quantization, improving codebook accuracy. For this quantization method, a fast approximate Euclidean distance computation between feature vectors is designed for exhaustive approximate nearest-neighbor retrieval. Results show that, at the same retrieval accuracy, projected enhanced residual quantization needs only about one third of the training time of enhanced residual quantization; compared with other methods of its kind, the proposed method achieves better overall performance in codebook-training time, retrieval speed, and retrieval accuracy. This work offers a reference for effectively combining principal component analysis with other quantization models.
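A minimal sketch of the PCA-plus-residual-quantization combination described above: project to a low-dimensional space, then quantize with a cascade of stage codebooks, each coding the residual left by the previous stage. The joint optimization and the fast distance computation are not shown, and all names are illustrative.

```python
# Illustrative sketch of PCA projection followed by multi-stage residual VQ.
import numpy as np

def pca_fit(train, dim):
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:dim]                          # low-dimensional basis

def rq_encode(x, mean, basis, stage_codebooks):
    r = basis @ (x - mean)                         # project to low-dim space
    labels = []
    for cb in stage_codebooks:
        l = int(((cb - r) ** 2).sum(axis=1).argmin())
        labels.append(l)
        r = r - cb[l]                              # pass residual to next stage
    return labels

def rq_decode(labels, mean, basis, stage_codebooks):
    y = sum(cb[l] for cb, l in zip(stage_codebooks, labels))
    return mean + basis.T @ y
```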
