Similar Articles
1.
In general, growth algorithms for optimal tree-structured vector quantizers do not exist. In this paper we show that if the source satisfies a certain condition, namely that of diminishing marginal returns, optimal growth algorithms do exist. We present such an algorithm and compare its performance with that of other tree-growth algorithms. Even for sources that do not meet the conditions required for the growth algorithm to be optimal, such as speech with unknown statistics, simulations show that the algorithm outperforms other known growth algorithms. For sources that do not satisfy the required conditions, the algorithm presented here can also be used to grow the initial tree for the pruning process. The performance of such pruned trees is superior to that of trees pruned from full trees of the same rate.

2.
This paper extends Bennett's (1948) integral from scalar to vector quantizers, giving a simple formula that expresses the rth-power distortion of a many-point vector quantizer in terms of the number of points, point density function, inertial profile, and the distribution of the source. The inertial profile specifies the normalized moment of inertia of quantization cells as a function of location. The extension is formulated in terms of a sequence of quantizers whose point density and inertial profile approach known functions as the number of points increases. Precise conditions are given for the convergence of distortion (suitably normalized) to Bennett's integral. Previous extensions did not include the inertial profile and, consequently, provided only bounds or applied only to quantizers with congruent cells, such as lattice and optimal quantizers. The new version of Bennett's integral provides a framework for the analysis of suboptimal structured vector quantizers. It is shown how the loss in performance of such quantizers, relative to optimal unstructured ones, can be decomposed into point density and cell shape losses. As examples, these losses are computed for product quantizers and used to gain further understanding of the performance of scalar quantizers applied to stationary, memoryless sources and of transform codes applied to Gaussian sources with memory. It is shown that the shortcoming of such quantizers is that they must compromise between point density and cell shapes.

3.
On the structure of vector quantizers
Vector quantization is intrinsically superior to predictive coding, transform coding, and other suboptimal and ad hoc procedures, since it achieves optimal rate-distortion performance subject only to a constraint on memory or block length of the observable signal segment being encoded. The key limitation of existing techniques is the very large randomly generated codebooks which must be stored, and the computational complexity of the associated encoding procedures. The quantization operation is decomposed into its rudimentary structural components. This leads to a simple and elegant approach to deriving analytical properties of optimal quantizers. Some useful properties of quantizers and algorithmic approaches are given, which are relevant to the complexity of both storage and processing in the encoding operation. Highly disordered quantizers, which have been designed using a clustering algorithm, are considered. Finally, lattice quantizers are examined, which circumvent the need for a codebook by using a highly structured code based on lattices. The code vectors are algorithmically generated in a simple manner rather than stored in a codebook, and fast algorithms perform the encoding with negligible complexity.
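The codebook-free lattice encoding mentioned above is extremely simple for the integer lattice Zⁿ: the nearest lattice point is found by componentwise rounding, so no code vectors are stored at all. A minimal sketch (the function name is mine):

```python
def lattice_quantize(vec, step=1.0):
    # Nearest point of the scaled integer lattice step * Z^n.
    # No codebook is stored: each code vector is computed on the fly
    # by componentwise rounding. (Python's round() sends .5 ties to
    # the even integer; a real coder would fix its own tie rule.)
    return [step * round(x / step) for x in vec]
```

Fast encoders for denser lattices (e.g. Dₙ, E₈) follow the same pattern with a little extra logic, which is why lattice quantizers avoid both the storage and the search cost of random codebooks.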

4.
5.
New lattice vector quantizer design procedures for nonuniform sources that yield excellent performance while retaining the structure required for fast quantization are described. Analytical methods for truncating and scaling lattices to be used in vector quantization are given, and an analytical technique for piecewise-linear multidimensional companding is presented. The uniform and piecewise-uniform lattice vector quantizers are then used to quantize the discrete cosine transform coefficients of images, and their objective and subjective performance and complexity are contrasted with other lattice vector quantizers and with LBG training-mode designs.

6.
The asymptotic performance of variance-mismatched vector quantizers is presented. It is demonstrated by both asymptotic analysis and computer simulations that well-designed vector quantizers are inherently more invulnerable to mismatch than are conventional scalar quantizers. A generalized exponential density function is considered as a statistical model of sources. As an example, the asymptotic performance is derived and applied to the memoryless Laplacian source with the squared-error distortion measure.

7.
Side match and overlap match vector quantizers for images
A class of vector quantizers with memory, known as finite state vector quantizers (FSVQs), is investigated in the image coding framework. Two FSVQ designs, namely side match vector quantizers (SMVQs) and overlap match vector quantizers (OMVQs), are introduced. These designs take advantage of the 2-D spatial contiguity of pixel vectors as well as the high spatial correlation of pixels in typical gray-level images. SMVQ and OMVQ try to minimize the granular noise that causes visible pixel block boundaries in ordinary VQ. For 512 by 512 gray-level images, SMVQ and OMVQ can achieve communication-quality reproduction at an average of 1/2 b/pixel per image frame, as well as acceptable-quality reproduction. Because block boundaries are less visible, the perceived improvement in quality over ordinary VQ is even greater. Owing to the structure of SMVQ and OMVQ, simple variable-length noiseless codes can achieve as much as 60% bit rate reduction over fixed-length noiseless codes.

8.
Gersho's (1979) bounds on the asymptotic performance of vector quantizers are valid for vector distortions which are powers of the Euclidean norm. Yamada, Tazaki, and Gray (1980) generalized the results to distortion measures that are increasing functions of the norm of their argument. In both cases, the distortion is uniquely determined by the vector quantization error, i.e., the Euclidean difference between the original vector and the codeword into which it is quantized. We generalize these asymptotic bounds to input-weighted quadratic distortion measures and measures that are approximately output-weighted-quadratic when the distortion is small, a class of distortion measures often claimed to be perceptually meaningful. An approximation of the asymptotic distortion based on Gersho's conjecture is derived as well. We also consider the problem of source mismatch, where the quantizer is designed using a probability density different from the true source density. The resulting asymptotic performance in terms of distortion increase in decibels is shown to be linear in the relative entropy between the true and estimated probability densities.

9.
In this paper, we propose gradient match fractal vector quantizers (GMFVQs) and side match fractal vector quantizers (SMFVQs), two classes of finite state fractal vector quantizers (FSFVQs), for image coding. In our previous work, we proposed the noniterative fractal block coding (FBC) technique to improve the decoding speed and the coding performance of conventional FBC techniques. To reduce the number of bits needed to denote the fractal code of a range block, the concepts of gradient match vector quantizers (GMVQs) and side match vector quantizers (SMVQs) are employed in the noniterative FBC technique. Unlike those of ordinary vector quantizers, the super codebooks in the proposed GMFVQs and SMFVQs are generated from the affine-transformed domain blocks of the noniterative FBC technique. The codewords in the state codebook are dynamically extracted from the super codebook under the side-match and gradient-match criteria. The redundancy in the affine-transformed domain blocks is greatly reduced and the compression ratio can be significantly increased. Our simulation results show that 15%-20% of the bit rate of the noniterative FBC technique is saved by using the proposed GMFVQs.

10.
Two results are presented on vector quantizers meeting necessary conditions for optimality. First, a simple generalization of well-known centroid and moment properties of the squared-error distortion measure to a weighted quadratic distortion measure with an input-dependent weighting is presented. The second result is an application of the squared-error special case of the first result to a simulation study of the design of 1-bit-per-sample two- and three-dimensional quantizers for a memoryless Gaussian source using the generalized Lloyd technique. The existence of multiple distinct local optima is demonstrated, thereby showing that sufficient conditions for unique local optima do not exist for this simple common case. It is also shown that at least three dimensions are required for a vector quantizer to outperform a scalar quantizer for this source.
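The generalized Lloyd technique referenced above simply alternates the two necessary conditions for optimality: assign each training vector to its nearest codeword, then move each codeword to the centroid of its cell. A compact 2-D sketch under squared error (names are mine); running it from different random seeds is exactly the kind of experiment that exposes distinct local optima:

```python
import random

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def generalized_lloyd(data, k, iters=20, seed=0):
    # Alternate the two necessary conditions for optimality:
    # nearest-neighbor partition, then centroid code vectors.
    rng = random.Random(seed)
    codebook = rng.sample(data, k)
    for _ in range(iters):
        cells = [[] for _ in range(k)]
        for x in data:
            i = min(range(k), key=lambda j: sq_dist(x, codebook[j]))
            cells[i].append(x)
        codebook = [
            tuple(sum(c) / len(cell) for c in zip(*cell)) if cell else codebook[i]
            for i, cell in enumerate(cells)
        ]
    mse = sum(min(sq_dist(x, c) for c in codebook) for x in data) / len(data)
    return codebook, mse
```

For a 2-D memoryless Gaussian source and k = 4 (1 bit per sample), different seeds can converge to different final distortions, consistent with the multiple-local-optima observation.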

11.
Multistage trellis-coded vector quantization (MS-TCVQ) is developed as a constrained trellis source-coding technique. The performance of the two-stage TCVQ is studied for Gaussian sources. Issues of stage-by-stage design, output alphabet selection, and complexity are addressed, with emphasis on selecting and partitioning the stage codebooks. For a given rate, MS-TCVQ achieves low encoding and storage complexity compared to TCVQ, and comparisons with same-dimensional multistage vector quantization indicate a 0.5-3 dB improvement in signal-to-quantization-noise ratio.
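The multistage principle behind MS-TCVQ (without the trellis) is that each later stage quantizes the residual left by the previous one, so complexity grows with the sum of stage codebook sizes rather than their product. A minimal scalar two-stage sketch, with illustrative names:

```python
def quantize(x, codebook):
    # Nearest codeword under squared error.
    return min(codebook, key=lambda c: (x - c) ** 2)

def two_stage_quantize(x, stage1, stage2):
    # Stage 2 quantizes the residual left by stage 1; the final
    # reproduction is the sum of the two stage codewords.
    y1 = quantize(x, stage1)
    y2 = quantize(x - y1, stage2)
    return y1 + y2
```

Because the second stage refines the first stage's error (and contains 0 as a codeword here), the two-stage reproduction is never worse than stage 1 alone, while the search cost is |stage1| + |stage2| instead of |stage1| × |stage2|.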

12.
Upper and lower bounds are presented for the distortion of the optimal N-point vector quantizer applied to k-dimensional signals. Under certain smoothness conditions on the source distribution, the bounds are shown to hold for each and every value of N, the codebook size. These results extend bounds derived in the high-resolution limit, which assume that the number of code vectors is arbitrarily large. Two approaches to the upper bound are presented. The first, an explicit construction, achieves the correct asymptotic rate of convergence as well as the correct dependence on the source density, although it leads to an inferior value for the constant. The second construction, based on a random coding argument, is shown to additionally achieve a value of the constant much closer to the best known result derived within the asymptotic theory. Lower bound results derived in this correspondence are again shown to possess the correct asymptotic form and yield a constant almost indistinguishable from the best value achieved in the asymptotic regime. Finally, application of the results to the problem of source coding yields upper bounds on the distortion-rate function for a wide class of processes.

13.
Several methods for evaluation of the complexity of data compression systems and for including complexity measures in the traditional rate-distortion analysis have been published in recent works. In this work, we indicate that the relationship between rate-distortion performance and complexity for some practical coding schemes—entropy-constrained vector quantization (ECVQ) and interpolative vector quantization (IVQ)—can be represented by affine models. For the same rate-distortion performance, the complexity of an interpolative vector quantizer is known to be significantly smaller than the complexity of a full-search entropy-constrained vector quantizer, and this complexity difference is a suitable illustration for the rate-distortion-complexity framework. We use high-resolution theory arguments to derive the affine models for ECVQ and IVQ. The proposed affine complexity modeling successfully predicts the cost of vector quantizers designed from data sets that were not used to generate the models.

14.
We study the design of channel-optimized vector quantizers in the presence of channel mismatch. We show that when the statistics of the channel bit-error rate (BER) are not known, a minimax solution is the one obtained by designing for the worst possible channel. Then, we consider the case when the probability density function of the channel BER is known and propose an algorithm that provides a minimum average distortion. Also, by using an estimate of the channel BER at the decoder, we develop a decoder-adaptive scheme that further improves the performance. In all cases, we have limited ourselves to table-lookup decoders, which entail very small computational complexity. Finally, the utilization of lookup tables at the encoder and the effects of imperfect estimation of channel BERs are considered.

15.
16.
The Lagrangian formulation of variable-rate vector quantization is known to yield useful necessary conditions for quantizer optimality and generalized Lloyd algorithms for quantizer design. The Lagrangian formulation is demonstrated to provide a convenient framework for analyzing the empirical design of variable-rate vector quantizers. In particular, the consistency of empirical design based on minimizing the Lagrangian performance over a stationary and ergodic training sequence is shown for sources with finite second moment. The finite-sample performance is also studied for independent training data and sources with bounded support.
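The Lagrangian formulation above replaces a rate constraint with a single cost J = D + λR: a variable-rate encoder picks the codeword minimizing distortion plus λ times its codeword length. A minimal sketch, with hypothetical codebook and rate values chosen only for illustration:

```python
def lagrangian_encode(x, codebook, rates, lam):
    # Variable-rate encoder: choose the index minimizing the
    # Lagrangian cost  (x - c)^2 + lambda * rate(c).
    costs = [(x - c) ** 2 + lam * r for c, r in zip(codebook, rates)]
    return min(range(len(costs)), key=costs.__getitem__)
```

At λ = 0 the encoder is a plain nearest-neighbor quantizer; as λ grows it increasingly favors codewords with short codes, trading distortion for rate, which is the mechanism the generalized Lloyd algorithms for entropy-constrained VQ iterate on.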

17.
We present a practical video coding algorithm for use at very low bit rates. For efficient coding at very low bit rates, it is important to intelligently allocate bits within a frame, so a powerful variable-rate algorithm is required. We use vector quantization to encode the motion-compensated residue signal in an H.263-like framework. For a given complexity, it is well understood that structured vector quantizers perform better than unstructured and unconstrained vector quantizers. A combination of structured vector quantizers is used in our work to encode the video sequences. The proposed codec is a multistage residual vector quantizer, with transform vector quantizers in the initial stages. The transform-VQ captures the low-frequency information using only a small portion of the bit budget, while the later-stage residual VQ captures the high-frequency information using the remaining bits. We adopt a strategy of adaptively refining only areas of high activity, using recursive decomposition and selective refinement in the later stages. An entropy constraint is used to modify the codebooks to allow better entropy coding of the indexes. We evaluate the performance of the proposed codec and compare it with that of the H.263-based codec. Experimental results show that the proposed codec delivers significantly better perceptual quality along with better quantitative performance.

18.
Signal Processing, 1987, 12(2): 169-176
It is known from the literature how to design optimal quantizers for input signals with a given probability density function. In this paper, optimal quantizers for input signals corrupted by noise are designed. In addition, a separation theorem shows that the optimal quantizer can be replaced by an estimator for the corrupted input signal followed by an optimal quantizer for the estimate. Furthermore, some numerical results for processes with a Gaussian probability density function are given.

19.
This paper discusses a criterion for testing a vector quantizer (VQ) codebook that is obtained by "training". When a VQ codebook is designed by a clustering algorithm using a training set, the "time-average" distortion, called the training-set distortion (TSD), is usually calculated in each iteration of the algorithm, since the input probability function is unknown in general and cumbersome to deal with. The algorithm stops when the TSD ceases to decrease significantly. To test the resultant codebook, the validating-set distortion (VSD) is calculated on a separate validating set (VS). Codebooks that yield a small difference between the TSD and the VSD are regarded as good ones. However, the difference VSD-TSD is not necessarily a desirable criterion for testing a trained codebook unless certain conditions are satisfied. A condition previously assumed to be important is that the VS must be quite large to approximate the source distribution well. This condition implies a greater computational burden for testing a codebook. In this paper, we first discuss the condition under which the difference VSD-TSD is a meaningful codebook testing criterion. Then, convergence properties of the VSD, a time-average quantity, are investigated. Finally, we show that for large codebooks, a VS size as small as the size of the codebook is sufficient to evaluate the VSD. This paper consequently presents a simple method to test trained codebooks for VQs. Experimental results on synthetic data and real images supporting the analysis are also provided and discussed.

20.
A lower bound is proposed for the mean-squared error of an n-dimensional vector quantizer with a large number of output points.
