Similar Documents
20 similar documents found.
1.
On optimal analysis/synthesis filters for coding gain maximization   (Cited 1 time: 0 self-citations, 1 by others)
We consider the use of pre- and postfilters in conjunction with M-channel, uniform-band paraunitary (orthonormal) filter banks. We show that given any orthonormal filter bank, the pre- and postfilters that maximize the coding gain are determined entirely by the power spectrum of the input process, regardless of the details of the orthonormal filter bank (which could be FIR, IIR, or even the ideal brickwall filter bank). The optimized coding gain, however, depends on the prefilter as well as the sandwiched orthonormal filter bank. The coding gain improvement due to pre- and postfiltering is often significant, as we demonstrate with numerical examples and comparisons. The validity of our results depends strongly on the orthonormality of the filter bank in between the pre- and postfilters; in the nonorthonormal case, most of these results no longer hold, as is also demonstrated.
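
The coding gain referred to above is the usual ratio of the arithmetic to the geometric mean of the transform (subband) variances. The sketch below is an illustration rather than a reproduction of the paper's derivation: it computes that gain for an orthonormal block transform applied to an AR(1) input, where the AR(1) model and the choice of the DCT are assumptions made only for the example.

```python
# Coding gain of an orthonormal transform coder: arithmetic over geometric
# mean of the coefficient ("subband") variances.  Illustrative sketch only.
import numpy as np
from scipy.fft import dct  # orthonormal DCT-II via norm="ortho"

def coding_gain_db(subband_vars):
    v = np.asarray(subband_vars, dtype=float)
    arithmetic_mean = v.mean()
    geometric_mean = np.exp(np.mean(np.log(v)))
    return 10.0 * np.log10(arithmetic_mean / geometric_mean)

# AR(1) source (assumed input model for this example)
rng = np.random.default_rng(0)
rho, n = 0.95, 1 << 16
x = np.zeros(n)
for k in range(1, n):
    x[k] = rho * x[k - 1] + rng.standard_normal()

# Block the signal into M-sample vectors and apply an orthonormal transform
M = 8
blocks = x[: n - n % M].reshape(-1, M)
coeffs = dct(blocks, norm="ortho", axis=1)          # M transform "subbands"
print("coding gain ≈ %.2f dB" % coding_gain_db(coeffs.var(axis=0)))
```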

2.
It is shown that postfiltering circuits based on higher-order LPC (linear predictive coding) models can provide very low distortion in terms of spectral tilt. Thus, they can provide better speech enhancement than circuits based on the backward-adaptive pole-zero predictor in ADPCM (adaptive differential pulse code modulation). Quantitative criteria for designing postfiltering circuits based on higher-order LPC models are discussed. These postfilters are particularly attractive for systems where high-order LPC analysis is an integral part of the coding algorithm. In a subjective test that used a computer-simulated version of these circuits, enhanced ADPCM obtained a mean opinion score of 3.6 at 16 kb/s.
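
The abstract does not spell out the postfilter circuit; a widely used short-term form built from a pth-order LPC polynomial A(z) is H(z) = A(z/β)/A(z/α) followed by a first-order tilt-compensation term. The sketch below shows only that generic structure; the parameter values α, β, μ and the gain normalisation are illustrative assumptions, not the paper's design.

```python
# A common LPC-based short-term postfilter, H(z) = A(z/beta) / A(z/alpha),
# followed by a first-order tilt-compensation filter (1 - mu*z^-1).
# Generic sketch of the structure discussed above, not the paper's circuit.
import numpy as np
from scipy.signal import lfilter

def lpc_postfilter(x, lpc, alpha=0.8, beta=0.5, mu=0.3):
    """x: decoded speech frame; lpc: [1, a1, ..., ap], the polynomial A(z)."""
    lpc = np.asarray(lpc, dtype=float)
    p = len(lpc) - 1
    num = lpc * beta ** np.arange(p + 1)    # A(z/beta): numerator (zeros)
    den = lpc * alpha ** np.arange(p + 1)   # A(z/alpha): denominator (poles)
    y = lfilter(num, den, x)
    y = lfilter([1.0, -mu], [1.0], y)       # simple spectral-tilt compensation
    # gain normalisation so the postfilter does not change the frame energy
    g = np.sqrt(np.sum(x ** 2) / max(np.sum(y ** 2), 1e-12))
    return g * y
```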

3.
We investigate the design of subband coders without the traditional perfect-reconstruction constraint on the filters. The coder uses scalar quantizers, and its filters and bit allocation are designed to optimize a rate-distortion criterion. Using convexity analysis, we show that optimality can be achieved using filterbanks that are the cascade of a (paraunitary) principal component filterbank for the input spectral process and a set of pre- and postfilters surrounding each quantizer. Analytical expressions for the pre- and postfilters are then derived. An algorithm for computing the globally optimal filters and bit allocation is given. We also develop closed-form solutions for the special case of two-channel coders under an exponential rate-distortion model. Finally, we investigate a constrained-length version of the filter design problem, which is applicable to practical coding scenarios. While the optimal filterbanks are nearly perfect-reconstruction at high rates, we demonstrate an apparently surprising advantage of optimal FIR filterbanks: they significantly outperform optimal perfect-reconstruction FIR filterbanks at all bit rates.
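
The bit-allocation side of such a rate-distortion optimization reduces, at high rates, to the classical rule b_k = R + (1/2) log2(σ_k² / GM), where GM is the geometric mean of the subband variances. A minimal sketch of that rule follows, with a crude non-negativity fix-up; the joint optimization of the filters themselves, which is the paper's contribution, is not reproduced.

```python
# High-rate bit allocation across subbands: b_k = R + 0.5*log2(var_k / GM).
# Minimal sketch of the classical rule only.
import numpy as np

def allocate_bits(variances, avg_rate):
    v = np.asarray(variances, dtype=float)
    b = avg_rate + 0.5 * np.log2(v / np.exp(np.mean(np.log(v))))
    # crude fix-up: clip negative allocations and spread the deficit uniformly
    while np.any(b < 0):
        pos = b > 0
        if not pos.any():
            return np.zeros_like(b)
        deficit = -b[~pos].sum()
        b[~pos] = 0.0
        b[pos] -= deficit / pos.sum()
    return b

print(allocate_bits([10.0, 4.0, 1.0, 0.25], avg_rate=2.0))
```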

4.
A new predictive coder, based on an estimation method which adapts to line and edge features in images, is described. Quantization of the prediction error is performed by a two-level adaptive scheme: an adaptive transform coder, and threshold coding in both transform and spatial domains. Control information, which determines the behavior of the predictor, is quantized using a simple variable rate technique. The results are improved by pre- and postfiltering using a related noncausal form of the estimator. Acceptable images have been produced in this way at bit rates of less than 0.5 bit/pixel.

5.
An entropy-constrained residual vector quantization design algorithm is used to design codebooks for image coding. Entropy-constrained residual vector quantization has several important advantages. It can outperform entropy-constrained vector quantization in terms of rate-distortion performance, memory, and computation requirements. It can also be used to design vector quantizers with relatively large vector sizes and high output rates. Experimental results indicate that good image reproduction quality can be achieved at relatively low bit rates. For example, a peak signal-to-noise ratio of 30.09 dB is obtained for the 512×512 Lena image at a bit rate of 0.145 bit/pixel.
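
For reference, the quoted figure uses the standard 8-bit-image definition PSNR = 10 log10(255²/MSE); the short sketch below only illustrates that formula, while the 30.09 dB value itself is the paper's reported result.

```python
# PSNR for 8-bit images: 10*log10(255^2 / MSE).
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    err = np.asarray(original, float) - np.asarray(reconstructed, float)
    return 10.0 * np.log10(peak ** 2 / np.mean(err ** 2))

# e.g. a PSNR of 30.09 dB corresponds to a mean-squared error of about
mse = 255.0 ** 2 / 10 ** (30.09 / 10)
print("MSE ≈ %.1f" % mse)   # ≈ 63.7
```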

6.
This paper presents several strategies to improve the performance of very low bit rate speech coders and describes a speech codec that incorporates these strategies and operates at an average bit rate of 1.2 kb/s. The encoding algorithm is based on several improvements to a mixed multiband excitation (MMBE) linear predictive coding (LPC) structure. A switched-predictive vector quantiser technique that outperforms previously reported schemes is adopted to encode the LSF parameters. Spectral and sound-specific low-rate models are used in order to achieve high-quality speech at low rates. An MMBE approach with three sub-bands is employed to encode voiced frames, while dedicated modelling and synthesis techniques for fricatives and stops are used for unvoiced frames. This strategy is shown to provide good-quality synthesised speech at a bit rate of only 0.4 kb/s for unvoiced frames. To reduce coding noise and improve the decoded speech, a spectral envelope restoration combined with noise reduction (SERNR) postfilter is used. The contributions of the techniques described in this paper are separately assessed and then combined in the design of a low bit rate codec that is evaluated against the North American Mixed Excitation Linear Prediction (MELP) coder. The performance assessment is carried out in terms of the spectral distortion of LSF quantisation, mean opinion score (MOS), A/B comparison tests, and the ITU-T P.862 perceptual evaluation of speech quality (PESQ) standard. Assessment results show that the improved methods for LSF quantisation, sound-specific modelling and synthesis, and the new postfiltering approach can significantly outperform previously reported techniques. Further results also indicate that a system combining the proposed improvements and operating at 1.2 kb/s is comparable to, and in fact slightly outperforms, a MELP coder operating at 2.4 kb/s. For tandem connection situations, the proposed system is clearly superior to the MELP coder.

7.
The transform and hybrid transform/DPCM methods of image coding are generalized to allow pyramid vector quantization of the transform coefficients. An asymptotic mean-squared error performance expression is derived for the pyramid vector quantizer and used to determine the optimum rate assignment for encoding the various transform coefficients. Coding simulations for two images at average rates of 0.5-1 bit/pixel demonstrate a 1-3 dB improvement in signal-to-noise ratio for the vector quantization approach in the hybrid coding, with more modest improvements in the transform coding. However, this improvement is quite noticeable in image quality, particularly in reducing "blockiness" in the low bit rate encoded images.
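
The pyramid vector quantizer maps each coefficient vector to the nearest point of the integer "pyramid" {y : Σ|y_i| = K}. The sketch below shows only that projection step under illustrative assumptions; the index enumeration and the optimum rate assignment derived in the paper are not reproduced.

```python
# Minimal pyramid vector quantization step: map x to an integer vector y
# with sum(|y_i|) == K (a point on the L1 "pyramid").  Sketch only.
import numpy as np

def pvq_quantize(x, K):
    x = np.asarray(x, dtype=float)
    s = np.sum(np.abs(x))
    if s == 0:
        y = np.zeros(len(x), dtype=int)
        y[0] = K
        return y
    target = K * x / s                    # lies exactly on the pyramid
    y = np.rint(target).astype(int)
    while np.sum(np.abs(y)) != K:         # repair rounding so the L1 norm is exactly K
        if np.sum(np.abs(y)) > K:
            i = np.argmax(np.abs(y) - np.abs(target))   # worst overshoot
            y[i] -= int(np.sign(y[i]))
        else:
            i = np.argmax(np.abs(target) - np.abs(y))   # worst undershoot
            y[i] += int(np.sign(target[i]))
    return y

print(pvq_quantize([0.7, -0.2, 0.05, 0.05], K=5))
```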

8.
9.
10.
Quantizers for block transform image coding systems are typically designed under the assumption of Gaussian statistics for the transform coefficients. While convincing arguments can be provided in support of this approach, empirical evidence is presented demonstrating that, except possibly for the dc term, wide departures from Gaussian behavior can be expected for real-world imagery at typical block sizes. In this paper we describe the performance of a block cosine image coding system with an adaptive quantizer matched to the statistics of the transform coefficients. The adaptive quantizer is based upon a recently developed algorithm which employs a training sequence in the design procedure. At encoding rates of approximately 1 bit/pixel and above, this approach results in significant improvement in reconstructed image quality compared to fixed quantization schemes designed under the Gaussian assumption. For rates much below 1 bit/pixel the relative improvement is negligible.
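
Training-sequence-based quantizer design of this kind is, in essence, a generalized Lloyd iteration; a minimal scalar version is sketched below as an illustration. The Laplacian training data stand in for AC transform coefficients and are an assumption of the example, not the paper's data or exact algorithm.

```python
# Generalized Lloyd (Lloyd-Max style) design of a scalar quantizer from a
# training sequence: alternate nearest-level partitioning and centroid update.
import numpy as np

def lloyd_scalar(training, n_levels, iters=50):
    t = np.sort(np.asarray(training, dtype=float))
    # initialise levels at evenly spaced sample quantiles
    levels = np.quantile(t, (np.arange(n_levels) + 0.5) / n_levels)
    for _ in range(iters):
        edges = (levels[:-1] + levels[1:]) / 2.0       # nearest-neighbour cell edges
        idx = np.searchsorted(edges, t)                # cell index of each sample
        levels = np.array([t[idx == k].mean() if np.any(idx == k) else levels[k]
                           for k in range(n_levels)])  # centroid update
    return levels

rng = np.random.default_rng(1)
ac = rng.laplace(scale=1.0, size=20000)   # AC coefficients are closer to Laplacian
print(np.round(lloyd_scalar(ac, 8), 3))
```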

11.
Universal trellis coded quantization   (Cited 2 times: 0 self-citations, 2 by others)
A new form of trellis coded quantization based on uniform quantization thresholds and "on-the-fly" quantizer training is presented. The universal trellis coded quantization (UTCQ) technique requires neither stored codebooks nor a computationally intense codebook design algorithm. Its performance is comparable with that of fully optimized entropy-constrained trellis coded quantization (ECTCQ) for most encoding rates. The codebook and trellis geometry of UTCQ are symmetric with respect to the trellis superset. This allows sources with a symmetric probability density to be encoded with a single variable-rate code. Rate allocation and quantizer modeling procedures are given for UTCQ, which allow access to continuous quantization rates. An image coding application based on adaptive wavelet coefficient subblock classification, arithmetic coding, and UTCQ is presented. The excellent performance of this coder demonstrates the efficacy of UTCQ. We also present a simple scheme to improve the perceptual performance of UTCQ for certain imagery at low bit rates. This scheme has the added advantage of being applied during image decoding, without the need to re-encode the original image.

12.
This paper presents a pre/postfiltering framework to reduce the reconstruction errors near block boundaries in wavelet-based image and video compression. Two algorithms are developed to obtain the optimal filter, based on a boundary filter bank and a polyphase structure, respectively. A low-complexity structure is employed to approximate the optimal solution. The performance of the proposed method in removing the JPEG 2000 tiling artifact and the jittering artifact of three-dimensional wavelet video coding is reported. Comparisons with other methods demonstrate the advantages of our pre/postfiltering framework.

13.
This paper describes the implementation of the recently introduced color set partitioning in hierarchical trees (CSPIHT)-based scheme for video coding. The intra- and interframe coding performance of a CSPIHT-based video coder (CVC) is compared against that of H.263 at bit rates lower than 64 kbit/s. The CVC performs comparably or better than H.263 at lower bit rates, whereas H.263 performs better than the CVC at higher bit rates. We identify areas that hamper the performance of the CVC and propose an improved scheme that yields better performance in image and video coding in low bit-rate environments.

14.
In this paper, we propose a new encoding algorithm for matching pursuit image coding. We show that coding performance is improved when correlations between atom positions and atom coefficients are both used in encoding. We find the optimum tradeoff between efficient atom position coding and efficient atom coefficient coding and optimize the encoder parameters. Our proposed algorithm outperforms the existing coding algorithms designed for matching pursuit image coding. Additionally, we show that our algorithm results in better rate-distortion performance than JPEG 2000 at low bit rates.
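
For background, the matching pursuit stage that produces the atom positions and coefficients being coded is the standard greedy decomposition sketched below; the random dictionary is purely illustrative, and the position/coefficient entropy coder proposed in the paper is not shown.

```python
# Minimal matching pursuit: greedily pick the dictionary atom with the largest
# inner product with the residual, record (index, coefficient), and subtract.
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """dictionary: (num_atoms, signal_len) array of unit-norm atoms."""
    residual = np.asarray(signal, dtype=float).copy()
    atoms = []
    for _ in range(n_atoms):
        corr = dictionary @ residual
        k = int(np.argmax(np.abs(corr)))      # best-matching atom
        atoms.append((k, corr[k]))            # position + coefficient to be coded
        residual -= corr[k] * dictionary[k]   # update residual
    return atoms, residual

rng = np.random.default_rng(2)
D = rng.standard_normal((64, 16))
D /= np.linalg.norm(D, axis=1, keepdims=True)   # unit-norm atoms
x = 3.0 * D[5] - 1.5 * D[40] + 0.05 * rng.standard_normal(16)
print(matching_pursuit(x, D, 3)[0])
```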

15.
The authors introduce a novel coding technique which significantly improves the performance of traditional vector quantisation (VQ) schemes at low bit rates. High interblock correlation in natural images results in a high probability that neighbouring image blocks are mapped to small subsets of the VQ codebook, which contain highly correlated codevectors. If, instead of the whole VQ codebook, a small subset is considered for the purpose of encoding neighbouring blocks, it is possible to improve the performance of traditional VQ schemes significantly. The performance improvement obtained with the new method is about 3 dB on average when compared with traditional VQ schemes at low bit rates. The method provides better performance than the JPEG coding standard at low bit rates, and gives results comparable to those of address VQ with much lower complexity.

16.
Space-time coding based on MIMO antenna systems is an attractive way to improve the performance of wireless communications and to raise the data rate of band-limited systems. However, because orthogonal space-time block codes cannot guarantee full-rate data transmission, an improved quasi-orthogonal space-time block code design is proposed. The proposed code exploits the quasi-orthogonality criterion and guarantees transmission at full rate. On this basis, the encoding and decoding algorithms and the error performance of the code are discussed and analysed in detail. The method neither reduces the diversity gain nor increases the decoding complexity, and it also achieves a certain coding gain. Simulation results show that its bit error rate is better than that of the existing quasi-orthogonal space-time block code, the Jafarkhani code, under both low and high signal-to-noise ratio conditions.
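
For reference, the baseline Jafarkhani quasi-orthogonal code against which the proposed scheme is measured sends four symbols from four antennas over four symbol periods using the construction sketched below. The sketch only builds the codeword matrix and checks its quasi-orthogonality; it is not the improved code proposed in the paper, and the test symbols are arbitrary.

```python
# Jafarkhani's rate-1 quasi-orthogonal STBC for four transmit antennas, built
# from two Alamouti blocks A = A(x1, x2), B = A(x3, x4) as [[A, B], [-B*, A*]].
# Reference construction only; the improved code discussed above is not shown.
import numpy as np

def alamouti(a, b):
    return np.array([[a, b], [-np.conj(b), np.conj(a)]])

def jafarkhani(x1, x2, x3, x4):
    A, B = alamouti(x1, x2), alamouti(x3, x4)
    # rows indexed by time slots, columns by transmit antennas
    return np.block([[A, B], [-np.conj(B), np.conj(A)]])

C = jafarkhani(1 + 1j, 2 - 1j, -1 + 0.5j, 0.5 - 2j)   # arbitrary complex symbols
G = C.conj().T @ C
# G is not a scaled identity: columns 1 & 4 and columns 2 & 3 remain coupled,
# which is exactly why the code is only *quasi*-orthogonal.
print(np.round(G, 2))
```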

17.
Subband coding (SBC) with vector quantization (VQ) has been shown to be an effective method for coding images at low bit rates. We split the image spectrum into seven nonuniform subbands. Threshold vector quantization (TVQ) and finite state vector quantization (FSVQ) methods are employed in coding the subband images by exploiting interband and intraband correlations. Our new SBC-FSVQ schemes have the advantages of the subband-VQ scheme while reducing the bit rate and improving the image quality. Experimental results are given and comparisons are made using our new scheme and some other coding techniques. In the experiments, it is found that SBC-FSVQ schemes achieve the best peak signal-to-noise ratio (PSNR) performance when compared to other methods at the same bit rate.

18.
For wavelet-based image coding, it is important to efficiently code the significance map for each bit plane. The proposed algorithm provides an effective method for coding the significance map by using block-based zerotree and quadtree decomposition, and demonstrates good rate-distortion (R-D) performance, especially at low bit rates and for images with rich texture.
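
The block-based significance test behind such zerotree/quadtree coding can be sketched as a recursive split: a block with no coefficient at or above the current bit-plane threshold is coded with a single 0, otherwise a 1 is emitted and the block is split into four quadrants. The sketch below is illustrative only and does not reproduce the proposed coder's exact symbol stream or contexts.

```python
# Quadtree coding of a significance map for one bit plane: emit 0 for a block
# with no significant coefficient, else emit 1 and recurse into four quadrants.
import numpy as np

def code_significance(block, threshold, bits):
    if not np.any(np.abs(block) >= threshold):
        bits.append(0)                      # whole block insignificant
        return
    bits.append(1)
    h, w = block.shape
    if h == 1 and w == 1:                   # single significant coefficient
        return
    hh, hw = max(h // 2, 1), max(w // 2, 1)
    for quad in (block[:hh, :hw], block[:hh, hw:],
                 block[hh:, :hw], block[hh:, hw:]):
        if quad.size:
            code_significance(quad, threshold, bits)

coeffs = np.array([[ 0,  0, 34,  0],
                   [ 0,  0,  0,  0],
                   [ 0, -5,  0,  0],
                   [ 0,  0,  0,  0]])
bits = []
code_significance(coeffs, threshold=32, bits=bits)
print(bits)   # short bit stream locating the single significant coefficient
```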

19.
Weighted universal image compression   (Cited 1 time: 0 self-citations, 1 by others)
We describe a general coding strategy leading to a family of universal image compression systems designed to give good performance in applications where the statistics of the source to be compressed are not available at design time or vary over time or space. The basic approach considered uses a two-stage structure in which the single source code of traditional image compression systems is replaced with a family of codes designed to cover a large class of possible sources. To illustrate this approach, we consider the optimal design and use of two-stage codes containing collections of vector quantizers (weighted universal vector quantization), bit allocations for JPEG-style coding (weighted universal bit allocation), and transform codes (weighted universal transform coding). Further, we demonstrate the benefits to be gained from the inclusion of perceptual distortion measures and optimal parsing. The strategy yields two-stage codes that significantly outperform their single-stage predecessors. On a sequence of medical images, weighted universal vector quantization outperforms entropy-coded vector quantization by over 9 dB. On the same data sequence, weighted universal bit allocation outperforms a JPEG-style code by over 2.5 dB. On a collection of mixed text and image data, weighted universal transform coding outperforms a single, data-optimized transform code (which gives performance almost identical to that of JPEG) by over 6 dB.
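
The two-stage structure can be illustrated compactly: the first stage selects, per block, the member of a code family that minimizes a Lagrangian distortion-plus-rate cost and signals that choice as side information, and the second stage encodes the block with the chosen code. The sketch below uses a family of random VQ codebooks as a stand-in; it shows the structure only, not the optimal design procedure developed in the paper.

```python
# Two-stage "weighted universal" encoding sketch: stage 1 chooses, per block,
# the codebook that minimizes distortion + lambda * rate (codebook index is
# side information); stage 2 quantizes the block with the chosen codebook.
import numpy as np

def encode_block(block, codebooks, lam):
    """Return (codebook index, codevector index) minimizing D + lam * R."""
    best = None
    side_info_bits = np.log2(len(codebooks))            # stage-1 index cost
    for cb_idx, cb in enumerate(codebooks):
        d = np.sum((cb - block) ** 2, axis=1)           # distortion to each codevector
        j = int(np.argmin(d))
        rate = side_info_bits + np.log2(len(cb))        # side info + stage-2 index
        cost = d[j] + lam * rate
        if best is None or cost < best[0]:
            best = (cost, cb_idx, j)
    return best[1], best[2]

rng = np.random.default_rng(3)
family = [rng.standard_normal((size, 4)) for size in (4, 16, 64)]   # 3 codebooks
block = np.array([1.8, -2.2, 0.4, 1.1])
print(encode_block(block, family, lam=0.1))
```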

20.
Image compression through embedded multiwavelet transform coding   (Cited 17 times: 0 self-citations, 17 by others)
In this paper, multiwavelets are considered in the context of image compression, and two orthonormal multiwavelet bases are tested, each used in connection with its proper prefilter. To evaluate the effectiveness of the multiwavelet transform for coding images at low bit rates, an efficient embedded coding of the multiwavelet coefficients has been realized. The performance of this multiwavelet-based coder is compared with the results obtained for scalar wavelets.
