Similar Literature
20 similar documents found (search time: 609 ms)
1.
Region-of-Interest Image Coding Based on Partial Bitplane Alternating Shift   (Cited by: 3; self-citations: 0; other citations: 3)
张立保  王珂  李光鑫 《光电子.激光》2006,17(3):356-360,367
A region-of-interest (ROI) image coding method based on partial bitplane alternating shift, PBAShift, is proposed. Four strategies are used to improve coding efficiency: 1) the most significant ROI bitplanes are shifted up above the highest background (BG) bitplane, guaranteeing that they are encoded first; 2) the most significant BG bitplanes are shifted alternately with ROI bitplanes of intermediate significance; 3) BG bitplanes of intermediate significance and the least significant ROI bitplanes are left unshifted; 4) the least significant BG bitplanes are shifted down. Experiments show that PBAShift not only supports single-ROI coding without requiring ROI shape information, but also supports multi-ROI coding with different degrees of interest, which is of value for future medical and remote-sensing image compression.
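The bitplane-shift idea underlying this family of methods can be sketched in miniature with the simplest member, JPEG2000's Maxshift (PBAShift refines it with partial, alternating shifts). The helper names and toy coefficients below are illustrative, not from the paper:

```python
# Minimal sketch of the generic Maxshift idea behind ROI bitplane-scaling
# schemes. All names are hypothetical; this is not the paper's PBAShift.

def max_background_bitplanes(coeffs, roi_mask):
    """Number of bitplanes used by the largest background coefficient."""
    bg = [abs(c) for c, in_roi in zip(coeffs, roi_mask) if not in_roi]
    return max(bg, default=0).bit_length()

def maxshift_encode(coeffs, roi_mask):
    """Scale every ROI coefficient above all background bitplanes."""
    s = max_background_bitplanes(coeffs, roi_mask)
    shifted = [c << s if in_roi else c for c, in_roi in zip(coeffs, roi_mask)]
    return shifted, s

def maxshift_decode(shifted, s):
    """Any magnitude at or above bitplane s must be ROI: shift it back down."""
    return [c >> s if abs(c) >= (1 << s) else c for c in shifted]

coeffs = [3, 12, 7, 2]              # toy quantized coefficient magnitudes
roi    = [False, True, True, False]
shifted, s = maxshift_encode(coeffs, roi)
assert maxshift_decode(shifted, s) == coeffs  # no shape info transmitted
```

Because the decoder recognizes ROI coefficients purely by magnitude, no ROI shape mask needs to be sent, which is the property the abstract highlights.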

2.
JPEG2000 encodes and decodes images bitplane by bitplane, so any data loss in the codestream corrupts the remaining bitplanes and the corresponding wavelet coefficients. The JPEG2000 standard handles this by replacing the corrupted wavelet coefficients with zeros, but this replacement affects many non-zero coefficients and loses high-frequency content. This paper proposes a new error-concealment algorithm that exploits the structure of the JPEG2000 codestream: corrupted bitplane data are recovered from inter-subband information and from the undamaged bitplanes. Experimental results show that the proposed algorithm improves SNR by 1-3 dB over existing algorithms.

3.
In a distributed video encoder, the number of parity bits required by Turbo-code error correction directly determines the rate-distortion (RD) performance of the whole encoder. This paper analyzes the suboptimality of the conventional approach and proposes a new channel-likelihood computation algorithm based on bit-probability optimization. The original pixels are first decomposed into bitplanes, and each bitplane is Turbo-encoded independently and decoded jointly. Conditioned on the already-decoded bitplanes, the decoder computes more accurate channel likelihoods for the current bitplane as the Turbo decoder input, reducing the number of parity bits that must be transmitted and improving RD performance. Experimental results show that the proposed algorithm clearly improves the RD performance of the coding system.

4.
A vector quantizer maps a k-dimensional vector into one of a finite set of output vectors or "points". Although certain lattices have been shown to have desirable properties for vector quantization applications, there are as yet no algorithms available in the quantization literature for building quantizers based on these lattices. An algorithm for designing vector quantizers based on the root lattices A_n, D_n, and E_n and their duals is presented. Also, a coding scheme that has general applicability to all vector quantizers is presented. A four-dimensional uniform vector quantizer is used to encode Laplacian and gamma-distributed sources at entropy rates of one and two bits/sample and is demonstrated to achieve performance that compares favorably with the rate distortion bound and other scalar and vector quantizers. Finally, an application using uniform four- and eight-dimensional vector quantizers for encoding the discrete cosine transform coefficients of an image at 0.5 bit/pel is presented, which visibly illustrates the performance advantage of vector quantization over scalar quantization.
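For the root lattice D_n (integer vectors with even coordinate sum), the classic nearest-point rule is simple enough to sketch: round each coordinate, and if the resulting coordinate sum is odd, re-round the coordinate with the largest rounding error the other way. This is a standard textbook procedure, shown here as an illustration rather than the paper's full design algorithm:

```python
import math

# Nearest-point quantization onto the root lattice D_n: all integer
# vectors whose coordinates sum to an even number.

def round_half_away(x):
    """Round to the nearest integer, ties away from zero."""
    return math.floor(x + 0.5) if x >= 0 else math.ceil(x - 0.5)

def nearest_Dn(x):
    """Return the D_n lattice point closest to the real vector x."""
    f = [round_half_away(v) for v in x]
    if sum(f) % 2 == 0:
        return f
    # Coordinate sum is odd: flip the rounding of the worst coordinate.
    k = max(range(len(x)), key=lambda i: abs(x[i] - f[i]))
    f[k] += 1 if x[k] >= f[k] else -1
    return f

assert nearest_Dn([0.9, 0.4, 0.1, -0.2]) == [1, 1, 0, 0]
assert sum(nearest_Dn([0.3, 0.8, -1.6])) % 2 == 0
```

Plain coordinate-wise rounding gives [1, 0, 0, 0] in the first example, which is not in D_4; re-rounding the largest-error coordinate yields the valid nearest lattice point.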

5.
In this paper, we investigate bit allocation strategies for a class of embedded wavelet video encoders. They take advantage of the precise control that such coders have over the bit-rate of each frame. We first show that a piecewise-linear model suits the rate × distortion characteristics of these encoders better than an exponential model, especially in low bit-rate applications. Then, we use an effective iterative procedure for dealing with the problem of frame dependency which yields improved rate × distortion results. Two types of embedded wavelet coders, using scalar and vector quantization, are tested. The results are encouraging, showing that the adoption of an adequate rate-control strategy can improve both objective and subjective quality of video sequences encoded using such embedded wavelet video encoders.

6.
Vector quantization (VQ) is an efficient data compression technique for low bit rate applications. However, the major disadvantage of VQ is that its encoding complexity increases dramatically with bit rate and vector dimension. Even though one can use a modified VQ, such as the tree-structured VQ, to reduce the encoding complexity, it is practically infeasible to implement such a VQ at a high bit rate or for large vector dimensions because of the huge memory requirement for its codebook and the very large training sequence requirement. To overcome this difficulty, a structurally constrained VQ called the sample-adaptive product quantizer (SAPQ) has recently been proposed. We extensively study the SAPQ that is based on scalar quantizers in order to exploit the simplicity of scalar quantization. Through an asymptotic distortion result, we discuss the achievable performance and the relationship between distortion and encoding complexity. We illustrate that even when SAPQ is based on scalar quantizers, it can provide VQ-level performance. We also provide numerical results that show a 2-3 dB improvement over the Lloyd-Max (1982, 1960) quantizers for data rates above 4 b/point.
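The Lloyd-Max baseline mentioned here is itself easy to sketch: Lloyd's algorithm alternates nearest-neighbor partitioning with centroid updates on a training set. The sketch below is a minimal scalar version under illustrative parameter choices, not the paper's SAPQ:

```python
import random

# Lloyd's algorithm for a scalar (Lloyd-Max style) quantizer trained on
# samples. Illustrative sketch only; initialization and iteration counts
# are arbitrary choices, not taken from the paper.

def lloyd_scalar(samples, levels, iters=50):
    xs = sorted(samples)
    # Initialize codewords at evenly spaced sample quantiles.
    code = [xs[(2 * i + 1) * len(xs) // (2 * levels)] for i in range(levels)]
    for _ in range(iters):
        # Partition samples by nearest codeword, then move each codeword
        # to the centroid of its cell.
        cells = [[] for _ in range(levels)]
        for x in xs:
            j = min(range(levels), key=lambda i: (x - code[i]) ** 2)
            cells[j].append(x)
        code = [sum(c) / len(c) if c else code[i] for i, c in enumerate(cells)]
    return code

def mse(samples, code):
    return sum(min((x - c) ** 2 for c in code) for x in samples) / len(samples)

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(5000)]
code4 = lloyd_scalar(data, 4)
assert mse(data, code4) < mse(data, [-1.0, 1.0])  # 4 levels beat 2 levels
```

For a unit-variance Gaussian source the trained 4-level quantizer lands near the classical Lloyd-Max distortion of roughly 0.12, well below any 2-level codebook.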

7.
A General Partial Bitplane Shift Method for Region-of-Interest Coding   (Cited by: 10; self-citations: 1; other citations: 9)
梁燕 刘文耀 郑伟 《光电子.激光》2004,15(11):1334-1338,1342
A general partial bitplane shift method (GPBShift) is proposed that overcomes the limitations of the two standard region-of-interest (ROI) coding methods defined in JPEG2000. Unlike the standard methods, which shift all bitplanes by a single scaling value, GPBShift partitions the ROI and background (BG) bitplanes into two parts each and shifts them differently, controlling the relative importance of the ROI and the BG. GPBShift is compatible with the Maxshift, GBbShift, and PSBShift methods and offers greater flexibility than all three. It can encode ROIs of arbitrary shape without transmitting any shape information, and by choosing the shift values it can flexibly adjust the relative compression quality of the ROI and the BG. In addition, it can encode multiple ROIs with different priorities. Experimental results show that at low bit rates the method delivers better visual quality than Maxshift, and higher coding efficiency than the general scaling based method of the standard.

8.
A still-image encoder based on vector quantization (VQ) has been developed using 0.35-μm triple-metal CMOS technology for encoding a high-resolution still image. The chip employs the needless calculation elimination method and the adaptive resolution VQ (AR-VQ) technique. The needless calculation elimination method can reduce the computational cost of VQ encoding to 40% or less of full-search VQ encoding, while maintaining the accuracy of full-search VQ. AR-VQ realizes a compression ratio of over 1/200 while maintaining image quality. The processor can compress a still image of 1600 × 2400 pixels within 1 s and operates at 66 MHz with power dissipation of 660 mW under a 2.5-V power supply, which is 1000 times the performance per unit power of a software implementation on current PCs.

9.
Based on the imaging principle and characteristics of interferometric multispectral images, a lossless compression algorithm for such images is proposed. The encoder fully exploits the column correlation of the image, applying column-based bitplane coding and run-length coding; it is particularly suitable for low-resolution multispectral images. Current lossless algorithms typically achieve compression ratios between 1.6 and 2.4, whereas the proposed algorithm generally exceeds a factor of 2 and also offers good resilience to bit errors.
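The run-length step in schemes like this can be sketched generically: a column of bitplane bits collapses to (bit, run-length) pairs, which compress well when the column correlation produces long runs. A minimal, self-inverse sketch (not the paper's exact coder):

```python
# Run-length coding of one bitplane column: long same-bit runs, which
# column-correlated imagery produces, shrink to a few (bit, count) pairs.
# Illustrative sketch only; the paper's actual entropy coder may differ.

def rle_encode(bits):
    runs, prev, n = [], bits[0], 1
    for b in bits[1:]:
        if b == prev:
            n += 1
        else:
            runs.append((prev, n))
            prev, n = b, 1
    runs.append((prev, n))
    return runs

def rle_decode(runs):
    return [b for b, n in runs for _ in range(n)]

col = [0, 0, 0, 1, 1, 0, 1, 1, 1, 1]   # toy bitplane column
runs = rle_encode(col)
assert runs == [(0, 3), (1, 2), (0, 1), (1, 4)]
assert rle_decode(runs) == col          # lossless round trip
```

The round trip being exact is what makes the scheme usable inside a lossless pipeline.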

10.
The wavelet transform, which provides a multiresolution representation of images, has been widely used in image compression. A new image coding scheme using the wavelet transform and classified vector quantisation is presented. The input image is first decomposed into a hierarchy of three layers containing ten subimages by the discrete wavelet transform. The lowest resolution low frequency subimage is scalar quantised with 8 bits/pixel. The high frequency subimages are compressed by classified vector quantisation to utilise the cross-correlation among different resolutions while reducing the edge distortion and computational complexity. Vectors are constructed by combining the corresponding wavelet coefficients of different resolutions in the same orientation and classified according to the magnitude and the position of wavelet transform coefficients. Simulation results show that the proposed scheme performs better than current scalar or vector quantisation schemes.

11.
This paper analyzes mathematically the effect of quantizer threshold imperfection commonly encountered in the circuit implementation of analog-to-digital (A/D) converters such as pulse code modulation (PCM) and sigma-delta (ΣΔ) modulation. ΣΔ modulation, which is based on coarse quantization of oversampled (redundant) samples of a signal, enjoys a type of self-correction property for quantizer threshold errors (bias) that is not shared by PCM. Although "classical" ΣΔ modulation is inferior to PCM in the rate-distortion sense, this robustness feature is believed to be one of the reasons why ΣΔ modulation is preferred over PCM in A/D converters with imperfect quantizers. Motivated by these facts, other encoders are constructed in this paper that use redundancy to obtain a similar self-correction property, but that achieve higher order accuracy relative to bit rate compared to classical ΣΔ. More precisely, two different types of encoders are introduced that exhibit exponential accuracy in the bit rate (in contrast to the polynomial-type accuracy of classical ΣΔ) while possessing the self-correction property.
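The classical first-order ΣΔ loop the abstract contrasts against is a few lines of code: the quantization error is accumulated and fed back, so the running average of the coarse 1-bit output tracks the input. A minimal sketch (not one of the paper's new higher-order encoders):

```python
# First-order sigma-delta modulator with a 1-bit quantizer. The
# integrator feeds the quantization error back, so the time-average of
# the output bits converges to the (oversampled) input value.

def sigma_delta(samples):
    u = 0.0                 # integrator state (accumulated error)
    bits = []
    for x in samples:
        u += x
        q = 1.0 if u >= 0 else -1.0   # coarse 1-bit quantizer
        bits.append(q)
        u -= q              # feedback of the quantized value
    return bits

# Oversample a constant input of 0.25: the bit average recovers it.
n = 1000
bits = sigma_delta([0.25] * n)
avg = sum(bits) / n
assert abs(avg - 0.25) < 0.01
```

Because only the sign of the accumulated error matters, a constant bias in the comparator threshold is largely averaged out, which is the self-correction property discussed above.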

12.
It has been shown that with perfect feedback (CSIT), the optimal multiple input/multiple output (MIMO) transmission strategy is a cascade of channel encoder banks, power control matrix, and eigen-beamforming matrix. However, the feedback capacity requirement for perfect CSIT is 2 n_T × n_R, which is not scalable with respect to n_T or n_R. In this letter, we shall compare the performance of two levels of partial power-feedback strategies, namely, the scalar symmetric feedback and the vector feedback, for MIMO block fading channels. Unlike quasi-static fading, variable rate encoding is not needed for block fading channels to achieve the optimal channel capacity.

13.
We study construction of structured regular quantizers for overcomplete expansions in ℝ^N. Our goal is to design structured quantizers which allow simple reconstruction algorithms with low complexity and which have good performance in terms of accuracy. Most related work to date in quantized redundant expansions has assumed that the same uniform scalar quantizer was used on all the expansion coefficients. Several approaches have been proposed to improve the reconstruction accuracy, with some of these methods having significant complexity. Instead, we consider the joint design of the overcomplete expansion and the scalar quantizers (allowing different step sizes) in such a way as to produce an equivalent vector quantizer (EVQ) with periodic structure. The construction of a periodic quantizer is based on lattices in ℝ^N and the concept of geometrically scaled-similar sublattices. The periodicity makes it possible to achieve good accuracy using simple reconstruction algorithms (e.g., linear reconstruction or a small lookup table).

14.
We investigate the system performance of rational harmonic mode-locking in an erbium-doped fiber ring laser using the phase-plane technique of nonlinear control engineering. Contributions from harmonic distortion, a Gaussian-like modulating signal, and its duty cycle to the system behavior are studied. We also demonstrate 660× and 1230× repetition-rate multiplication on a 100-MHz pulse train generated from an actively harmonically mode-locked fiber ring laser, and we hence achieve 66- and 123-GHz pulse operation by using the above-mentioned technique. It is found that the maximum obtainable rational harmonic order is limited by the harmonic distortion of the system as well as the pulse width of the generated signal, which in turn is determined by the duty cycle of the modulating signal. Furthermore, the rational harmonic order increases the complexity of the pulse formation process and hence challenges its stability.

15.
Bitplane coding is a common strategy used in current image coding systems to perform lossy, or lossy-to-lossless, compression. There exist several studies and applications employing bitplane coding that require estimators to approximate the distortion produced when data are successively coded and transmitted. Such estimators usually assume that coefficients are uniformly distributed in the quantization interval. Even though this assumption simplifies estimation, it does not exactly correspond with the nature of the signal. This work introduces new estimators to approximate the distortion produced by the successive coding of transform coefficients in bitplane image coders, which have been determined through a precise approximation of the coefficients' distribution within the quantization intervals. Experimental results obtained in three applications suggest that the proposed estimators are able to approximate distortion with very high accuracy, providing a significant improvement over state-of-the-art results.
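The uniform assumption this work improves on has a closed form: for coefficients uniform in an interval of width Δ reconstructed at the midpoint, the expected squared error is Δ²/12. A small Monte Carlo sketch (interval bounds and sample sizes are illustrative) checks that formula and shows how a Laplacian-like coefficient distribution, which leans toward zero within the interval, departs from it:

```python
import random

# The conventional estimator assumes coefficients are uniform in the
# quantization interval, giving expected squared error delta**2 / 12 for
# midpoint reconstruction. Transform coefficients are closer to
# Laplacian, so the true in-interval distortion differs.
# Interval choice and sample counts here are arbitrary illustrations.

def midpoint_mse(samples, lo, delta):
    mid = lo + delta / 2
    return sum((x - mid) ** 2 for x in samples) / len(samples)

random.seed(1)
lo, delta = 0.0, 2.0
est = delta ** 2 / 12                 # uniform-model estimate (= 1/3 here)

uni = [random.uniform(lo, lo + delta) for _ in range(200000)]
assert abs(midpoint_mse(uni, lo, delta) - est) < 5e-3  # formula checks out

# One-sided Laplacian (exponential) coefficients restricted to the same
# interval concentrate near zero, so the uniform model underestimates.
lap = [x for x in (random.expovariate(1.0) for _ in range(200000))
       if x < lo + delta]
assert midpoint_mse(lap, lo, delta) > est
```

The gap in the second check (about 0.37 versus 1/3 for this interval) is exactly the kind of model mismatch that motivates estimators fitted to the coefficients' actual in-interval distribution.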

16.
In designing a vector quantizer using a training sequence (TS), the training algorithm tries to find an empirically optimal quantizer that minimizes the selected distortion criteria using the sequence. In order to evaluate the performance of the trained quantizer, we can use the empirically minimized distortion that we obtain when designing the quantizer. Several upper bounds on the empirically minimized distortions are proposed with numerical results. The bound holds pointwise, i.e., for each distribution with finite second moment in a class. From the pointwise bounds, it is possible to derive the worst case bound, which is better than the current bounds for practical training ratio β, the ratio of the TS size to the codebook size. It is shown that the empirically minimized distortion underestimates the true minimum distortion by more than a factor of (1-1/m), where m is the sequence size. Furthermore, through an asymptotic analysis in the codebook size, a multiplication factor [1 - (1 - e^(-β))/β] ≈ (1 - 1/β) for an asymptotic bound is shown. Several asymptotic bounds in terms of the vector dimension and the type of source are also introduced.

17.
This paper reviews various concepts and solutions of time-invariant and time-varying multirate filter banks. It discusses their performance for image and video coding at low bit rates, and their applicability in the MPEG-4 framework. Time-invariant multirate filter banks, and methods of design with different criteria appropriate for signal compression, are first presented. Several quantization procedures, namely scalar and lattice vector quantization with bit allocation optimized in the rate-distortion sense, are used for encoding the subband signals. A rate-constrained lattice vector quantization (RC-LVQ) technique, combined with a three-component entropy coding and psychovisual distortion-weighting mechanisms, yields significant visual improvements over scalar quantization or the zerotree technique. However, time-invariant multirate filter banks, although efficient in terms of compression, are not well suited for content-based functionalities. Content-based features may require the ability to manipulate, and thus encode, a given region in the scene independently of the neighbouring regions, hence the use of transformations that can be adapted to bounded supports of arbitrary size. Also, to increase compression efficiency, one may want to adapt the transformation to the region characteristics, and thus use transform-switching mechanisms with soft or hard transitions. Three main classes of transformations can address these problems: shape-adaptive block transforms, transforms relying on signal extensions, and transforms relying on time-varying multirate filter banks. These various solutions, with their methods of design, are reviewed. Emphasis is put on an extension of the SDF (symmetric delay factorization) technique, which opens new perspectives in the design of time-bounded and time-varying filter banks.
A region-adapted rate-distortion quantization algorithm has been used to evaluate the transformations' compression efficiency. The coding results illustrate the interest of these techniques for compression, but also for features such as quality scalability applied to selected regions of the image.

18.
Scanning orders of bitplane image coding engines are commonly envisaged from theoretical or experimental insights and assessed in practice in terms of coding performance. This paper evaluates classic scanning strategies of modern bitplane image codecs using several theoretical-practical mechanisms conceived from rate-distortion theory. The use of these mechanisms allows distinguishing those features of the bitplane coder that are essential from those that are not. This discernment can aid the design of new bitplane coding engines with some special purposes and/or requirements. To emphasize this point, a low-complexity scanning strategy is proposed. Experimental evidence illustrates, assesses, and validates the proposed mechanisms and scanning orders.

19.
During the last decade, there has been increasing interest in the design of very fast wavelet image encoders for specific applications like interactive real-time image and video systems running on power-constrained devices such as digital cameras and mobile phones, where coding delay and/or available computing resources (working memory and processing power) are critical for proper operation. To reduce complexity, most of these fast wavelet image encoders are non-(SNR)-embedded and, as a consequence, do not support precise rate control. In this work, we propose some simple rate-control algorithms for this kind of encoder and analyze their impact to determine whether, despite their inclusion, the global encoder remains competitive with popular embedded encoders like SPIHT and JPEG2000. We focus on the non-embedded LTW encoder, showing that the increase in complexity due to the rate-control algorithm keeps LTW competitive with SPIHT and JPEG2000 in terms of R/D performance, coding delay, and memory consumption.

20.
A CMOS image scanning signal processor which can be used for CCITT Group-4 facsimile has been developed. To obtain high-speed processing (5 MHz) and high-precision shading distortion correction (up to 70%), a hybrid architecture of digital and analog techniques and parameter setting by software are combined. Image sensor and printer interfaces and a digital processor which can do linear zooming and data format conversion are built into a chip. The 6.5 × 7.8 mm chip is fabricated using 2.5-μm CMOS technology and contains 25000 transistors.
