Similar Documents
20 similar documents found (search time: 31 ms)
1.
Consider a system that quantizes and encodes analog data for transmission across an additive Gaussian noise channel. To minimize distortion, the channel code rate must be chosen to optimally allocate the available transmission rate between lossy source coding and block channel coding. We establish tight upper and lower bounds on the channel code rate that minimizes the average distortion of a vector quantizer cascaded with a channel coder and a Gaussian channel, thus extending some recently obtained results for the binary-symmetric channel. The upper bounds are obtained by averaging over all possible index assignments, whereas the lower bounds are uniform over them. Analytic expressions are derived for large and small signal-to-noise ratios, and also for large source vector dimension. As in the binary-symmetric channel case, the optimal channel code rate is often substantially smaller than the channel capacity, and the distortion decays exponentially with the number of channel uses. Exact exponents are derived.
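The rate-allocation tradeoff described in this abstract can be illustrated with a toy model (a hypothetical sketch, not the paper's actual bounds): the source-coding distortion of a high-resolution quantizer decays in the channel code rate r, while the channel decoding-error contribution decays in the unused rate margin below capacity. Sweeping r shows the minimizing rate falling well below capacity:

```python
# Toy model (illustrative assumptions, not the paper's derivation):
# n channel uses are split between lossy source coding and channel coding.
def end_to_end_distortion(r, n=100, capacity=0.5):
    """Distortion at channel code rate r: a high-resolution quantizer term
    plus a crude random-coding bound on the decoding-error contribution."""
    if r <= 0 or r >= capacity:
        return float("inf")
    source_term = 2.0 ** (-2 * r * n)            # quantizer MSE, rate r*n bits
    channel_term = 2.0 ** (-n * (capacity - r))  # decoding-error contribution
    return source_term + channel_term

rates = [i / 1000 for i in range(1, 500)]
best_rate = min(rates, key=end_to_end_distortion)
print(best_rate)  # the optimum sits well below the capacity of 0.5
```

Under this model the two exponents balance at r = 0.17, far below the capacity of 0.5, mirroring the abstract's conclusion that the distortion-minimizing channel code rate is substantially smaller than capacity.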

2.
The Hadamard transform: a tool for index assignment
We show that the channel distortion for maximum-entropy encoders, due to noise on a binary-symmetric channel, is minimized if the vector quantizer can be expressed as a linear transform of a hypercube. The index assignment problem is regarded as a problem of linearizing the vector quantizer. We define classes of index assignments with related properties, within which the best index assignment is found by sorting, not searching. Two powerful algorithms for assigning indices to the codevectors of nonredundant coding systems are presented. One algorithm finds the optimal solution in terms of linearity, whereas the other finds a very good, but suboptimal, solution in a very short time.
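The linearity view can be checked numerically. A codebook is an affine transform of the hypercube exactly when its Walsh-Hadamard spectrum is confined to the constant and first-order coefficients; the sketch below (an illustration of the idea, not the paper's algorithm) computes the spectrum of a uniform codebook ordered by the natural binary code:

```python
def walsh_hadamard(values):
    """Fast Walsh-Hadamard transform (natural ordering) of a length-2^n
    sequence; coefficient m equals sum_i (-1)^popcount(i & m) * v_i."""
    vals = list(values)
    h = 1
    while h < len(vals):
        for start in range(0, len(vals), 2 * h):
            for j in range(start, start + h):
                a, b = vals[j], vals[j + h]
                vals[j], vals[j + h] = a + b, a - b
        h *= 2
    return vals

# A uniform codebook indexed by the natural binary code is affine in the
# index bits, so every coefficient at a mask of Hamming weight >= 2 vanishes.
spectrum = walsh_hadamard(range(8))
print(spectrum)  # -> [28, -4, -8, 0, -16, 0, 0, 0]
```

Only the coefficients at masks 0, 1, 2, and 4 (weight at most one) are nonzero, confirming that this index assignment "linearizes" the quantizer in the sense of the abstract.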

3.
Scalar quantizers with uniform decoders and channel-optimized encoders are studied for a uniform source on [0,1] and binary symmetric channels. Two families of affine index assignments are considered: the complemented natural code (CNC), introduced here, and the natural binary code (NBC). It is shown that the NBC never induces empty cells in the quantizer encoder, whereas the CNC can. Nevertheless, we show that the asymptotic distributions of quantizer encoder cells for the NBC and the CNC are equal and are uniform over a proper subset of the source's support region. Empty cells act as a form of implicit channel coding. An effective channel code rate associated with a quantizer designed for a noisy channel is defined and computed for the codes studied. By explicitly showing that the mean-squared error (MSE) of the CNC can be strictly smaller than that of the NBC, we also demonstrate that the NBC is suboptimal for a large range of transmission rates and bit error probabilities. This contrasts with the known optimality of the NBC when either both the encoder and decoder are not channel optimized, or when only the decoder is channel optimized.

4.
We consider joint source-channel coding for a memoryless Gaussian source and an additive white Gaussian noise (AWGN) channel. For a given code defined by an encoder-decoder pair (α, β), its dual code is obtained by interchanging the encoder and decoder: (β, α). It is shown that if a code (α, β) is optimal at rate ρ channel uses per source sample and if it satisfies a certain uniform continuity condition, then its dual code (β, α) is optimal at rate 1/ρ channel uses per source sample. Further, it is demonstrated that there is a code which is optimal but whose dual code is not optimal. Finally, using random coding, we show that there is an optimal code which has an optimal dual. The duality concept is also presented for the cases of (i) a binary memoryless equiprobable source and binary-symmetric channel (BSC), and (ii) a colored Gaussian source and additive colored Gaussian noise (ACGN) channel.

5.
Scalar quantizers with uniform encoders and channel optimized decoders are studied for uniform sources and binary symmetric channels. It is shown that the natural binary code (NBC) and folded binary code (FBC) induce point density functions that are uniform on proper subintervals of the source support, whereas the Gray code (GC) does not induce a point density function. The mean-squared errors (MSE) for the NBC, FBC, GC, and for randomly chosen index assignments are calculated and the NBC is shown to be mean-squared optimal among all possible index assignments, for all bit-error rates and all quantizer transmission rates. In contrast, it is shown that almost all index assignments perform poorly and have degenerate codebooks.
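Comparisons of this kind are easy to reproduce numerically. The sketch below (an illustration, not the paper's derivation) computes the exact expected channel MSE of a uniform 3-bit codebook over a BSC for the natural binary code and the Gray code; consistent with the abstract, the NBC comes out ahead:

```python
def expected_channel_mse(index_of, codebook, eps):
    """Exact expected MSE caused by channel errors on a BSC with crossover
    probability eps; index_of[c] is the channel index of codevector c."""
    n_bits = (len(codebook) - 1).bit_length()
    decode = {index_of[c]: c for c in range(len(codebook))}
    total = 0.0
    for c, x in enumerate(codebook):                 # transmitted cell
        for received in range(len(codebook)):        # received index
            flips = bin(index_of[c] ^ received).count("1")
            prob = eps ** flips * (1 - eps) ** (n_bits - flips)
            total += prob * (x - codebook[decode[received]]) ** 2
    return total / len(codebook)

codebook = [(i + 0.5) / 8 for i in range(8)]   # uniform quantizer on [0, 1]
nbc = list(range(8))                            # natural binary code
gray = [i ^ (i >> 1) for i in range(8)]         # Gray code index assignment
mse_nbc = expected_channel_mse(nbc, codebook, eps=0.1)
mse_gray = expected_channel_mse(gray, codebook, eps=0.1)
```

Here a single index-bit error under the NBC moves the decoder by at most one power-of-two step, whereas under the Gray code a flipped index bit can land many cells away, which is why the GC fares worse despite its adjacency property.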

6.
Tradeoff between source and channel coding
A fundamental problem in the transmission of analog information across a noisy discrete channel is the choice of channel code rate that optimally allocates the available transmission rate between lossy source coding and block channel coding. We establish tight bounds on the channel code rate that minimizes the average distortion of a vector quantizer cascaded with a channel coder and a binary-symmetric channel. Analytic expressions are derived in two cases of interest: small bit-error probability with arbitrary source vector dimension, and arbitrary bit-error probability with large source vector dimension. We demonstrate that the optimal channel code rate is often substantially smaller than the channel capacity, and we obtain a noisy-channel version of the Zador (1982) high-resolution distortion formula.

7.
Asymptotically optimal zero-delay vector quantization in the presence of channel noise is studied using random coding techniques. First, an upper bound is derived for the average rth-power distortion of channel-optimized k-dimensional vector quantization at transmission rate R on a binary symmetric channel with bit error probability ε. The upper bound asymptotically equals 2^(-rR·g(ε,k,r)), where (k/(k+r))·[1 - log₂(1 + 2√(ε(1-ε)))] ≤ g(ε,k,r) ≤ 1 for all ε ≥ 0, lim_{ε→0} g(ε,k,r) = 1, and lim_{k→∞} g(ε,k,r) = 1. Numerical computations of g(ε,k,r) are also given. This result is analogous to Zador's (1982) asymptotic distortion rate of 2^(-rR) for quantization on noiseless channels. Next, using a random coding argument on nonredundant index assignments, a useful upper bound is derived, in terms of point density functions, on the minimum mean squared error of high-resolution, regular vector quantizers in the presence of channel noise. The formula provides an accurate approximation to the distortion of a noisy-channel quantizer whose codebook is arbitrarily ordered. Finally, it is shown that the minimum mean squared distortion of a regular, noisy-channel VQ with a randomized nonredundant index assignment is, in probability, asymptotically bounded away from zero.
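The stated lower bound on the exponent factor, (k/(k+r))·[1 - log₂(1 + 2√(ε(1-ε)))], is straightforward to evaluate; at ε = 0 it reduces to k/(k+r) and it vanishes at ε = 1/2. A small helper (just evaluating the abstract's expressions, with the distortion evaluated conservatively at the lower bound on g):

```python
import math

def g_lower_bound(eps, k, r):
    """Lower bound on the exponent factor g(eps, k, r) quoted in the
    abstract: (k/(k+r)) * (1 - log2(1 + 2*sqrt(eps*(1-eps))))."""
    return (k / (k + r)) * (1 - math.log2(1 + 2 * math.sqrt(eps * (1 - eps))))

def distortion_bound(eps, k, r, R):
    """Asymptotic distortion 2^(-r*R*g), evaluated with the lower bound on
    g, i.e. a conservative (larger) estimate of the achievable distortion."""
    return 2.0 ** (-r * R * g_lower_bound(eps, k, r))
```

As the abstract notes, the noiseless-channel exponent 2^(-rR) is recovered in the limits ε → 0 and k → ∞; the helper shows how quickly the bound improves with vector dimension k at fixed ε.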

8.
A pseudo-Gray code is an assignment of n-bit binary indexes to 2^n points in a Euclidean space so that the Hamming distance between two points' indexes corresponds closely to the Euclidean distance between the points. Pseudo-Gray coding provides a redundancy-free error protection scheme for vector quantization (VQ) of analog signals when the binary indexes are used as channel symbols on a discrete memoryless channel and the points are signal codevectors. Binary indexes are assigned to codevectors in a way that reduces the average quantization distortion introduced in the reproduced source vectors when a transmitted index is corrupted by channel noise. A globally optimal solution to this problem is generally intractable due to an inherently large computational complexity. A locally optimal solution, the binary switching algorithm, is introduced, based on the objective of minimizing a useful upper bound on the average system distortion. The algorithm yields a significant reduction in average distortion and converges in reasonable running times. The use of pseudo-Gray coding is motivated by the increasing need for low-bit-rate VQ-based encoding systems that operate on noisy channels, such as in mobile radio speech communications.
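A greatly simplified version of the binary switching idea can be sketched as follows: starting from a random index assignment, greedily swap pairs of indexes whenever the swap lowers the expected distortion, until no swap helps. This sketch uses the exact expected MSE on a scalar codebook as the cost rather than the paper's upper bound, and is illustrative only:

```python
import itertools
import random

def expected_mse(perm, codebook, eps):
    """Expected channel MSE on a BSC; perm[c] is the index of codevector c."""
    n_bits = (len(codebook) - 1).bit_length()
    decode = {perm[c]: c for c in range(len(codebook))}
    total = 0.0
    for c, x in enumerate(codebook):
        for received in range(len(codebook)):
            flips = bin(perm[c] ^ received).count("1")
            prob = eps ** flips * (1 - eps) ** (n_bits - flips)
            total += prob * (x - codebook[decode[received]]) ** 2
    return total / len(codebook)

def binary_switching(codebook, eps, seed=1):
    """Locally optimal index assignment via pairwise switches (a sketch)."""
    perm = list(range(len(codebook)))
    random.Random(seed).shuffle(perm)       # random initial assignment
    cost = expected_mse(perm, codebook, eps)
    improved = True
    while improved:
        improved = False
        for a, b in itertools.combinations(range(len(perm)), 2):
            perm[a], perm[b] = perm[b], perm[a]
            new_cost = expected_mse(perm, codebook, eps)
            if new_cost < cost - 1e-12:
                cost, improved = new_cost, True
            else:
                perm[a], perm[b] = perm[b], perm[a]   # undo the switch
    return perm, cost
```

Each pass never increases the cost, so the loop terminates at a local minimum; as the abstract notes, global optimality is intractable, which is exactly why a local search of this shape is used.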

9.
We address the problem of bounding from below the probability of error under maximum-likelihood decoding of a binary code with a known distance distribution used on a binary-symmetric channel (BSC). An improved upper bound is given for the maximum attainable exponent of this probability (the reliability function of the channel). In particular, we prove that the "random coding exponent" is the true value of the channel reliability for codes of rate R in some interval immediately below the critical rate of the channel. An analogous result is obtained for the Gaussian channel.

10.
In this letter, we consider the lossy coding of a non‐uniform binary source based on GF(q)‐quantized low‐density generator matrix (LDGM) codes with check degree dc=2. By quantizing the GF(q) LDGM codeword, a non‐uniform binary codeword can be obtained, which is suitable for direct quantization of the non‐uniform binary source. Encoding is performed by reinforced belief propagation, a variant of belief propagation. Simulation results show that the performance of our method is quite close to the theoretic rate‐distortion bounds. For example, when the GF(16)‐LDGM code with a rate of 0.4 and block‐length of 1,500 is used to compress the non‐uniform binary source with probability of 1 being 0.23, the distortion is 0.091, which is very close to the optimal theoretical value of 0.074.

11.
Two common source-channel coding strategies, joint and tandem, are compared on the basis of distortion versus complexity and distortion versus delay by analyzing specific representatives of each when transmitting analog data samples across a binary symmetric channel. Channel-optimized transform coding is the joint source-channel strategy; transform coding plus Reed-Solomon coding is the tandem strategy. For each strategy, formulas for the mean-squared error, computational complexity, and delay are found and used to minimize distortion subject to constraints on complexity and delay, for source data modeled as Gauss-Markov. The results of such optimizations suggest there is a complexity threshold such that when the number of operations per data sample available for encoding and decoding is greater than this threshold, tandem coding is better, and when less, channel-optimized transform coding is better. Similarly, the results suggest there is also a delay threshold such that tandem coding is better than joint coding when the permissible encoding and decoding delay is greater than this threshold.

12.
We consider a joint source-channel coding system that protects an embedded bitstream using a finite family of channel codes with error detection and error correction capability. The performance of this system may be measured by the expected distortion or by the expected number of correctly decoded source bits. Whereas a rate-based optimal solution can be found in linear time, the computation of a distortion-based optimal solution is prohibitive. Under the assumption of the convexity of the operational distortion-rate function of the source coder, we give a lower bound on the expected distortion of a distortion-based optimal solution that depends only on a rate-based optimal solution. Then, we propose a local search (LS) algorithm that starts from a rate-based optimal solution and converges in linear time to a local minimum of the expected distortion. Experimental results for a binary symmetric channel show that our LS algorithm is near optimal, whereas its complexity is much lower than that of the previous best solution.

13.
Three hybrid digital-analog (HDA) systems, denoted HDA-I, HDA* and HDA-II, for the coding of a memoryless discrete-time Gaussian source over a discrete-time additive memoryless Gaussian channel under bandwidth compression are studied. The systems employ simple linear coding in their analog component and superimpose their analog and digital signals before channel transmission. Information-theoretic upper bounds on the asymptotically optimal mean squared error distortion of the systems are obtained under both matched and mismatched channel conditions. Allocation schemes for distributing the channel input power between the analog and the digital signals are also examined. It is shown that systems HDA* and HDA-II can asymptotically achieve the optimal Shannon-limit performance under matched channel conditions. Low-complexity and low-delay versions of systems HDA-I and HDA-II are next designed and implemented without the use of error correcting codes. The parameters of these HDA systems, which employ vector quantization in conjunction with binary phase-shift keying modulation in their digital part, are optimized via an iterative algorithm similar to the design algorithm for channel-optimized vector quantizers. Both systems have low complexity and low delay, and guarantee graceful performance improvements for high CSNRs. For memoryless Gaussian sources, the designed HDA-II system is shown to be superior to the designed HDA-I system. When applied to a Gauss-Markov source under Karhunen-Loeve processing, the HDA-I system is shown to provide considerably better performance.

14.
The common practice for achieving unequal error protection (UEP) in scalable multimedia communication systems is to design rate-compatible punctured channel codes before computing the UEP rate assignments. This paper proposes a new approach to designing powerful irregular repeat accumulate (IRA) codes that are optimized for the multimedia source and to exploiting the inherent irregularity in IRA codes for UEP. Using the end-to-end distortion due to the first error bit in channel decoding as the cost function, which is readily given by the operational distortion-rate function of embedded source codes, we incorporate this cost function into the channel code design process via density evolution and obtain IRA codes that minimize the average cost function instead of the usual probability of error. Because the resulting IRA codes have inherent UEP capabilities due to irregularity, the new IRA code design effectively integrates channel code optimization and UEP rate assignments, resulting in source-optimized channel coding or joint source-channel coding. We simulate our source-optimized IRA codes for transporting SPIHT-coded images over a binary symmetric channel with crossover probability p. When p = 0.03 and the channel code length is long (e.g., with one codeword for the whole 512 x 512 image), we are able to operate at only 9.38% away from the channel capacity with code length 132380 bits, achieving the best published results in terms of average peak signal-to-noise ratio (PSNR). Compared to conventional IRA code design (that minimizes the probability of error) with the same code rate, the performance gain in average PSNR from using our proposed source-optimized IRA code design is 0.8759 dB when p = 0.1 and the code length is 12800 bits. As predicted by Shannon's separation principle, we observe that this performance gain diminishes as the code length increases.

15.
16.
In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate distortion characteristics which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to the channel conditions, coding methodologies, layer rates, and number of layers.
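The layer-by-layer allocation idea can be illustrated with a generic greedy marginal-return allocator (a hypothetical stand-in for the paper's algorithm, assuming a convex per-layer model distortion D_l(R) = c_l · 2^(-2R)): each bit goes to whichever layer currently offers the largest distortion reduction.

```python
def allocate_bits(weights, total_bits):
    """Greedy bit allocation: give each successive bit to the layer with the
    largest marginal distortion reduction, under the toy model
    D_l(R) = weights[l] * 2**(-2 * R)."""
    bits = [0] * len(weights)
    for _ in range(total_bits):
        # reduction from one more bit in layer l: c_l * (2^-2b - 2^-2(b+1))
        gains = [w * (2.0 ** (-2 * b) - 2.0 ** (-2 * (b + 1)))
                 for w, b in zip(weights, bits)]
        best = max(range(len(weights)), key=gains.__getitem__)
        bits[best] += 1
    return bits

# A base layer with large weight attracts most of the budget.
allocation = allocate_bits([4.0, 1.0, 0.25], total_bits=6)
print(allocation)  # -> [3, 2, 1]
```

For convex, decreasing per-layer distortion curves this greedy rule matches the Lagrangian equal-slope condition at integer granularity, which is why marginal-return allocation is the standard workhorse for problems of this form.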

17.
18.
User cooperation is a powerful tool to combat fading and increase robustness for communication over wireless channels. Although it is doubtless a promising technique for enhancing channel reliability, its performance in terms of average source distortion is not clear since the source-channel separation theorem fails under the most common nonergodic slow-fading channel assumption, when channel state information (CSI) is only available at the receiving terminals. This work sheds some light on the end-to-end performance of joint source-channel coding for cooperative relay systems in the high signal-to-noise ratio (SNR) regime. Considering the distortion exponent as a figure of merit, we propose various strategies for cooperative source and channel coding that significantly improve the performance compared to the conventional scheme of source coding followed by cooperative channel coding. We characterize the optimal distortion exponent of a full-duplex relay channel for all bandwidth ratios. For the half-duplex relay channel, we provide an upper bound which is tight for small and large bandwidth ratios. We consider the effect of correlated side information on the distortion exponent as well.

19.
This paper investigates the tradeoffs between source coding, channel coding, and spreading in code-division multiple-access systems, operating under a fixed total bandwidth constraint. We consider two systems, each consisting of a uniform source with a uniform quantizer, a channel coder, an interleaver, and a direct-sequence spreading module. System A is quadrature phase-shift keyed modulated and has a linear block channel coder. A minimum mean-squared error receiver is also employed in this system. System B is binary phase-shift keyed modulated. Rate-compatible punctured convolutional codes and soft-decision Viterbi decoding are used for channel coding in system B. The two systems are analyzed for both an additive white Gaussian noise channel and a flat Rayleigh fading channel. The performances of the systems are evaluated using the end-to-end mean squared error. A tight upper bound for frame-error rate is derived for nonterminated convolutional codes for ease of analysis of system B. We show that, for a given bandwidth, an optimal allocation of that bandwidth can be found using the proposed method.

20.
The throughput performance of incremental redundancy (INR) schemes, based on short constraint length convolutional codes, is evaluated for the block-fading Gaussian collision channel. Results based on simulations and union bound computations are compared to estimates of the achievable throughput performance with random binary and Gaussian coding in the limit of large block lengths, obtained through information outage considerations. For low channel loads, it is observed that INR schemes with binary convolutional codes and limited block length may provide throughput close to the achievable performance for binary random coding. However, for these low loads, compared to binary random coding, Gaussian random coding may provide significantly better throughput performance, which prompts the use of larger modulation constellations. For high channel loads, a relatively large gap in throughput performance between binary convolutional codes and binary random codes indicates a potential for extensive performance improvement by alternative coding strategies. Only small improvements of the throughput have been observed by increasing the complexity through increased-state convolutional coding.
