Similar literature
1.
This work addresses the problem of designing turbo codes for nonuniform binary memoryless or independent and identically distributed (i.i.d.) sources over noisy channels. The extrinsic information in the decoder is modified to exploit the source redundancy in the form of nonuniformity; furthermore, the constituent encoder structure is optimized for the considered nonuniform i.i.d. source to further enhance the system performance. Some constituent encoders are found to substantially outperform Berrou's (1996) (37, 21) encoder. Indeed, it is shown that the bit error rate (BER) performance of the newly designed turbo codes is greatly improved, as significant coding gains are obtained.
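As a rough illustration of how source nonuniformity can be folded into soft decoding, the sketch below simply adds the a priori log-likelihood ratio log(p0/(1-p0)) of the source to the channel LLRs before hard decisions are taken. This is a minimal sketch only, not the authors' turbo construction; the BPSK/AWGN model, SNR, and P(bit = 0) = 0.9 are assumptions made for the example.

```python
import numpy as np

def map_bit_decisions(llr_channel, p0):
    """Combine channel LLRs with the a priori LLR of a nonuniform binary
    source with P(bit = 0) = p0 before taking hard decisions."""
    llr_prior = np.log(p0 / (1.0 - p0))
    return ((llr_channel + llr_prior) < 0).astype(int)   # decide 1 when total LLR < 0

rng = np.random.default_rng(0)
p0, snr_db, n = 0.9, 0.0, 200_000
bits = (rng.random(n) > p0).astype(int)        # nonuniform i.i.d. source
x = 1.0 - 2.0 * bits                           # BPSK: 0 -> +1, 1 -> -1
sigma = np.sqrt(0.5 / 10 ** (snr_db / 10.0))
y = x + sigma * rng.normal(size=n)
llr = 2.0 * y / sigma**2                       # channel LLR for "bit = 0" vs "bit = 1"

ber_plain = np.mean((llr < 0).astype(int) != bits)
ber_prior = np.mean(map_bit_decisions(llr, p0) != bits)
print(f"BER ignoring the prior: {ber_plain:.4f}, BER using the prior: {ber_prior:.4f}")
```

In a turbo decoder the same prior term would enter the extrinsic-information exchange between constituent decoders rather than a one-shot hard decision.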

2.
In this paper, we propose a combined source/channel coding scheme for transmission of images over fading channels. The proposed scheme employs rate-compatible low-density parity-check codes along with embedded image coders such as JPEG2000 and set partitioning in hierarchical trees (SPIHT). The assignment of channel coding rates to source packets is performed by a fast trellis-based algorithm. We examine the performance of the proposed scheme over correlated and uncorrelated Rayleigh flat-fading channels with and without side information. Simulation results for the expected peak signal-to-noise ratio of reconstructed images, which are within 1 dB of the capacity upper bound over a wide range of channel signal-to-noise ratios, show considerable improvement compared to existing results under similar conditions. We also study the sensitivity of the proposed scheme in the presence of channel estimation error at the transmitter and demonstrate that under most conditions our scheme is more robust compared to existing schemes.

3.
Exploiting the residual redundancy in a source coder output stream during the decoding process has been proven to be a bandwidth-efficient way to combat noisy channel degradations. This redundancy can be employed either to assist the channel decoder for improved performance or to design better source decoders. In this work, a family of solutions for the asymptotically optimum minimum mean-squared error (MMSE) reconstruction of a source over memoryless noisy channels is presented when the redundancy in the source encoder output stream is exploited in the form of a γ-order Markov model (γ ≥ 1) and a delay of δ, δ > 0, is allowed in the decoding process. It is demonstrated that the proposed solutions provide a wealth of tradeoffs between computational complexity and memory requirements. A simplified MMSE decoder which is optimized to minimize the computational complexity is also presented. Considering the same problem setup, several other maximum a posteriori probability (MAP) symbol and sequence decoders are presented as well. Numerical results are presented which demonstrate the efficiency of the proposed algorithms.
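To make the setting concrete, the sketch below computes a delay-δ MMSE reconstruction of a first-order Markov source (γ = 1) observed through a memoryless channel using a direct fixed-lag forward-backward smoother. It only illustrates the quantity being computed, not the reduced-complexity algorithms of the paper; the transition matrix, channel matrix, and reproduction values are toy assumptions.

```python
import numpy as np

def fixed_lag_mmse(y, P, channel, values, delta):
    """Delay-delta MMSE estimates of a first-order Markov source observed
    through a memoryless channel.
    P[i, j]       = P(x_{t+1} = j | x_t = i)
    channel[i, k] = P(y_t = k | x_t = i)
    values[i]     = reproduction value associated with source state i."""
    T, M = len(y), len(values)
    alpha = np.full(M, 1.0 / M) * channel[:, y[0]]
    alphas = [alpha / alpha.sum()]
    for t in range(1, T):                      # forward pass over all observations
        alpha = (alphas[-1] @ P) * channel[:, y[t]]
        alphas.append(alpha / alpha.sum())
    est = np.zeros(T)
    for t in range(T):
        tau = min(t + delta, T - 1)            # observations up to time t + delta
        beta = np.ones(M)
        for s in range(tau, t, -1):            # partial backward pass over the lag window
            beta = P @ (channel[:, y[s]] * beta)
        post = alphas[t] * beta
        est[t] = (post / post.sum()) @ values  # conditional mean = MMSE estimate
    return est

# Toy two-state example: a sticky binary source sent over a noisy binary channel.
P = np.array([[0.9, 0.1], [0.1, 0.9]])
channel = np.array([[0.8, 0.2], [0.2, 0.8]])
print(fixed_lag_mmse([0, 0, 1, 0, 0], P, channel, values=np.array([-1.0, 1.0]), delta=2))
```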

4.
We propose a novel combined source and channel coding scheme for image transmission over noisy channels. The main feature of the proposed scheme is a systematic decomposition of image sources so that unequal error protection can be applied according to not only bit error sensitivity but also visual content importance. The wavelet transform is adopted to hierarchically decompose the image. The association between the wavelet coefficients and what they represent spatially in the original image is fully exploited, so that wavelet blocks are classified based on their corresponding image content. The classification produces wavelet blocks in each class with similar content and statistics, and therefore enables high-performance source compression using the set partitioning in hierarchical trees (SPIHT) algorithm. To combat the channel noise, an unequal error protection strategy with rate-compatible punctured convolutional/cyclic redundancy check (RCPC/CRC) codes is implemented based on the bit contribution to both peak signal-to-noise ratio (PSNR) and visual quality. At the receiving end, a postprocessing method making use of the SPIHT decoding structure and the classification map is developed to restore the degradation due to the residual error after channel decoding. Experimental results show that the proposed scheme is indeed able to provide protection both for the bits that are more sensitive to errors and for the more important visual content under a noisy transmission environment. In particular, the reconstructed images consistently exhibit better visual quality than those obtained with single-bitstream-based schemes.

5.
We propose a fast trellis-based rate-allocation algorithm for robust transmission of progressively coded images over noisy channels. The algorithm, which is an improved version of a similar algorithm by Banister et al., is based on the application of the Viterbi algorithm to a search trellis. This trellis is a substantially trimmed version of the one used by Banister et al. The proposed algorithm is applied to images encoded by the set partitioning in hierarchical trees (SPIHT) and Joint Photographic Experts Group 2000 (JPEG2000) coders for transmission over binary symmetric channels. For different total bit budgets and channel parameters, speed-up factors of up to about three orders of magnitude are achieved.
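The objective such a rate-allocation search optimizes is easy to state for an embedded (progressive) bitstream: the source bytes in a packet are useful only if that packet and every earlier packet decode correctly. The toy sketch below evaluates this objective by brute force over a handful of made-up code rates and packet-failure probabilities; the point of a trellis/Viterbi formulation like the one in the paper is precisely to avoid this exponential enumeration.

```python
import itertools

# All numbers below are made up: packet payload size, candidate channel-code
# rates, and the packet-failure probability attached to each rate.
PACKET_BYTES = 200
FAIL = {1/3: 0.001, 1/2: 0.01, 2/3: 0.05, 1.0: 0.15}     # rate -> P(packet lost)

def expected_useful_bytes(rates):
    """Expected source bytes recovered from an embedded bitstream: a packet's
    payload counts only if it and all earlier packets decode correctly."""
    total, prefix_ok = 0.0, 1.0
    for r in rates:
        prefix_ok *= 1.0 - FAIL[r]
        total += prefix_ok * PACKET_BYTES * r             # payload carried at rate r
    return total

best = max(itertools.product(FAIL, repeat=5), key=expected_useful_bytes)
print("best rate profile:", [f"{r:.2f}" for r in best],
      "-> expected useful bytes:", round(expected_useful_bytes(best), 1))
```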

6.
We describe a new way to organize a full-search vector quantization codebook so that images encoded with it can be sent progressively and have resilience to channel noise. The codebook organization guarantees that the most significant bits (MSBs) of the codeword index are most important to the overall image quality and are highly correlated. Simulations show that the effective channel error rates of the MSBs can be substantially lowered by implementing a maximum a posteriori (MAP) detector similar to one suggested by Phamdo and Farvardin (see IEEE Trans. Inform. Theory, vol.40, no.1, p.156-193, 1994). The performance of the scheme is close to that of pseudo-gray coding at lower bit error rates and outperforms it at higher error rates. No extra bits are used for channel error correction.
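A simplified, decision-feedback flavour of such a MAP detector (a sketch under assumed index statistics, not necessarily the detector of Phamdo and Farvardin) picks, for each received index over a binary symmetric channel, the index maximizing prior times channel likelihood, with the prior for the next symbol conditioned on the previous decision:

```python
import numpy as np

def map_detect_indices(received, prior, trans, eps, nbits):
    """Symbol-by-symbol, decision-feedback MAP detection of VQ indices sent
    as nbits-bit words over a BSC with crossover probability eps.
    prior[i]    = P(I_0 = i)
    trans[i, j] = P(I_t = j | I_{t-1} = i)  (index correlation)."""
    n_idx = len(prior)
    dist = np.array([[bin(i ^ r).count("1") for r in range(n_idx)]
                     for i in range(n_idx)])                # Hamming distances
    log_lik = dist * np.log(eps) + (nbits - dist) * np.log(1.0 - eps)
    decoded, log_prior = [], np.log(prior)
    for r in received:
        i_hat = int(np.argmax(log_prior + log_lik[:, r]))   # maximize prior * likelihood
        decoded.append(i_hat)
        log_prior = np.log(trans[i_hat])                    # prior for the next symbol
    return decoded

prior = np.full(4, 0.25)
trans = np.full((4, 4), 0.05) + 0.8 * np.eye(4)             # indices tend to repeat
print(map_detect_indices([0, 1, 3, 3], prior, trans, eps=0.1, nbits=2))
```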

7.
Joint source/channel decoders that use the residual redundancy in the source are investigated for differential pulse code modulation (DPCM) picture transmission over a binary symmetric channel. Markov sequence decoders employing the Viterbi algorithm that use first-order source statistics are reviewed and generalized to decoders that use second-order source statistics. To make optimal use of the source correlation in both horizontal and vertical directions, it is necessary to generalize the conventional Viterbi decoding algorithm to a trellis of one higher dimension. The paths through the trellis become two-dimensional "sheets"; the technique is thus coined "sheet decoding". By objective [reconstruction signal-to-noise ratio (SNR)] and subjective measures, it is found that the sheet decoders outperform the Markov sequence decoders that use a first-order Markov model, and outperform two other known decoders (modified maximum a posteriori probability and maximal SNR) that use a second-order Markov model. Moreover, it is found that the use of a simple rate-2/3 block code in conjunction with Markov model-aided decoding (MMAD) offers significant performance improvement for a 2-bit DPCM system. For the example Lenna image, it is observed that the rate-2/3 block code is superior to a rate-2/3 convolutional code for channel-error rates higher than 0.035. The block code is easily incorporated into any of the MMAD DPCM systems and results in a 2-bit MMAD DPCM system that significantly outperforms the noncoded 3-bit MMAD DPCM systems for channel-error rates higher than 0.04.

8.
The problem of DPCM picture transmission over noisy channels is considered. It is well known that DPCM systems are very sensitive to channel errors. The goal in this work is to build robustness against channel errors. Three methods are proposed in this paper and are obtained by modeling the encoded signal as a Markov sequence. First, an optimum method for decoding correlated sequences is derived, and it is shown to require Viterbi decoding. Then, a modified MAP (MMAP) method for Markov sequences is described. A maximal signal-to-noise ratio (MSNR) receiver for DPCM systems is also developed that minimizes the distortion power due to channel errors. The appropriate cost matrix for this receiver is computed. These methods are applied to DPCM picture transmission over noisy channels and are compared with another method. The SNR graphs, as well as subjective examination of the received pictures, demonstrate that the proposed procedures are quite effective and superior to that method. Among the proposed methods, the MSNR receiver was found to be more effective than the others for a given order of the Markov model. It is observed that the proposed methods are most beneficial for low-detail pictures.
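The first of these ideas, MAP sequence decoding of a Markov-modeled index stream over a binary symmetric channel, can be sketched directly with a Viterbi recursion whose branch metric combines the Markov transition probability and the BSC likelihood. The transition matrix, prior, index width, and crossover probability below are placeholders, and this is only a generic sketch, not the paper's exact formulation.

```python
import numpy as np

def viterbi_markov_bsc(received, trans, prior, eps, nbits):
    """MAP sequence decoding (Viterbi) of DPCM indices modeled as a first-order
    Markov chain and transmitted as nbits-bit words over a BSC(eps)."""
    M, T = len(prior), len(received)
    dist = np.array([[bin(i ^ r).count("1") for r in range(M)] for i in range(M)])
    log_lik = dist * np.log(eps) + (nbits - dist) * np.log(1.0 - eps)
    log_trans = np.log(trans)
    metric = np.log(prior) + log_lik[:, received[0]]
    back = np.zeros((T, M), dtype=int)
    for t in range(1, T):
        cand = metric[:, None] + log_trans             # cand[i, j]: arrive at j from i
        back[t] = cand.argmax(axis=0)                  # best predecessor of each state
        metric = cand.max(axis=0) + log_lik[:, received[t]]
    path = [int(metric.argmax())]
    for t in range(T - 1, 0, -1):                      # trace back the survivor path
        path.append(int(back[t, path[-1]]))
    return path[::-1]

trans = np.full((4, 4), 0.04) + 0.84 * np.eye(4)       # strongly correlated indices
print(viterbi_markov_bsc([2, 2, 0, 2, 2], trans, prior=np.full(4, 0.25), eps=0.1, nbits=2))
```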

9.
This paper presents an error control scheme for transmitting vector-quantized data over noisy channels. First, we review a self-organizing feature map (SOFM) based approach to construct the mapping from the codebook of the quantizer to the channel signal set of the communication system. This approach is robust with respect to channel noise even if we do not use any error control scheme in the communication system. Afterwards, a new trellis-type quantizer, namely the trellis coded Kohonen (1984) map (TCKM), is presented. The design process of the TCKM, which is based on the neighborhood structure of SOFMs, is simpler than that of conventional trellis coded vector quantizers (TCVQs). Simulation results show that the performance of TCKMs is comparable with that of TCVQs. Finally, we introduce the error control scheme based on the concepts of the above two applications of SOFMs. Simulation results show that the proposed error control scheme is more robust with respect to channel noise. The advantage of our approaches mentioned above is that the design processes of the transmission systems are predefined before the construction of the codebook; thus, different codebooks with the same neighborhood structure can share the same design.
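A minimal 1-D Kohonen/SOFM training loop is sketched below to show the property such schemes rely on: after training, neighbouring units hold similar codewords, so a received index that is only slightly wrong still decodes to a nearby, similar codevector. The unit count, learning rate, and neighbourhood width are arbitrary choices, and this is not the paper's TCKM construction.

```python
import numpy as np

def train_sofm_1d(data, n_units=16, epochs=30, lr0=0.3, sigma0=4.0):
    """Train a 1-D self-organizing feature map.  The unit position doubles as
    the transmitted index, so neighbouring indices end up holding similar
    codewords -- the property the robustness argument relies on."""
    rng = np.random.default_rng(1)
    w = rng.normal(size=(n_units, data.shape[1]))
    pos = np.arange(n_units)
    for e in range(epochs):
        lr = lr0 * (1.0 - e / epochs)
        sigma = max(sigma0 * (1.0 - e / epochs), 0.5)
        for x in data[rng.permutation(len(data))]:
            bmu = int(np.argmin(np.linalg.norm(w - x, axis=1)))   # best-matching unit
            h = np.exp(-((pos - bmu) ** 2) / (2.0 * sigma**2))    # neighbourhood kernel
            w += lr * h[:, None] * (x - w)                        # neighbours move too
    return w

data = np.random.default_rng(2).normal(size=(500, 2))
w = train_sofm_1d(data)
# Adjacent units should be close in signal space after training:
print(np.linalg.norm(np.diff(w, axis=0), axis=1).mean())
```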

10.
Reliable transmission of images and video over wireless networks must address both potentially limited bandwidth and the possibility of loss. When bandwidth sufficient to transmit the bit stream is unavailable on a single channel, the data can be partitioned over multiple channels with possibly unequal bandwidths and error characteristics, at the expense of more complex channel coding (i.e., error correction). This paper addresses the problem of efficiently channel coding and partitioning pre-encoded image and video bit streams into substreams for transmission over multiple channels with unequal and time-varying characteristics. Within channels, error protection is applied unequally based on both data decoding priority and channel packet loss rates, while cross-channel coding addresses channel failures. In comparison with conventional product codes, the resulting product code does not restrict the total encoded data to a rectangular structure; rather, the data in each channel is adaptively coded according to the channel's varying conditions. The coding and partitioning are optimized to achieve two performance criteria: maximum bandwidth efficiency and minimum delay. Simulation results demonstrate that this approach is effective under a variety of channel conditions and for a broad range of source material.

11.
The high compression efficiency and various features provided by JPEG2000 make it attractive for image transmission purposes. A novel joint source/channel coding scheme tailored for JPEG2000 is proposed in this paper to minimize the end-to-end image distortion within a given total transmission rate through memoryless channels. It provides unequal error protection by combining, in an effective way, the forward error correction capability of channel codes with the error detection/localization functionality of JPEG2000. The proposed scheme generates quality-scalable and error-resilient codestreams. It gives performance competitive with other existing schemes for JPEG2000 when the channel condition is matched, and provides more graceful quality degradation in mismatched cases. Furthermore, both fixed-length source packets and fixed-length channel packets can be efficiently formed with the same algorithm.

12.
Joint source-channel coding for stationary memoryless and Gauss-Markov sources and binary Markov channels is considered. The channel is an additive-noise channel where the noise process is an Mth-order Markov chain. Two joint source-channel coding schemes are considered. The first is a channel-optimized vector quantizer, optimized for both the source and the channel. The second scheme consists of a scalar quantizer and a maximum a posteriori detector. In this scheme, it is assumed that the scalar quantizer output has residual redundancy that can be exploited by the maximum a posteriori detector to combat the correlated channel noise. These two schemes are then compared against two schemes which use channel interleaving. Numerical results show that the proposed schemes outperform the interleaving schemes. For very noisy channels with high noise correlation, gains of 4-5 dB in signal-to-noise ratio are possible.

13.
We propose a distortion-optimal rate allocation algorithm for robust transmission of embedded bitstreams over noisy channels. The algorithm is based on the backward application of a Viterbi-like algorithm to a search trellis, and can be applied to both the fixed and the variable channel packet length problems, referred to as FPP and VPP, respectively. For the VPP, the complexity of the algorithm is comparable to the well-known dynamic programming approach of Chande and Farvardin. For the FPP, where no low-complexity algorithm is known, the complexity of the proposed algorithm is O(N²), where N is the number of transmitted packets.

14.
We address the problem of robust coding in which the signal information should be preserved in spite of intrinsic noise in the representation. We present a theoretical analysis for 1- and 2-D cases and characterize the optimal linear encoder and decoder in the mean-squared error sense. Our analysis allows for an arbitrary number of coding units, thus including both under- and over-complete representations, and provides insights into optimal coding strategies. In particular, we show how the form of the code adapts to the number of coding units and to different data and noise conditions in order to achieve robustness. We also present numerical solutions of robust coding for high-dimensional image data, demonstrating that these codes are substantially more robust than other linear image coding methods such as PCA, ICA, and wavelets.
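For a fixed linear encoder W with representation noise n, the MMSE-optimal linear decoder has the familiar Wiener form A = Cx Wᵀ (W Cx Wᵀ + Cn)⁻¹. The sketch below computes it for made-up covariances; the paper goes further and also optimizes the encoder, which this fragment does not attempt.

```python
import numpy as np

def mmse_linear_decoder(W, Cx, Cn):
    """MMSE-optimal linear decoder for r = W x + n with zero-mean x and n:
    A = Cx W^T (W Cx W^T + Cn)^{-1}."""
    return Cx @ W.T @ np.linalg.inv(W @ Cx @ W.T + Cn)

rng = np.random.default_rng(0)
d, k = 8, 4                                    # signal dimension, number of coding units
Cx = np.diag(np.linspace(4.0, 0.5, d))         # assumed signal covariance
W = rng.normal(size=(k, d))                    # some fixed (not optimized) encoder
Cn = 0.1 * np.eye(k)                           # intrinsic noise in the representation
A = mmse_linear_decoder(W, Cx, Cn)
mse = np.trace(Cx - A @ W @ Cx)                # resulting mean-squared error
print(f"MSE with the MMSE decoder: {mse:.3f} (signal power {np.trace(Cx):.3f})")
```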

15.
The paper addresses the issue of robust and joint source-channel decoding of arithmetic codes. We first analyze the dependencies between the variables involved in arithmetic coding by means of the Bayesian formalism. This provides a suitable framework for designing a soft decoding algorithm with high error resilience. It also provides a natural setting for "soft synchronization", i.e., for introducing anchors that favor the likelihood of "synchronized" paths. In order to maintain the complexity of the estimation within a realistic range, a simple yet efficient pruning method is described. The algorithm can be placed in an iterative source-channel decoding structure, in the spirit of serial turbo codes. The models and algorithms are then applied to context-based arithmetic coding as widely used in practical systems (e.g., JPEG-2000). Experimental results with both theoretical sources and real images coded with JPEG-2000 reveal very good error-resilience performance.
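The pruning idea can be caricatured with a generic M-algorithm skeleton: candidate decoded prefixes are extended bit by bit, scored by channel soft information plus a caller-supplied source/prior term, and only the best M survive at each depth. The interface (prior_logp, the LLR sign convention, M = 64) is invented for illustration and is far simpler than the Bayesian model of arithmetic coding used in the paper.

```python
import math

def pruned_soft_decode(llrs, prior_logp, M=64):
    """Generic M-algorithm skeleton: extend candidate bit sequences one bit at a
    time, score each path by channel soft information plus a source/prior term
    supplied by the caller, and keep only the M highest-scoring paths."""
    paths = [((), 0.0)]                               # (decoded prefix, path score)
    for llr in llrs:                                  # llr > 0 favours bit = 0
        extended = []
        for bits, score in paths:
            for b in (0, 1):
                chan = 0.5 * llr if b == 0 else -0.5 * llr
                extended.append((bits + (b,), score + chan + prior_logp(bits, b)))
        paths = sorted(extended, key=lambda p: p[1], reverse=True)[:M]
    return paths[0][0]                                # best surviving sequence

# Toy prior that mildly favours repeating the previous bit.
def prior(bits, b):
    if not bits:
        return math.log(0.5)
    return math.log(0.7 if bits[-1] == b else 0.3)

print(pruned_soft_decode([2.1, 1.7, -0.3, -2.5, 1.0], prior))
```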

16.
In this letter, we present a novel product channel coding and decoding scheme for image transmission over noisy channels. Two convolutional codes with at least one recursive systematic convolutional code are employed to construct the product code. Received data are decoded alternately in two directions. A constrained Viterbi algorithm is proposed to exploit the detection results of cyclic redundancy check codes so that both reduction in error patterns and fast decoding speed are achieved. Experiments with image data coded by the algorithm of set partitioning in hierarchical trees exhibit results better than those currently reported in the literature.

17.
For a class of generalized decision strategies, which afford the possibility of erasure or variable-size list decoding, asymptotically tight upper and lower error bounds are obtained for orthogonal signals in additive white Gaussian noise channels. Under the hypothesis that a unique signal set is asymptotically optimal for the entire class of strategies, these bounds are shown to hold for the optimal set in both the white Gaussian channel and the class of input-discrete very noisy memoryless channels.

18.
This paper presents a joint forward error correction (FEC) and error concealment (EC) scheme to enhance the quality of a compressed video signal transmitted over a noisy channel. A multiple candidate likelihood (MCL) channel decoding strategy is used in conjunction with redundancy in the compressed video (syntax validity and spatial discontinuity) to select the best-detected signal.

Simulation results on both objective and subjective performance measures indicate a significant improvement provided by the proposed scheme.
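The spatial-discontinuity part of such a candidate-selection rule can be sketched very simply: among candidate decodings of a block, keep the one whose border pixels best match the already-reconstructed neighbours. This is a toy criterion for illustration, not the paper's MCL decoder.

```python
import numpy as np

def pick_by_spatial_continuity(candidates, row_above, col_left):
    """Among candidate decodings of a block, keep the one whose top and left
    borders are most consistent with the already-reconstructed neighbours."""
    def discontinuity(block):
        return (np.abs(block[0, :] - row_above).sum() +   # top edge vs block above
                np.abs(block[:, 0] - col_left).sum())      # left edge vs left block
    return min(candidates, key=discontinuity)

rng = np.random.default_rng(0)
row_above, col_left = np.full(8, 100.0), np.full(8, 100.0)
good = np.full((8, 8), 101.0)                      # candidate consistent with neighbours
bad = rng.uniform(0, 255, size=(8, 8))             # candidate from a corrupted bitstream
print(np.array_equal(pick_by_spatial_continuity([bad, good], row_above, col_left), good))
```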


19.
The role of the residual redundancy left by source coding in joint coding is studied, and a joint source-channel coding method is proposed that provides effective error protection for variable-length source-coded bitstreams transmitted over noisy channels; the method exploits the residual redundancy in the source encoder output to protect the transmitted bitstream. Compared with the system proposed by Sayood, the method is based on improved joint convolutional soft decoding and on universal variable-length codes rather than Huffman codes, is closer to general source and channel coding methods, and places no restriction on the size of the source symbol set. Simulations show that the proposed joint coding method achieves higher performance gains than the conventional separate coding approach.

20.
刘军清 (Liu Junqing), 孙军 (Sun Jun). 《通信学报》 (Journal on Communications), 2006, 27(12): 32-36.
The role of the residual redundancy left by source coding in joint coding is studied, and a joint source-channel coding method is proposed that provides effective error protection for variable-length source-coded bitstreams transmitted over noisy channels; the method exploits the residual redundancy in the source encoder output to protect the transmitted bitstream. Compared with the system proposed by Sayood, the method is based on improved joint convolutional soft decoding and on universal variable-length codes rather than Huffman codes, is closer to general source and channel coding methods, and places no restriction on the size of the source symbol set. Simulations show that the proposed joint coding method achieves higher performance gains than the conventional separate coding approach.


