Similar Articles
18 similar articles found (search time: 109 ms)
1.
A joint LDPC and SPIHT coding method for image compression and protection   Cited by: 1 (self-citations: 0, others: 1)
A joint source-channel coding scheme combining the SPIHT (set partitioning in hierarchical trees) algorithm with LDPC (Low-Density Parity-Check) codes is proposed. The scheme takes full account of the characteristics of both the SPIHT algorithm and LDPC codes, and applies unequal error protection in the channel code according to how important each part of the source-coded data is for reconstruction. Simulation results show that the proposed method achieves good noise resistance for images at high bit rates.
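As a reading aid (not code from the paper), the sketch below illustrates the unequal-error-protection idea: an embedded SPIHT-style bitstream is split into importance classes and each class is paired with a different channel-code rate. The class fractions, the rates and the `spiht_bits` stand-in are assumed example values; a real system would feed each class to an actual LDPC encoder of that rate.

```python
# Sketch: assign stronger (lower-rate) channel codes to the earlier,
# more important portion of an embedded SPIHT bitstream.
# All numbers below are illustrative assumptions, not values from the paper.

def split_by_importance(bitstream, fractions=(0.2, 0.3, 0.5)):
    """Split an embedded bitstream into importance classes.
    Earlier bits of an embedded stream matter more for reconstruction."""
    n = len(bitstream)
    segments, start = [], 0
    for f in fractions:
        end = min(n, start + round(f * n))
        segments.append(bitstream[start:end])
        start = end
    return segments

def assign_code_rates(segments, rates=(1/2, 2/3, 5/6)):
    """Pair each importance class with a channel-code rate
    (strongest protection, i.e. lowest rate, for the most important class)."""
    return list(zip(segments, rates))

if __name__ == "__main__":
    spiht_bits = [0, 1] * 500          # stand-in for a real SPIHT bitstream
    plan = assign_code_rates(split_by_importance(spiht_bits))
    for i, (seg, rate) in enumerate(plan):
        # channel bits needed if an LDPC code of this rate protected the segment
        print(f"class {i}: {len(seg)} info bits, rate {rate:.2f}, "
              f"{round(len(seg) / rate)} coded bits")
```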

2.
Hybrid wavelet image coding with error-correction capability   Cited by: 3 (self-citations: 0, others: 3)
The SPIHT wavelet image compression algorithm was a major step forward in image coding, but its performance degrades sharply over noisy channels. This paper proposes a hybrid source-channel coding algorithm based on SPIHT that exploits, rather than alters, the embedded nature of the bitstream: channel coding is carried out in a flexible, adjustable way alongside source coding. This greatly improves the robustness of SPIHT against channel errors and paves the way for its practical use.

3.
邹俊  杨济安 《通信技术》2008,41(2):78-80
The paper first gives a brief review of the background needed for joint source-channel coding, and then proposes a joint source-channel coding method for wireless channels based on the wavelet SPIHT algorithm. The main idea is to use wavelet-based SPIHT coding on the source side, while on the channel side RCPC codes provide unequal error protection of the SPIHT bitstream according to its importance, with interleaving added to strengthen resistance to burst errors. Experiments show that the method improves system performance and coding efficiency, and that it suits channels with large SNR fluctuations.
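A small sketch of the interleaving step mentioned above (my own illustration, not the paper's implementation): a row-in/column-out block interleaver spreads a burst of channel errors over many codewords so the RCPC decoder sees them as isolated errors. The 8×8 dimensions are an arbitrary assumption.

```python
# Sketch: row-in / column-out block interleaver and its inverse.
# A burst of errors in the interleaved stream lands in different rows
# (i.e. different codewords) after de-interleaving.

def interleave(seq, rows=8, cols=8):
    assert len(seq) == rows * cols, "pad the block to rows*cols in practice"
    matrix = [seq[r * cols:(r + 1) * cols] for r in range(rows)]
    return [matrix[r][c] for c in range(cols) for r in range(rows)]

def deinterleave(seq, rows=8, cols=8):
    matrix = [seq[c * rows:(c + 1) * rows] for c in range(cols)]
    return [matrix[c][r] for r in range(rows) for c in range(cols)]

if __name__ == "__main__":
    data = list(range(64))
    assert deinterleave(interleave(data)) == data   # round trip is lossless
```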

4.
This paper introduces a possible joint coding framework for future fourth-generation mobile communication systems. Based on an analysis of the wavelet-based SPIHT algorithm and Turbo coding, an improved wireless transmission system architecture is presented. Simulation results show that exploiting source characteristics in channel encoding and decoding effectively reduces latency and improves the error-correction performance of the channel code, making the approach suitable for future multimedia wireless transmission services.

5.
To improve the coding gain of decode-and-forward half-duplex relay communication systems, a joint LDPC code structure and a method for optimizing its degree distribution are proposed. The structure treats the source and relay sub-codes as parts of a single joint LDPC code; the destination decodes jointly from the messages received from both the source and the relay, recovering the information of both at once. To analyze the asymptotic performance of the joint LDPC code, a Gaussian-approximation density evolution algorithm over the AWGN channel is derived. Combining the decoding convergence condition with the degree-distribution constraints, the degree-distribution optimization problem for the joint LDPC code is formulated. Simulation results show that the joint LDPC code outperforms BE-LDPC codes and separately processed (SP) codes in both asymptotic and error-rate performance.

6.
The forward error correction (FEC) part of China's national standard for digital terrestrial television broadcasting uses a concatenation of BCH and LDPC codes. This paper presents a reference encoding algorithm for the quasi-cyclic LDPC codes in the national standard, discusses the structure of the corresponding generator matrix and the encoding steps, implements the encoding and decoding algorithms in software, and runs simulations. The results show that the quasi-cyclic LDPC codes in the standard retain good error-correction performance on the AWGN channel even at very low SNR.
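For orientation, "quasi-cyclic" means the parity-check matrix is assembled from cyclically shifted identity blocks (circulant permutation submatrices), which is what makes low-complexity encoding possible. The sketch below builds such a matrix from a generic shift table; the block size and shift values are made-up toy numbers, not the matrix defined in the national standard.

```python
import numpy as np

def circulant(shift, b):
    """b-by-b identity matrix cyclically right-shifted by `shift`;
    a negative shift denotes an all-zero block."""
    if shift < 0:
        return np.zeros((b, b), dtype=np.uint8)
    return np.roll(np.eye(b, dtype=np.uint8), shift, axis=1)

def qc_ldpc_parity_matrix(shift_table, b):
    """Assemble H from a table of circulant shifts (one entry per block)."""
    return np.block([[circulant(s, b) for s in row] for row in shift_table])

if __name__ == "__main__":
    # toy 2x4 base matrix with block size 5 -> a 10x20 parity-check matrix
    shifts = [[0, 1, -1, 3],
              [2, -1, 4, 0]]
    H = qc_ldpc_parity_matrix(shifts, b=5)
    print(H.shape)   # (10, 20)
```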

7.
乔良  郑辉 《信号处理》2014,30(10):1170-1175
For joint source-channel decoding of systems that use self-synchronizing scrambling, this paper treats the self-synchronizing scrambler as a special kind of convolutional code and proposes a soft-input soft-output (SISO) self-synchronizing descrambling algorithm analogous to convolutional decoding. The algorithm uses source redundancy to update the extrinsic information of the scrambled sequence, which then serves as a priori information for channel decoding; this realizes an exchange of soft information between descrambling and channel decoding, makes full use of the source redundancy, and effectively improves receiver performance. Simulation results show that with TPC coding and a source redundancy of 70%, joint source-channel decoding yields a gain of about 4.1 dB. Compared with a system using error-correction coding alone, joint source-channel decoding offers an even larger gain when a self-synchronizing scrambler is present.
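As background for the system model above: a self-synchronizing (multiplicative) scrambler XORs each bit with delayed copies of its own output, so a channel error only disturbs as many descrambled bits as the polynomial has taps, which is why it can be treated like a sparse convolutional code. The hard-decision sketch below uses taps at delays 18 and 23 as an assumed example polynomial; the paper's SISO algorithm replaces these XORs with soft log-likelihood updates.

```python
# Sketch: hard-decision self-synchronizing scrambler/descrambler
# with feedback taps at delays 18 and 23 (an assumed example polynomial).

TAPS = (18, 23)

def scramble(bits, taps=TAPS):
    reg = [0] * max(taps)                      # shift register of past outputs
    out = []
    for b in bits:
        s = b ^ reg[taps[0] - 1] ^ reg[taps[1] - 1]
        out.append(s)
        reg = [s] + reg[:-1]                   # feed the scrambled bit back
    return out

def descramble(bits, taps=TAPS):
    reg = [0] * max(taps)                      # shift register of past inputs
    out = []
    for s in bits:
        out.append(s ^ reg[taps[0] - 1] ^ reg[taps[1] - 1])
        reg = [s] + reg[:-1]                   # feed the received bit forward
    return out

if __name__ == "__main__":
    import random
    data = [random.randint(0, 1) for _ in range(1000)]
    assert descramble(scramble(data)) == data  # recovers data when error-free
```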

8.
Joint source-channel coding based on a sliding-window belief propagation algorithm   Cited by: 1 (self-citations: 1, others: 0)
鹿增辉  方勇  霍迎秋 《电视技术》2015,39(11):99-103
To address the sharp drop in decoding performance of joint source-channel coding when the source statistics and channel noise parameters are unknown, a joint source-channel coding scheme based on sliding-window belief propagation (SWBP), referred to simply as the sliding-window algorithm, is proposed. The scheme is implemented with irregular repeat-accumulate (IRA) codes, which achieve performance as good as low-density parity-check (LDPC) codes with far lower encoding complexity. A sliding window is introduced at the decoder to estimate the time-varying source statistics and channel noise parameters on the fly, improving decoding throughput. Experimental results show that the algorithm approaches the ideal performance obtained when the parameters are known, does not depend on initial parameter settings, and has low complexity and an easy implementation.
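A rough sketch, under simplifying assumptions of my own, of the sliding-window idea: estimate a slowly varying source bit probability from hard decisions in a centred window and convert it into prior LLRs that an iterative decoder could add to its channel LLRs. This is a simplification for illustration, not the SWBP algorithm from the paper; the window length and the toy source are arbitrary.

```python
import numpy as np

def sliding_window_prior_llrs(hard_decisions, window=101):
    """Estimate P(bit = 1) in a centred sliding window and convert it
    to a prior LLR  log(P(0)/P(1))  for each position."""
    x = np.asarray(hard_decisions, dtype=float)
    kernel = np.ones(window) / window
    # 'same' convolution gives a local average (the windowed estimate of P(1))
    p1 = np.convolve(x, kernel, mode="same")
    p1 = np.clip(p1, 1e-3, 1 - 1e-3)           # avoid infinite LLRs
    return np.log((1 - p1) / p1)

if __name__ == "__main__":
    # toy nonstationary source: mostly zeros, then mostly ones
    rng = np.random.default_rng(0)
    bits = np.concatenate([rng.random(2000) < 0.1, rng.random(2000) < 0.8])
    llr = sliding_window_prior_llrs(bits.astype(int))
    # positive (favouring 0) in the first segment, negative in the second
    print(llr[1000], llr[3000])
```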

9.
Image transmission over noisy channels is studied and a new transmission method based on unequal error protection is proposed. At the encoder, error-correcting arithmetic codes protect the SPIHT bitstream unequally: parts of different importance are coded with arithmetic codes using forbidden intervals of different sizes, so that different degrees of protection are applied. Compared with traditional unequal-error-protection schemes, this yields an almost continuously adjustable code rate. At the decoder, a stack sequence-estimation algorithm performs channel decoding before SPIHT decoding reconstructs the image. Experimental results show clear performance gains over the classical Guionnet unequal-error-protection scheme and over separate source-channel coding.

10.
Improved decoding algorithms for binary LDPC codes mainly aim either to improve hard-decision performance or to reduce the complexity of soft-decision decoding. This paper applies a Gauss-Markov random field (MRF) model to estimate the source parameters and to correct the log-likelihood ratios of the bit sequence received at the channel decoder, injecting the residual redundancy of the source into decoding to strengthen the decoder's error-correction capability. The correction coefficient of the source estimate adapts automatically, controlled by the bit-error-rate parameter. With unchanged computational complexity, the MRF-based LDPC decoding algorithm effectively improves decoding performance and lowers the bit error rate.
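The LLR correction described above can be pictured as adding a scaled source-prior term to each channel LLR before decoding, with the weight shrinking when the estimate looks unreliable. The sketch below is a generic illustration with an assumed adaptation rule; the actual MRF-based estimator and BER-driven control in the paper are more elaborate.

```python
import numpy as np

def corrected_llrs(channel_llrs, source_prior_llrs, estimated_ber):
    """Add residual source redundancy to the channel LLRs.
    The weight alpha shrinks as the estimated BER grows, so an unreliable
    source estimate is trusted less (an assumed adaptation rule, for
    illustration only)."""
    alpha = max(0.0, 1.0 - 10.0 * estimated_ber)
    return np.asarray(channel_llrs) + alpha * np.asarray(source_prior_llrs)

if __name__ == "__main__":
    ch = np.array([0.4, -1.2, 0.1, 2.0])     # noisy channel LLRs
    src = np.array([1.5, 1.5, 1.5, 1.5])     # source strongly favours zeros
    print(corrected_llrs(ch, src, estimated_ber=0.02))
```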

11.
谢亮  乔秦宝 《电视技术》2005,(9):13-15,18
Considering the characteristics of multimedia wireless communication, a new strategy combining adaptive error correction with joint optimization is proposed. It takes full account of the adaptive error-correction and joint-optimization properties of joint coding, and analyzes optimized source coding and optimized channel coding algorithms. Simulation results show that the method achieves good noise resistance for image data at high bit rates.

12.
Low-density parity-check codes and their application in image transmission   Cited by: 2 (self-citations: 0, others: 2)
Low-Density Parity-Check (LDPC) codes are a channel coding scheme based on graphs and iterative decoding; their performance approaches the Shannon limit with low implementation complexity, giving them strong error-correction and interference-resistance capability. This paper studies the basic principles of LDPC encoding and decoding in depth and applies them to image transmission over mobile fading channels. Simulation results show that LDPC codes bring a significant performance improvement to image transmission, with low system complexity and short decoding delay.
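To make the iterative-decoding idea concrete, here is a tiny Gallager-style bit-flipping decoder for a toy 4×7 parity-check matrix. Both the matrix and the hard-decision bit-flipping rule are textbook illustrations of my choosing, much simpler than the belief-propagation decoding normally used with LDPC codes, and they are not taken from the cited paper.

```python
import numpy as np

# Toy 4x7 parity-check matrix; rows are parity checks (not from the paper).
H = np.array([[1, 1, 0, 1, 0, 0, 0],
              [0, 1, 1, 0, 1, 0, 0],
              [0, 0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 0, 1]], dtype=np.uint8)

def bit_flip_decode(H, r, max_iters=20):
    """Hard-decision bit flipping: repeatedly flip the bit involved in the
    most unsatisfied parity checks until all checks pass."""
    x = r.copy()
    for _ in range(max_iters):
        syndrome = H @ x % 2
        if not syndrome.any():
            return x                       # valid codeword
        unsat = syndrome @ H               # per-bit count of failed checks
        x[np.argmax(unsat)] ^= 1
    return x

if __name__ == "__main__":
    codeword = np.zeros(7, dtype=np.uint8)   # the all-zero word is always valid
    received = codeword.copy()
    received[2] ^= 1                         # single channel error
    print(bit_flip_decode(H, received))      # recovers the all-zero codeword
```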

13.
An HARQ system based on rate-adaptive LDPC codes   Cited by: 1 (self-citations: 0, others: 1)
Based on an analysis of LDPC codes and commonly used unequal-error-protection strategies, a new code-rate adjustment strategy is proposed, and its BER and iteration behaviour are analyzed to demonstrate its effectiveness. On top of this strategy, an HARQ wireless image transmission scheme combining SPIHT and LDPC codes with unequal error protection is designed. Computer simulations show that the system protects image transmission well.
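One common way to realize the rate adaptation described above is incremental redundancy: the first transmission is heavily punctured, and each retransmission releases more stored parity bits, lowering the effective code rate until decoding succeeds. The sketch below only tracks the rate bookkeeping; the mother-code rate, first-round rate and parity increment are assumed example values, not parameters from the paper.

```python
# Sketch: rate adaptation by incremental redundancy.
# The mother-code rate and per-round increments are assumed example values.

def harq_rounds(info_bits, mother_rate=1/3, first_rate=5/6):
    """Yield (round, cumulative coded bits sent, effective rate) until the
    whole mother codeword has been transmitted."""
    total_coded = round(info_bits / mother_rate)
    sent = round(info_bits / first_rate)
    rnd = 1
    while True:
        yield rnd, sent, info_bits / sent
        if sent >= total_coded:
            break
        sent = min(total_coded, sent + info_bits // 2)   # extra parity chunk
        rnd += 1

if __name__ == "__main__":
    for rnd, sent, rate in harq_rounds(info_bits=1200):
        print(f"round {rnd}: {sent} coded bits sent, effective rate {rate:.2f}")
```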

14.
The use of coding error control is an integral part of the design of modern communication systems. Capacity-approaching codes such as turbo and LDPC codes, discovered or rediscovered in the past decade, offer near-Shannon-limit performance on the AWGN channel with rather low implementation complexity and are therefore increasingly being applied for error control in various fields of data communications. This article describes a generic multilevel modulation and coding scheme based on the use of turbo-like, or LDPC codes for DSL systems. It is shown that such codes provide significant gains in performance and allow an increase in data rate and/or loop reach that can be instrumental to the widespread deployment of future DSL services. Such techniques are also suitable for general multilevel modulation systems in other application areas.

15.
We propose a novel combined source and channel coding scheme for image transmission over noisy channels. The main feature of the proposed scheme is a systematic decomposition of image sources so that unequal error protection can be applied according to not only bit error sensitivity but also visual content importance. The wavelet transform is adopted to hierarchically decompose the image. The association between the wavelet coefficients and what they represent spatially in the original image is fully exploited so that wavelet blocks are classified based on their corresponding image content. The classification produces wavelet blocks in each class with similar content and statistics, therefore enables high performance source compression using the set partitioning in hierarchical trees (SPIHT) algorithm. To combat the channel noise, an unequal error protection strategy with rate-compatible punctured convolutional/cyclic redundancy check (RCPC/CRC) codes is implemented based on the bit contribution to both peak signal-to-noise ratio (PSNR) and visual quality. At the receiving end, a postprocessing method making use of the SPIHT decoding structure and the classification map is developed to restore the degradation due to the residual error after channel decoding. Experimental results show that the proposed scheme is indeed able to provide protection both for the bits that are more sensitive to errors and for the more important visual content under a noisy transmission environment. In particular, the reconstructed images illustrate consistently better visual quality than using the single-bitstream-based schemes.
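Since the bit allocation above is driven by each bit's contribution to PSNR, the standard definition it relies on is sketched below for 8-bit images; the random test arrays are placeholders.

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images:
    PSNR = 10 * log10(peak^2 / MSE)."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    noisy = np.clip(img + rng.normal(0, 5, img.shape), 0, 255).astype(np.uint8)
    print(f"{psnr(img, noisy):.2f} dB")
```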

16.
Due to its excellent rate–distortion performance, set partitioning in hierarchical trees (SPIHT) has become the state-of-the-art algorithm for image compression. However, the algorithm does not fully provide the desired features of progressive transmission, spatial scalability and optimal visual quality, at very low bit rate coding. Furthermore, the use of three linked lists for recording the coordinates of wavelet coefficients and tree sets during the coding process becomes the bottleneck of a fast implementation of the SPIHT. In this paper, we propose a listless modified SPIHT (LMSPIHT) approach, which is a fast and low memory image coding algorithm based on the lifting wavelet transform. The LMSPIHT jointly considers the advantages of progressive transmission, spatial scalability, and incorporates human visual system (HVS) characteristics in the coding scheme; thus it outperforms the traditional SPIHT algorithm at low bit rate coding. Compared with the SPIHT algorithm, LMSPIHT provides a better compression performance and a superior perceptual performance with low coding complexity. The compression efficiency of LMSPIHT comes from three aspects. The lifting scheme lowers the number of arithmetic operations of the wavelet transform. Moreover, a significance reordering of the modified SPIHT ensures that it codes more significant information belonging to the lower frequency bands earlier in the bit stream than that of the SPIHT to better exploit the energy compaction of the wavelet coefficients. HVS characteristics are employed to improve the perceptual quality of the compressed image by placing more coding artifacts in the less visually significant regions of the image. Finally, a listless implementation structure further reduces the amount of memory and improves the speed of compression by more than 51% for a 512×512 image, as compared with that of the SPIHT algorithm.
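The lifting scheme credited above with reducing arithmetic operations can be illustrated with one level of the reversible LeGall 5/3 transform (a familiar example of a lifting wavelet; the abstract does not name a specific filter). A predict step and an update step replace explicit filtering, and the inverse simply runs them backwards.

```python
def lift_53_forward(x):
    """One level of the reversible LeGall 5/3 lifting transform (1-D,
    even-length input, symmetric boundary extension)."""
    even, odd = x[0::2], x[1::2]
    n = len(odd)
    # predict: detail = odd sample minus the average of neighbouring evens
    d = [odd[i] - ((even[i] + even[min(i + 1, n - 1)]) >> 1) for i in range(n)]
    # update: approximation = even sample plus rounded quarter of nearby details
    s = [even[i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n)]
    return s, d

def lift_53_inverse(s, d):
    n = len(d)
    even = [s[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(n)]
    odd = [d[i] + ((even[i] + even[min(i + 1, n - 1)]) >> 1) for i in range(n)]
    return [v for pair in zip(even, odd) for v in pair]

if __name__ == "__main__":
    signal = [10, 12, 14, 13, 9, 7, 8, 11]
    s, d = lift_53_forward(signal)
    assert lift_53_inverse(s, d) == signal     # lifting is perfectly invertible
```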

17.
Efficient error control coding (ECC) can improve the transmission reliability and the energy efficiency of wireless sensor networks. To fully exploit the diversity resources inherent in wireless sensor networks against the high error probabilities caused by harsh channel environments, this paper studies error control coding based on root-check full-diversity LDPC codes. First, a coding scheme based on root-check full-diversity LDPC codes is proposed for clustered wireless sensor networks; second, a rate-compatible full-diversity LDPC codeword structure suited to the proposed scheme is designed; finally, the energy efficiency of the proposed coding system is analyzed. Simulation results show that under poor channel conditions (channel noise above 4×10⁻⁴ mW in the simulations), the proposed coding scheme significantly improves the energy efficiency of wireless sensor networks.

18.
We compare convolutional codes and LDPC codes with respect to their decoding performance and their structural delay, which is the inevitable delay solely depending on the structural properties of the coding scheme. Besides the decoding performance, the data delay caused by the channel code is of great importance as this is a crucial factor for many applications. Convolutional codes are known to show a good performance while imposing only a very low latency on the data. LDPC codes yield superior decoding performance but impose a larger delay due to the block structure. The results obtained by comparison will also be related to theoretical limits obtained from random coding and the sphere packing bound. It will be shown that convolutional codes are still the first choice for applications for which a very low data delay is required and the bit error rate is the considered performance criterion. However, if one focuses on a low signal-to-noise ratio or if the obtained frame error rate is the basis for comparison, LDPC codes compare favorably.
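As a back-of-the-envelope companion to the comparison above: the structural delay of a block code grows with its block length, while a convolutional (Viterbi) decoder only waits for its decision depth. The sketch below turns that into numbers; the block size, decision depth and information rate are arbitrary assumptions chosen for illustration.

```python
# Sketch: structural delay (in information bits and in time) implied by
# the coding scheme alone, ignoring processing time.
# All parameters below are assumed example values.

def block_code_delay_bits(n, rate):
    """An (n, k) block code cannot be decoded before all n coded bits,
    i.e. k = n * rate information bits, have arrived."""
    return n * rate

def convolutional_delay_bits(decision_depth):
    """A Viterbi decoder typically releases a bit after roughly
    `decision_depth` further information bits (a few constraint lengths)."""
    return decision_depth

if __name__ == "__main__":
    info_rate_bps = 1e6                       # 1 Mbit/s of information bits
    ldpc_bits = block_code_delay_bits(n=64800, rate=0.5)   # long-block example
    conv_bits = convolutional_delay_bits(decision_depth=5 * 7)
    for name, bits in [("LDPC", ldpc_bits), ("convolutional", conv_bits)]:
        print(f"{name}: {bits:.0f} info bits -> {1e3 * bits / info_rate_bps:.3f} ms")
```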
