Similar Documents
1.
黄胜  曹志雄  郑秀凤 《电讯技术》2021,61(11):1385-1390
At short and medium code lengths the channel polarization of polar codes is incomplete, so error propagation easily occurs during the decoding of parity-check concatenated codes and degrades the performance of the decoding algorithm. To reduce the impact of error propagation on parity-check concatenation, a new parity-check concatenation method is designed. The method uses Gaussian estimation to select a subset of critical, error-prone information bits for non-uniform segmented parity checks, which effectively reduces the impact of error propagation on the parity checks; combined with cyclic redundancy check concatenation for selecting the correct path, it improves decoding performance under large list sizes and high signal-to-noise ratios. Simulations show that the new concatenated code gains on average 0.1-0.15 dB over CA-SCL (Cyclic-redundancy-check Aided Successive Cancellation List) decoding. Moreover, combined with an adaptive algorithm, the improved decoding performance lets the adaptive algorithm succeed with a smaller list, reducing the decoding complexity of the adaptive algorithm by 6%-25% at lower signal-to-noise ratios.
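
A minimal sketch of the bit-selection step described above, assuming a Bhattacharyya-parameter recursion over a binary erasure channel as a simple stand-in for the paper's Gaussian estimation; the code length, number of critical bits and function names are illustrative, and the segmented parity checks themselves are not shown.

    import numpy as np

    def bhattacharyya(n, z0=0.5):
        """Bhattacharyya parameters of the N = 2**n synthetic channels (BEC stand-in)."""
        z = np.array([z0])
        for _ in range(n):
            nz = np.empty(2 * z.size)
            nz[0::2] = 2 * z - z ** 2   # degraded child channel
            nz[1::2] = z ** 2           # upgraded child channel
            z = nz
        return z                        # larger value = less reliable position

    def pick_critical_bits(info_set, z, num_critical):
        """Pick the least reliable (most error-prone) positions of the information set."""
        info = np.array(sorted(info_set))
        worst_first = info[np.argsort(z[info])[::-1]]
        return set(worst_first[:num_critical].tolist())

    # Toy run: N = 128, K = 64; flag the 8 most error-prone information bits as
    # candidates to be covered by the non-uniform segmented parity checks.
    z = bhattacharyya(7)
    info_set = set(np.argsort(z)[:64].tolist())
    critical = pick_critical_bits(info_set, z, 8)
    print(sorted(critical))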

2.
李正杰  刘顺兰  张旭 《电信科学》2022,38(7):96-105
As linear block codes, polar codes have low encoding complexity and a deterministic construction, but their performance degrades at short and medium code lengths. A design method based on segmented cyclic redundancy check (CRC) codes concatenated with Hash polar codes is proposed. Building on the original Hash polar code (Hash-Polar), the method adds segmented CRC checks to form a double check: during decoding the segmented CRC codes assist the path metric, i.e., they modify the decoding paths, which makes path selection more reliable and improves performance. In addition, segmented checking spreads the check bits over the input information sequence, so a decoding path that fails a CRC check can be terminated early, saving unnecessary decoding computation. Finally, at the end of decoding, the Hash check verifies the L modified paths and selects the best decoding path. Simulation results show that the proposed design achieves better error performance than the CRC-aided Hash polar code (Hash-CRC-Polar). On the Gaussian channel, at a code length of 128 bit, a code rate of 1/2 and a bit error rate of 10^-3, the proposed segmented-CRC-based Hash polar code gains about 0.25 dB over Hash-CRC-Polar.
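
The early-termination and final-selection steps can be pictured with the following sketch, in which a single parity bit per segment stands in for the paper's segment CRC and a truncated SHA-256 digest stands in for the Hash check; the list decoder that produces the candidate paths is not shown, and all names are illustrative.

    import hashlib

    def seg_check_ok(path_bits, seg_start, seg_end):
        """Segment check: the last bit of the segment is a parity bit over the rest
        (a single parity bit stands in here for the segment CRC)."""
        seg = path_bits[seg_start:seg_end]
        return sum(seg[:-1]) % 2 == seg[-1]

    def hash_tag(info_bits, nbits=8):
        """Short hash tag of the information block (truncated SHA-256 as a stand-in hash)."""
        digest = hashlib.sha256(bytes(info_bits)).digest()
        return [digest[0] >> (7 - i) & 1 for i in range(nbits)]

    def prune_at_segment(paths, seg_start, seg_end):
        """Early termination: drop list paths whose just-decoded segment fails its check."""
        survivors = [p for p in paths if seg_check_ok(p, seg_start, seg_end)]
        return survivors or paths                     # never empty the list completely

    def final_select(paths, tag_len=8):
        """After decoding, the hash tag picks the best of the L surviving paths."""
        for p in paths:
            if hash_tag(p[:-tag_len], tag_len) == p[-tag_len:]:
                return p
        return paths[0]                               # fall back to the most likely path

    # Toy run with two segments of four data bits each
    info = [1, 0, 1, 1, 0, 1, 0, 0]
    seg1 = info[:4] + [sum(info[:4]) % 2]
    seg2 = info[4:] + [sum(info[4:]) % 2]
    good = seg1 + seg2 + hash_tag(seg1 + seg2)
    bad = good[:]; bad[1] ^= 1                        # a decoding error inside segment 1
    survivors = prune_at_segment([bad, good], 0, len(seg1))
    print(final_select(survivors) == good)            # True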

3.
As an error-correcting code, polar codes offer good encoding and decoding performance and have been adopted as the standard coding scheme for the 5G short-code control channel. At short code lengths, however, their performance is not outstanding. A new coding and decoding method based on enhanced parity-check-concatenated polar codes is proposed: enhanced check bits are placed after the original parity-check bits to double-check the information bits of low channel reliability in the check equations, helping the parity-check code prune paths during decoding and thereby making path selection more reliable. Simulation results show that, for the same channel, code rate and code length, the proposed method achieves better error performance than cyclic redundancy check (CRC)-concatenated polar codes and parity-check (PC)-concatenated polar codes. On the Gaussian channel, at a code length of 128, a code rate of 1/2 and a bit error rate of 10^-3, the proposed enhanced-PC-concatenated polar code gains about 0.3 dB over the PC-concatenated polar code and about 0.4 dB over the CRC-aided polar code.

4.
A novel scheme for designing polar codes with specific decoding schemes in the additive white Gaussian noise channel is presented in this paper. The code construction strategy is built on the genetic algorithm (GA), where successive evolution of populations (groups of information sets) leads toward the fittest candidate, i.e., the one attaining the best error rates. In this work it is shown that better error rates can be attained, compared to existing polar decoding schemes, both by successive cancellation list decoding of GA-designed codes with no cyclic redundancy check (CRC) and by belief propagation decoding of GA-designed codes with no CRC and no list. Our proposed polar code design scheme using GA can attain a target block error rate at the lowest possible SNR and with no additional CRC, by exploiting the fewest belief propagation iterations or a smaller successive cancellation list size, with no alterations to the decoding algorithm itself.
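
To make the evolutionary construction concrete, here is a compact sketch of a genetic search over information sets under stated assumptions: the fitness below is simply the sum of per-position reliabilities (a fast stand-in for the simulated block error rate the paper actually optimizes), and the population size, mutation rate and all names are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(mask, reliab):
        """Stand-in fitness of an information set: sum of reliabilities of the chosen
        positions (the paper instead evaluates simulated block error rates)."""
        return reliab[mask].sum()

    def repair(mask, K, N):
        """Force a candidate mask to select exactly K information positions."""
        idx = np.flatnonzero(mask)
        if idx.size > K:
            mask[rng.choice(idx, idx.size - K, replace=False)] = False
        while mask.sum() < K:
            mask[rng.integers(N)] = True
        return mask

    def ga_design(N, K, reliab, pop=20, gens=50, pmut=0.02):
        """Evolve a population of information sets toward the fittest candidate."""
        popu = [repair(rng.random(N) < K / N, K, N) for _ in range(pop)]
        for _ in range(gens):
            popu.sort(key=lambda m: -fitness(m, reliab))
            elite = popu[:pop // 2]                      # survivors of this generation
            children = []
            while len(elite) + len(children) < pop:
                a, b = rng.choice(len(elite), 2, replace=False)
                cut = rng.integers(1, N)                 # one-point crossover
                child = np.concatenate([elite[a][:cut], elite[b][cut:]])
                child ^= rng.random(N) < pmut            # mutation
                children.append(repair(child, K, N))
            popu = elite + children
        return max(popu, key=lambda m: fitness(m, reliab))

    # Toy run: N = 64, K = 32 with random stand-in reliabilities
    best_info_set = np.flatnonzero(ga_design(64, 32, rng.random(64)))
    print(best_info_set)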

5.
The conventional list Viterbi algorithm (LVA) produces a list of the L best output sequences over a certain block length in decoding a terminated convolutional code. We show in this paper that the LVA with a sufficiently long list is an optimum maximum-likelihood decoder for the concatenated pair of a convolutional code and a cyclic redundancy check (CRC) block code with error detection. The CRC is used to select the output. New LVAs for continuous transmission are proposed and evaluated, where no termination bits are required for the convolutional code for every CRC block. We also present optimum and suboptimum LVAs for tailbiting convolutional codes. Convolutional codes with Viterbi decoding were proposed for so-called hybrid in-band on-channel (hybrid IBOC) systems for digital audio broadcasting compatible with the frequency modulation band. For high-quality audio signals, it is beneficial to use error concealment/error mitigation techniques to avoid the worst type of channel errors. This requires a reliable error flag mechanism (error detection feature) in the channel decoder. A CRC on a block of audio information bits provides this mechanism. We demonstrate how the LVA can significantly reduce the flag rate compared to the regular Viterbi algorithm (VA) for the same transmission parameters. At the expense of complexity, a receiver-optional LVA can reduce the flag rate by more than an order of magnitude. The difference in audio quality is dramatic. The LVA is backward compatible with the VA.
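
The CRC-based output selection from the list can be summarized with a small sketch; it assumes the CRC bits are appended to the information bits of each block, and it uses a toy CRC-3 polynomial purely for illustration (the LVA that produces the ordered candidate list is not shown).

    def crc_bits(bits, poly_bits):
        """CRC remainder of a 0/1 list, computed by polynomial long division over GF(2)."""
        deg = len(poly_bits) - 1
        reg = list(bits) + [0] * deg
        for i in range(len(bits)):
            if reg[i]:
                for j, p in enumerate(poly_bits):
                    reg[i + j] ^= p
        return reg[-deg:]

    def select_from_list(candidates, poly_bits):
        """Return the most likely candidate (the list is ordered best-first) whose CRC
        checks, or None to raise an error flag when no candidate passes."""
        deg = len(poly_bits) - 1
        for cand in candidates:
            info, chk = cand[:-deg], cand[-deg:]
            if crc_bits(info, poly_bits) == chk:
                return cand
        return None                                    # error flag for concealment/mitigation

    # Toy usage with CRC-3 (x^3 + x + 1) and a 2-entry list
    poly = [1, 0, 1, 1]
    msg = [1, 0, 1, 1, 0]
    good = msg + crc_bits(msg, poly)
    bad = good[:]; bad[0] ^= 1
    print(select_from_list([bad, good], poly) == good)  # True: the second-best path passes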

6.
李纯  童新海 《通信技术》2015,48(1):19-22
The successive cancellation decoding algorithm for polar codes still falls short of the performance of conventional LDPC codes. The successive cancellation list (SCL) algorithm greatly improves decoding performance and is an important step toward the practical application of polar codes, but it has high complexity and large latency. An improved SCL decoding algorithm is proposed for the case where the polar code length is constrained. The paper describes the SCL algorithm as representing the decoding process by searching sequence paths on the code tree, and the improved algorithm reduces time and space complexity by reducing the number of sequence paths searched on the code tree. Simulations show that the improved algorithm effectively lowers decoding complexity while approaching the performance of maximum likelihood (ML) decoding.

7.
In order to reduce the number of redundant candidate codewords generated by the fast successive cancellation list (FSCL) decoding algorithm for polar codes, a simplified FSCL decoding algorithm based on critical sets (CS-FSCL) of polar codes is proposed. The algorithm utilizes the number of information bits belonging to the CS in the special nodes, such as Rate-1 node, repetition (REP) node and single-parity-check (SPC) node, to constrain the number of the path splitting and avoid the generation of unnecessary candidate codewords, and thus the latency and computational complexity are reduced. Besides, the algorithm only flips the bits corresponding to the smaller log-likelihood ratio (LLR) values to generate the sub-maximum likelihood (sub-ML) decoding codewords and ensure the decoding performance. Simulation results show that for polar codes with the code length of 1 024, the code rates of 1/4, 1/2 and 3/4, the proposed CS-FSCL algorithm, compared with the conventional FSCL decoding algorithm, can achieve the same decoding performance, but reduce the latency and computational complexity at different list sizes. Specifically, under the list size of L=8, the code rates of R=1/2 and R=1/4, the latency is reduced by 33% and 13% and the computational complexity is reduced by 55% and 50%, respectively.
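
A minimal sketch of the bit-flipping idea described above for a Rate-1 node, assuming the usual LLR sign convention (positive LLR favors bit 0); the critical-set logic that limits path splitting in CS-FSCL is not shown, and the function name is illustrative.

    import numpy as np

    def rate1_candidates(llrs, num_flips=2):
        """Fast Rate-1 node handling: the ML decision is the bit-wise hard decision;
        sub-ML candidates are obtained by flipping only the least reliable bits
        (smallest |LLR|), one at a time."""
        llrs = np.asarray(llrs, dtype=float)
        ml = (llrs < 0).astype(int)                 # hard decision, LLR < 0 -> 1
        weakest = np.argsort(np.abs(llrs))[:num_flips]
        candidates = [ml.copy()]
        for i in weakest:                           # each flip yields one extra candidate
            c = ml.copy()
            c[i] ^= 1
            candidates.append(c)
        return candidates

    # Toy usage: the two flipped candidates touch the positions with |LLR| 0.1 and 0.4
    print(rate1_candidates([2.3, -0.1, 1.7, -0.4], num_flips=2))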

8.
To address the high complexity of the successive cancellation list bit-flip (SCLF) decoding algorithm for polar codes, a low-complexity SCLF decoding algorithm for polar codes based on distributed parity-check codes (DPC-SCLF) is proposed. Unlike SCLF decoding algorithms that rely only on a cyclic redundancy check (CRC) code, the algorithm first constructs a critical set from the partial order of the polarized channels, and then combines distributed parity-check (PC) codes with the CRC code to detect, identify and flip erroneous bits, which improves flipping accuracy and reduces the number of re-decoding attempts. In addition, a path-pruning operation during decoding strengthens the competitiveness of the correct path and improves error performance, while an early-termination operation reduces the number of decoded bits. Simulation results show that, compared with the D-Post-SCLF and RCS-SCLF decoding algorithms, the proposed algorithm has lower decoding complexity and better error performance at medium-to-high signal-to-noise ratios.

9.
Hybrid in-band on-channel digital audio broadcasting systems deliver digital audio signals in a way that is backward compatible with existing analog FM transmission. We present a channel error correction and detection system that is well suited for use with audio source coders, such as the so-called perceptual audio coder (PAC), that have error concealment/mitigation capabilities. Such error mitigation is quite beneficial for high-quality audio signals. The proposed system involves an outer cyclic redundancy check (CRC) code that is concatenated with an inner convolutional code. The outer CRC code is used for error detection, providing flags to trigger the error mitigation routines of the audio decoder. The inner convolutional code consists of so-called complementary punctured-pair convolutional codes, which are specifically tailored to combat the unique adjacent channel interference characteristics of the FM band. We introduce a novel decoding method based on the so-called list Viterbi algorithm (LVA). This LVA-based decoding method, which may be viewed as a type of joint or integrated error correction and detection, exploits the concatenated structure of the channel code to provide enhanced decoding performance relative to decoding methods based on the conventional Viterbi algorithm (VA). We also present results of informal listening tests and other simulations on the Gaussian channel. These results include the preferred length of the outer CRC code for 96-kb/s audio coding and demonstrate that LVA-based decoding can significantly reduce the error flag rate relative to conventional VA-based decoding, resulting in dramatically improved decoded audio quality. Finally, we propose a number of methods for screening undetected errors in the audio domain.

10.
沈周青  尚俊娜 《电信科学》2018,34(11):77-86
To address the high latency of the successive cancellation list (SCL) decoding algorithm for polar codes, a log-likelihood-ratio-based multi-bit SCL (MSCL) decoding algorithm is proposed, which decodes multiple codeword bits at the same decision instant. Without any loss of decoding performance, it reduces the decoding latency from 3N-2 clock cycles to 4N/M-2 clock cycles, and compared with existing multi-bit SCL decoding algorithms, the MSCL algorithm has lower path-metric computation complexity. To reduce the decoding latency and memory requirement of the cyclic redundancy check (CRC)-aided SCL (CA-SCL) decoding algorithm, a segmented CRC-aided MSCL (SCA-MSCL) decoding algorithm is proposed, together with a segment-length correction algorithm for the information words which guarantees that, with the information index set A unchanged, the information index at the end of each segment is divisible by M. SCA-MSCL can exploit the multiple CRC decisions to output decoded words as early as possible, thereby reducing decoder memory and decoding latency.

11.
The Viterbi algorithm (VA) is the maximum likelihood decoding algorithm for convolutionally encoded data. Improvements in the performance of a concatenated coding system that uses VA decoding (inner decoder) can be obtained when, in addition to the standard VA output, an indicator of the reliability of the VA decision is delivered to the outer stage of processing. Two different approaches of extending the VA are considered. In the first approach, the VA is extended with a soft output (SOVA) unit that calculates reliability values for each of the decoded output information symbols. In the second approach, coding gains are obtained by delivering a list of the L best estimates of the transmitted data sequence, namely the list Viterbi decoding algorithm (LVA). Our main interest is to evaluate the LVA and the SOVA in comparison with each other, to determine suitable applications for both algorithms, and to construct extended versions of the LVA and the SOVA with low complexity that perform the task of the other algorithm. We define a list-output VA that uses the output symbol reliability information of the SOVA to generate a list of size L and that also has a lower complexity than the regular LVA for a long list size. We evaluate the list-SOVA in comparison to the LVA. Further, we introduce a low-complexity soft-symbol-output Viterbi algorithm that accepts the (short) list output of the LVA and calculates a reliability value for each of the decoded information bits. The complexity and the performance of the soft-LVA (LVA and soft decoding unit) are a function of the list size L. The performance of the soft-LVA and the SOVA are compared in a concatenated coding system. A new software implementation of the iterative serial version of the LVA is also included.

12.
Based on an analysis of the performance, decoding complexity and latency of LDPC-Turbo concatenated codes, an optimized design that improves the performance of LDPC-Turbo concatenated codes is proposed: an interleaver is inserted between the LDPC encoder and the Turbo encoder. Simulation results show that the improved LDPC-Turbo code not only improves performance but also effectively reduces the average number of iterations and the decoding latency, especially at high signal-to-noise ratios.
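
A minimal sketch of the proposed interleaver placement, assuming a fixed pseudo-random permutation known to both transmitter and receiver; the LDPC and Turbo encoders/decoders themselves are not shown, and the block length is illustrative.

    import numpy as np

    def make_interleaver(n, seed=0):
        """Fixed pseudo-random permutation shared by transmitter and receiver."""
        return np.random.default_rng(seed).permutation(n)

    def interleave(bits, perm):
        return np.asarray(bits)[perm]

    def deinterleave(bits, perm):
        out = np.empty_like(np.asarray(bits))
        out[perm] = np.asarray(bits)
        return out

    # Toy usage: the LDPC codeword is interleaved before entering the Turbo encoder,
    # and the Turbo decoder output is de-interleaved before LDPC decoding.
    ldpc_codeword = np.random.default_rng(1).integers(0, 2, 16)
    perm = make_interleaver(ldpc_codeword.size)
    turbo_input = interleave(ldpc_codeword, perm)
    assert np.array_equal(deinterleave(turbo_input, perm), ldpc_codeword)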

13.
A Bidirectional Efficient Algorithm for Searching code Trees (BEAST) is proposed for efficient soft-output decoding of block codes and concatenated block codes. BEAST operates on trees corresponding to the minimal trellis of a block code and finds a list of the most probable codewords. The complexity of the BEAST search is significantly lower than the complexity of trellis-based algorithms, such as the Viterbi algorithm and its list generalizations. The outputs of BEAST, a list of best codewords and their metrics, are used to obtain approximate a posteriori probabilities (APPs) of the transmitted symbols, yielding a soft-input soft-output (SISO) symbol decoder referred to as the BEAST-APP decoder. This decoder is employed as a component decoder in iterative schemes for decoding of product and incomplete product codes. Its performance and convergence behavior are investigated using extrinsic information transfer (EXIT) charts and compared to existing decoding schemes. It is shown that the BEAST-APP decoder achieves performances close to the Bahl–Cocke–Jelinek–Raviv (BCJR) decoder with a substantially lower computational complexity.
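
The step from a list of codewords and their metrics to approximate APPs can be sketched as follows, using the max-log approximation on negative log-likelihood metrics; this illustrates the general idea of deriving soft outputs from a list (not the BEAST search itself), and the clipping value is an illustrative assumption.

    import numpy as np

    def list_to_llrs(codewords, metrics, clip=20.0):
        """Approximate per-bit LLRs from a list of candidate codewords and their
        negative log-likelihood metrics (smaller metric = more likely), using the
        max-log approximation: LLR_i ~ min_{c: c_i=1} metric(c) - min_{c: c_i=0} metric(c)."""
        cw = np.asarray(codewords)            # shape (L, n), entries 0/1
        m = np.asarray(metrics, dtype=float)  # shape (L,)
        llr = np.empty(cw.shape[1])
        for i in range(cw.shape[1]):
            m0 = m[cw[:, i] == 0]
            m1 = m[cw[:, i] == 1]
            if m0.size == 0:                  # bit is never 0 in the list: saturate toward 1
                llr[i] = -clip
            elif m1.size == 0:                # bit is never 1 in the list: saturate toward 0
                llr[i] = clip
            else:
                llr[i] = m1.min() - m0.min()
        return llr

    # Toy usage: three candidates, the all-zero word being the most likely
    cw = [[0, 0, 0, 0], [1, 0, 0, 1], [0, 1, 1, 0]]
    print(list_to_llrs(cw, [1.0, 3.5, 4.0]))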

14.
刘重阳  郭锐 《电信科学》2022,38(10):79-88
To improve the receiver performance of a polar-coded sparse code multiple access (SCMA) system, a cyclic redundancy check (CRC)-aided joint iterative detection and decoding receiver based on a simplified soft cancellation list (SSCANL) decoder is proposed. In this scheme the polar decoder uses the SSCANL decoding algorithm, which applies a decoding-node deletion technique to simplify the L soft cancellation (SCAN) decodings required by the soft cancellation list (SCANL) algorithm: by approximately deleting frozen-bit nodes, the soft-information update between nodes is simplified and the computational complexity of the decoding algorithm is reduced. Simulation results show that the SSCANL algorithm achieves the same performance as the SCANL algorithm with lower computational complexity, and the lower the code rate, the larger the complexity reduction. Moreover, at a bit error rate of 10^-4, the CRC-aided joint iterative detection and decoding receiver based on the SSCANL decoder outperforms the SCAN-decoder-based joint iterative detection and decoding (JIDD-SCAN) scheme and the SCAN-decoder-based CRC-aided joint iterative detection and decoding (C-JIDD-SCAN) scheme by about 0.65 dB and 0.59 dB, respectively.

15.
Majority-logic-like decoding is an outer concatenated code decoding technique using the structure of a binary majority logic code. It is shown that it is easy to adapt such a technique to handle the case where the decoder is given an ordered list of two or more prospective candidates for each inner code symbol. Large reductions in failure probability can be achieved. Simulation results are shown for both block and convolutional codes. Punctured convolutional codes allow a convenient flexibility of rate while retaining high decoding power. For example, a (856, 500) terminated convolutional code with an average of 180 random first-choice symbol errors can correct all the errors in a simple manner about 97% of the time, with the aid of second-choice values. A (856, 500) maximum-distance block code could correct only up to 178 errors based on guaranteed correction capability and would be extremely complex.

16.
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

17.
Soft-decision decoding of a linear block code using the most reliable basis corresponding to each received word is investigated. Based either on probabilistic properties or on the structure of the code considered, three improvements to the algorithm devised by Fossorier and Lin (see ibid., vol.41, no.9, p.1379-1396, 1995) are presented. These modifications allow large computation savings or significant decoding speedup with little error performance degradation. First, a reduced probabilistic list of codeword candidates is associated with order-i reprocessing of a given code. It results in a large reduction of the maximum number of computations with a very small degradation in performance. Then, a probabilistic stopping criterion is introduced for order-0 reprocessing. This new test significantly decreases the average number of computations when appropriately implemented. Finally, the application of the algorithm to coset decoding is considered for |u|u+v|-constructed codes. In addition to the conventional coset decoding, a new adaptive, practically optimum coset decoding method is presented in which the number of surviving cosets decreases at each reprocessing stage. Suboptimum closest coset decoding is also investigated. It is shown that two-stage decoding with the algorithm of Fossorier and Lin offers a large variety of choices, since the reprocessing order of each stage can be determined independently.

18.
In communication systems employing a serially concatenated cyclic redundancy check (CRC) code along with a convolutional code (CC), erroneous packets after CC decoding are usually discarded. The list Viterbi algorithm (LVA) and the iterative Viterbi algorithm (IVA) are two existing approaches capable of recovering erroneously decoded packets. We here employ a soft decoding algorithm for CC decoding, and introduce several schemes to identify error patterns using the posterior information from the CC soft decoding module. The resultant iterative decoding-detecting (IDD) algorithm improves error performance by iteratively updating the extrinsic information based on the CRC parity check matrix. Assuming errors only happen in unreliable bits characterized by small absolute values of the log-likelihood ratio (LLR), we also develop a partial IDD (P-IDD) alternative which exhibits comparable performance to IDD by updating only a subset of unreliable bits. We further derive a soft-decision syndrome decoding (SDSD) algorithm, which identifies error patterns from a set of binary linear equations derived from CRC syndrome equations. Being noniterative, SDSD is able to estimate error patterns directly from the decoder output. The packet error rate (PER) performance of SDSD is analyzed following the union bound approach on pairwise errors. Simulations indicate that both IDD and IVA are better tailored for single parity check (PC) codes than for CRC codes. SDSD outperforms both IDD and LVA with weak CC and strong CRC. Applicable to AWGN and flat fading channels, our algorithms can also be extended to turbo coded systems.
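
The reliability assumption quoted above (errors live in the bits with small |LLR|) can be illustrated with the following sketch, which searches low-weight flip patterns over the least reliable positions until the CRC checks; this is a simplified stand-in for intuition only, not the paper's SDSD linear-equation solver or the IDD extrinsic update, and the polynomial and thresholds are illustrative.

    from itertools import combinations

    def crc_bits(bits, poly_bits):
        """CRC remainder of a 0/1 list, computed by polynomial long division over GF(2)."""
        deg = len(poly_bits) - 1
        reg = list(bits) + [0] * deg
        for i in range(len(bits)):
            if reg[i]:
                for j, p in enumerate(poly_bits):
                    reg[i + j] ^= p
        return reg[-deg:]

    def repair_with_crc(hard_bits, llrs, poly_bits, num_unreliable=8, max_weight=2):
        """Try to repair a CRC-protected packet (info || crc) by flipping only the
        least reliable positions, i.e., those with the smallest |LLR|."""
        deg = len(poly_bits) - 1
        def ok(bits):
            return crc_bits(bits[:-deg], poly_bits) == bits[-deg:]
        if ok(hard_bits):
            return hard_bits
        weak = sorted(range(len(llrs)), key=lambda i: abs(llrs[i]))[:num_unreliable]
        for w in range(1, max_weight + 1):
            for pos in combinations(weak, w):
                trial = list(hard_bits)
                for i in pos:
                    trial[i] ^= 1
                if ok(trial):
                    return trial
        return hard_bits                               # give up: keep the hard decisions

    # Toy usage: one channel error at the position with the smallest |LLR|
    poly = [1, 0, 1, 1]                                # toy CRC-3: x^3 + x + 1
    msg = [1, 1, 0, 1, 0, 0, 1, 0]
    packet = msg + crc_bits(msg, poly)
    rx = packet[:]; rx[3] ^= 1
    llrs = [3.1, -2.7, 2.2, 0.3, -1.9, 2.5, -3.3, 2.8, 1.4, 1.2, 2.0]
    print(repair_with_crc(rx, llrs, poly) == packet)   # True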

19.
封宏俊  雷菁  李二保 《信号处理》2017,33(5):766-773
Systematic polar codes have better error performance than non-systematic polar codes, but there is as yet no dedicated systematic decoding algorithm, so systematic polar codes are usually decoded by concatenating a non-systematic decoder with a re-encoding step, which introduces a large decoding delay. To address this problem, this paper proposes a systematic decoding scheme based on a flip-sequence-checked list successive cancellation algorithm. The scheme is path-adaptive, eliminates the re-encoding step through a backtracking update process, and greatly reduces resource usage through an alternating update-and-check strategy. The study shows that, compared with a concatenated decoding scheme based on AD-SCL, the improved scheme reduces resource usage and decoding delay by 50% while slightly improving error performance.
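
The conventional re-encoding step that the proposed scheme removes can be sketched as follows: the non-systematic estimate u_hat (obtained from, e.g., an SCL decoder, which is not shown) is passed through the polar transform and the information bits are read from the systematic positions; the numpy code below is an illustrative sketch, and the information set in the toy run is arbitrary.

    import numpy as np

    def polar_transform(u):
        """x = u * F^{(kron n)} over GF(2), computed with the usual butterfly."""
        x = np.array(u, dtype=np.uint8) & 1
        step = 1
        while step < x.size:
            for i in range(0, x.size, 2 * step):
                x[i:i + step] ^= x[i + step:i + 2 * step]
            step *= 2
        return x

    def systematic_decode_by_reencoding(u_hat, info_set):
        """Conventional systematic decoding: re-encode the non-systematic estimate and
        read the information bits from the systematic (information-set) positions."""
        x_hat = polar_transform(u_hat)
        return x_hat[sorted(info_set)]

    # The transform is an involution over GF(2), so re-encoding is just one more pass.
    u = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
    assert np.array_equal(polar_transform(polar_transform(u)), u)
    print(systematic_decode_by_reencoding(u, {1, 3, 5, 7}))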

20.
This paper presents a CRC (cyclic redundancy check)-aided turbo equalization approach to reduce computational complexity. In this approach, CRC code bits are appended to the end of each transmit block, and a cyclic redundancy check is performed after decoding each block at the receiver end. If the checksum is zero, which means the received block is correct, the corresponding LLRs (log-likelihood ratios) of this block are set to highly reliable values, and all the computations corresponding to this block can be cancelled for the subsequent outer iterations. With lower computational complexity, the proposed approach can achieve the same or even better performance than the conventional non-CRC method.
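
A minimal sketch of the per-block CRC test and LLR pinning described above, assuming LLR = log P(bit=0)/P(bit=1), a toy CRC polynomial and an illustrative saturation value; the turbo equalizer and decoder themselves are not shown.

    import numpy as np

    def crc_remainder_is_zero(block, poly_bits):
        """True if the block (payload followed by its CRC bits) divides the generator."""
        deg = len(poly_bits) - 1
        reg = list(block)
        for i in range(len(block) - deg):
            if reg[i]:
                for j, p in enumerate(poly_bits):
                    reg[i + j] ^= p
        return not any(reg[-deg:])

    def freeze_checked_blocks(llrs, block_len, poly_bits, big=50.0):
        """After an outer iteration, hard-decide each CRC block; if its CRC is satisfied,
        pin its LLRs to highly reliable values so later iterations can skip the block."""
        llrs = np.array(llrs, dtype=float)
        done = []
        for s in range(0, llrs.size, block_len):
            bits = (llrs[s:s + block_len] < 0).astype(int)      # LLR < 0  ->  bit 1
            if crc_remainder_is_zero(list(bits), poly_bits):
                llrs[s:s + block_len] = np.where(bits == 1, -big, big)
                done.append(s // block_len)
        return llrs, done

    # Toy run: one 8-bit block = 5 payload bits + CRC-3 (x^3 + x + 1), already correct
    poly = [1, 0, 1, 1]
    llrs = [-1.2, 0.8, -0.5, -2.0, 1.1, 0.9, 1.5, 0.7]          # hard decisions 10110|000
    pinned, done = freeze_checked_blocks(llrs, 8, poly)
    print(done, pinned)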
