20 similar documents found; search took 578 ms
1.
2.
When a spacecraft fault-tolerant computer system is built around new commercial off-the-shelf processors, the high speed demanded by the commercial processor conflicts with the low speed of memory error detection and correction. This paper proposes a new memory checking scheme, deferred checking, to resolve this conflict, and combines it with Reed-Solomon (RS) codes, which can correct multi-bit errors, to perform the memory check. The problems this checking scheme introduces are then discussed and solutions proposed.
3.
Cyclic redundancy check (CRC) codes are used for error control in almost every field, because their encoding and decoding are easy to implement in both software and hardware and their error detection and correction is strong: they can correct independent random errors as well as burst errors. However, CRC is weak at detecting and correcting errors in two consecutive bits. This paper proposes an interleaved-transmission method that achieves good detection and correction results in practice, and proves, within CRC error-correction theory, that interleaved transmission improves the detection and correction of two consecutive erroneous bits under CRC coding.
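The interleaving idea can be sketched in a few lines. This is a toy illustration, not the paper's exact scheme: two CRC-8-protected frames are transmitted bit-alternately, so a channel error hitting two consecutive bits becomes one single-bit error in each frame, which any CRC detects.

```python
# Toy sketch: bit-interleaving two CRC-protected frames so that an error in
# two consecutive channel bits lands as a single-bit error in each frame.

def crc8(bits, poly=0x07):
    """Bitwise CRC-8 (poly x^8 + x^2 + x + 1, init 0) over a list of 0/1 bits."""
    reg = 0
    for b in bits:
        reg ^= b << 7
        reg = ((reg << 1) ^ poly) & 0xFF if reg & 0x80 else (reg << 1) & 0xFF
    return [(reg >> i) & 1 for i in range(7, -1, -1)]

def frame(payload):                    # append the 8 CRC bits to the payload
    return payload + crc8(payload)

def interleave(a, b):                  # alternate the bits of two frames
    return [x for pair in zip(a, b) for x in pair]

def deinterleave(s):
    return s[0::2], s[1::2]

a = frame([1, 0, 1, 1, 0, 0, 1, 0])
b = frame([0, 1, 1, 0, 1, 0, 0, 1])
channel = interleave(a, b)
channel[4] ^= 1                        # two consecutive channel bits corrupted
channel[5] ^= 1
ra, rb = deinterleave(channel)
# Each frame now carries a single-bit error; recomputing the CRC exposes it.
detected = (crc8(ra[:-8]) != ra[-8:], crc8(rb[:-8]) != rb[-8:])
```

Without interleaving, both corrupted bits would fall into one frame as a two-consecutive-bit error, exactly the pattern the paper identifies as CRC's weak spot.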
4.
In data storage and data communication, error detection is indispensable to guarantee data correctness. Among the many error-detection techniques, CRC (cyclic redundancy check) is the best known. Its strengths are very strong error detection, low overhead, and easy implementation with encoders and detection circuits. The probability of an undetected error is below 0.0047%. In both performance and overhead it is far superior to parity checking and arithmetic checksums. Hence, in data storage and data communication, CRC is everywhere…
5.
Introduction
Cyclic redundancy check (CRC) is the most common verification method in computer and instrument data communication. A CRC code is a linear block code that is simple to encode yet has strong error-detection and error-correction capability. Besides embedded instruments, frequency converters, and similar devices, some digital sensors also provide a CRC with their output data, such as the DS18B20 digital temperature sensor and the SHT11 integrated temperature-humidity chip. However, the CRC generator polynomials (used for modulo-2 division of the transmitted code) differ between vendors, and there are both CRC-8 and CRC-16 variants. Moreover, the specified initial value of the division remainder may be all 0s or all 1s (requiring different CRC hardware generators), so the modulo-2 division procedures also differ. Newcomers often find this hard to grasp, and omitting the CRC check reduces communication reliability.
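For reference, a common implementation of the Dallas/Maxim CRC-8 used by the DS18B20 and similar 1-Wire devices: polynomial x^8 + x^5 + x^4 + 1, processed in reflected form with constant 0x8C and initial value 0. The sample bytes in the usage note are illustrative, not read from a real device.

```python
# Dallas/Maxim CRC-8 (reflected, poly constant 0x8C, init 0), as used by
# 1-Wire devices such as the DS18B20.
def crc8_maxim(data: bytes) -> int:
    crc = 0
    for byte in data:
        for _ in range(8):
            mix = (crc ^ byte) & 0x01    # compare LSBs of register and input
            crc >>= 1
            if mix:
                crc ^= 0x8C              # reflected polynomial x^8+x^5+x^4+1
            byte >>= 1
    return crc
```

A handy consequence of the all-zero initial value: appending the CRC byte to the data drives the CRC of the whole sequence to 0, which is how received data is usually verified, e.g. `crc8_maxim(payload + bytes([crc8_maxim(payload)])) == 0`.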
6.
Fast CRC computation conforming to the ISO/IEC standard (cited 2 times: 1 self-citation, 1 by others)
Li Anfu, Microcontrollers & Embedded Systems, 2008(10):72-73
Cyclic redundancy check (CRC) codes are an important class of linear block codes. Encoding and decoding are simple, error detection and correction are strong, and they are widely used in measurement and control, in communications, and in computer file storage and compression.
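The usual fast-CRC technique is table-driven, byte-at-a-time computation: a 256-entry lookup table replaces eight per-bit steps. A sketch for CRC-16/CCITT (polynomial 0x1021, initial value 0xFFFF); other ISO/IEC variants differ only in polynomial, initial value, reflection, and final XOR.

```python
# Table-driven CRC-16/CCITT (poly 0x1021, init 0xFFFF, no reflection,
# no final XOR): one table lookup per input byte instead of eight bit steps.
POLY = 0x1021

def make_table():
    table = []
    for byte in range(256):
        reg = byte << 8
        for _ in range(8):           # precompute the 8 bit-steps for this byte
            reg = ((reg << 1) ^ POLY) & 0xFFFF if reg & 0x8000 else (reg << 1) & 0xFFFF
        table.append(reg)
    return table

TABLE = make_table()

def crc16(data: bytes, init=0xFFFF) -> int:
    crc = init
    for b in data:
        crc = ((crc << 8) & 0xFFFF) ^ TABLE[((crc >> 8) ^ b) & 0xFF]
    return crc
```

The widely quoted check value for the ASCII string "123456789" under this variant is 0x29B1, a convenient self-test when porting the table to a microcontroller.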
7.
8.
9.
Reverse-order CRC encoding/decoding algorithm and its application to the DS18B20 (cited 1 time: 0 self-citations, 1 by others)
The cyclic redundancy check (CRC) is a linear block code with very strong error-detection and error-correction capability, widely used in communications, measurement, and control. This paper proposes a reverse-order, information-unit CRC encoding algorithm, i.e. computing the CRC over information blocks of several bits at a time, with a detailed mathematical derivation and a flowchart of the encoding algorithm. The CRC decoding algorithm is analyzed and its flowchart given. After discussing the CRC program flow for the DS18B20, a Keil C51 program debugged in the Keil μVision 8.08a environment is presented.
10.
11.
Chi-Shiang Chan, Pattern Recognition Letters, 2011, 32(14):1679-1690
In 2007, Chan and Chang proposed an image authentication method based on the Hamming code. Parity check bits were produced from pixels with the Hamming code technique and embedded in other pixels. During recovery, the method first had to predict the value of the most-significant bit of each tampered pixel; the pixel could then be recovered from the predicted bit and its parity check bits. Relying on the most-significant bit is risky, however, because the prediction may be wrong. In this paper, the parity check bits are produced from pixels whose bits have been rearranged, so that the most-significant bit of each tampered pixel can be determined from its parity check bits alone. The recovery procedure is modified accordingly. Experimental results show that the proposed method recovers tampered areas better than Chan and Chang's method, and that the quality of its authenticated images is also higher.
12.
This paper presents an implementation of extended Hamming cyclic codes in which the cyclic code and the overall parity bit are processed separately during both encoding and decoding, so that single-error-correcting, double-error-detecting (SEC-DED) decoding can be realized with relatively simple circuitry. A concrete example illustrates the design of the encoder and decoder circuits.
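A minimal software model of the same SEC-DED idea, with the Hamming syndrome and the overall parity bit evaluated separately: an extended Hamming(8,4) code, using one conventional bit layout (not necessarily the article's).

```python
# Extended Hamming(8,4): 4 data bits, 3 Hamming parity bits, 1 overall parity
# bit. Syndrome and overall parity are checked separately, giving SEC-DED.

def encode(d):                           # d: 4 data bits
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                    # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4                    # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4                    # covers positions 4,5,6,7
    word = [p1, p2, d1, p3, d2, d3, d4]  # positions 1..7
    p0 = sum(word) % 2                   # overall parity over the 7 bits
    return word + [p0]

def decode(r):                           # r: 8 received bits
    word, p0 = r[:7], r[7]
    s = 0
    for i, bit in enumerate(word, start=1):
        if bit:
            s ^= i                       # syndrome = XOR of 1-bit positions
    parity_ok = (sum(word) % 2) == p0
    if s == 0 and parity_ok:
        status = "ok"
    elif not parity_ok:                  # odd error count: assume single error
        if s:
            word[s - 1] ^= 1             # syndrome points at the bad position
        else:
            p0 ^= 1                      # the overall parity bit itself flipped
        status = "corrected"
    else:                                # syndrome set, parity consistent
        status = "double error detected"
    return word[2], word[4], word[5], word[6], status
```

The branch structure mirrors the separate processing described above: the syndrome alone locates a single error, while the extra parity bit distinguishes single errors (correctable) from double errors (flagged, never miscorrected).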
13.
Low-density parity-check (LDPC) codes exhibit near-capacity error-correction performance. However, large hardware cost, limited flexibility in code length and code rate, and considerable power consumption limit the use of belief-propagation LDPC decoders in area- and energy-sensitive mobile environments. Serial bit-flipping algorithms trade error-correction performance for resource utilization, at the expense of more decoding iterations to converge. Parallel weighted bit-flipping decoding and its variants reduce the iteration count and decoding time by flipping the potentially erroneous bits in parallel, but in most existing parallel methods the flipping threshold requires complex computation. In this paper, Hybrid Weighted Bit Flipping (HWBF) decoding is proposed to allow multiple bit flips in each decoding iteration. To determine how many bits can be flipped in parallel, a criterion relating the erroneous bits in the received codeword is proposed, with which the scheme can detect and correct up to 3 erroneous hard-decision bits per iteration. Simulation results show that, compared with existing serial bit-flipping methods, HWBF decoding reduces the number of iterations required for convergence by 45% and the decoding time by 40%. Compared with existing parallel bit-flipping methods, it achieves a similar bit error rate (BER) with the same number of iterations and lower computational complexity. The reduced iteration count, computational complexity, and decoding time make HWBF decoding attractive for energy-sensitive mobile platforms.
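As background, the baseline that weighted and hybrid variants refine is Gallager's hard-decision bit-flipping decoder. A minimal serial version is sketched below on a toy (7,4) Hamming parity-check matrix rather than a real LDPC matrix; in each iteration it flips the single bit involved in the most unsatisfied checks.

```python
# Serial hard-decision bit flipping: recompute the syndrome, count how many
# unsatisfied checks each bit participates in, flip the most-suspect bit.

def bit_flip_decode(H, r, max_iter=20):
    """H: m x n parity-check matrix (lists of 0/1); r: received hard decisions."""
    r = list(r)
    m, n = len(H), len(H[0])
    for _ in range(m * 0 + max_iter):
        syndrome = [sum(H[i][j] & r[j] for j in range(n)) % 2 for i in range(m)]
        if not any(syndrome):
            return r, True               # all checks satisfied: valid codeword
        counts = [sum(syndrome[i] for i in range(m) if H[i][j]) for j in range(n)]
        r[counts.index(max(counts))] ^= 1   # flip one bit per iteration (serial)
    return r, False

# Toy parity-check matrix: the (7,4) Hamming code.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
```

Parallel and weighted variants differ exactly in the two hedged points here: how `counts` is weighted by reliability information, and how many of the top-ranked bits are flipped at once.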
14.
15.
Motivated by the limited energy of wireless sensor networks, this paper proposes an algorithm combining decision and verification that uses BCH and CRC codes to achieve multi-bit error correction, with single-bit and multi-bit errors corrected separately. The energy consumption of the algorithm is analyzed and compared with ARQ and plain BCH correction schemes. Simulation results show that the multi-bit correction method effectively improves the bit error rate and frame error rate, and that when the bit error rate exceeds 1.3E-3 the algorithm achieves higher energy efficiency.
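One way to picture "single-bit errors handled separately": a CRC normally only detects errors, but for short frames a single-bit error can be located by trial-flipping each bit and re-checking the CRC, leaving multi-bit patterns to the heavier BCH decoder or a retransmission. This is a generic sketch of that idea, not the paper's algorithm; CRC-8 with polynomial 0x07 is assumed, and for frames this short the located position is unique.

```python
# Single-bit error correction by CRC trial flipping (cheap path), with
# multi-bit errors left for a stronger decoder or retransmission.
def crc8(data: bytes, poly=0x07) -> int:
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def correct_single_bit(frame: bytes):
    """frame = payload + 1 CRC byte; returns the corrected frame, or None
    when the damage is not a single-bit error."""
    if crc8(frame[:-1]) == frame[-1]:
        return frame                          # already consistent
    for i in range(len(frame) * 8):           # trial-flip every bit position
        trial = bytearray(frame)
        trial[i // 8] ^= 1 << (i % 8)
        if crc8(bytes(trial[:-1])) == trial[-1]:
            return bytes(trial)
    return None                               # hand off to BCH / retransmit
```

The trial loop costs O(bits) CRC evaluations, which is affordable on a sensor node only because the cheap path runs on short frames and the expensive decoder is invoked only when it fails.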
16.
ANSHUMAN Chandra, Science in China Series F: Information Sciences (English edition), 2006, 49(2):262-272
Testing digital circuits accounts for an increasing part of the cost to design, manufacture, and service electronic systems, a trend that is projected to continue and accelerate [1]. Test compression is known as a methodology for reducing the test cost. Test c…
17.
18.
19.
20.
Junbin Fang, Zoe L. Jiang, Kexin Ren, Yunhan Luo, Zhe Chen, Weiping Liu, Xuan Wang, Xiamu Niu, S. M. Yiu, Lucas C. K. Hui, Quantum Information Processing, 2014, 13(6):1425-1435
Key integrity checking is a necessary step in practical quantum key distribution (QKD): it checks whether any error bit escaped the preceding error-correction procedure. The traditional single-hash method may become a bottleneck in high-speed QKD, since it must discard all the key bits even if just one error bit exists. In this paper, we propose an improved scheme using combinatorial group testing (CGT) based on strong selective family design to verify key integrity at fine granularity, and consequently improve the total efficiency of key generation after error correction. Code shortening and parallel computing are also applied to enhance the scheme's flexibility and to accelerate computation. Experimental results show that the scheme identifies the rare error bits precisely, avoiding the loss of the great majority of correct bits, at reasonable overhead. For a $2^{20}$-bit key, the information disclosed for public comparison is 800 bits (about 0.076% of the key bits), 256 bits fewer than the previous CGT scheme. With an Intel® quad-core CPU at 3.40 GHz and 8 GB RAM, the computation takes 3.0 ms for hashing and 6.3 ms for decoding, which is reasonable in real applications and will not cause significant latency in practical QKD systems.
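The group-testing intuition can be illustrated with a much simpler scheme than the paper's strong-selective-family construction: compare short hashes of key blocks and binary-search for a single corrupted block, so that only that block need be discarded or re-corrected instead of the whole key. All names below are illustrative.

```python
# Toy group-testing sketch: locate a single corrupted key block via
# O(log n) hash comparisons instead of one all-or-nothing hash.
import hashlib

def tag(blocks) -> bytes:
    """Short 32-bit comparison tag over a group of key blocks."""
    return hashlib.sha256(b"".join(blocks)).digest()[:4]

def locate_bad_block(alice, bob):
    """alice, bob: equal-length lists of key blocks; returns the index of the
    single differing block, or None if the keys already agree."""
    if tag(alice) == tag(bob):
        return None
    lo, hi = 0, len(alice)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if tag(alice[lo:mid]) != tag(bob[lo:mid]):
            hi = mid                  # mismatch is in the left half
        else:
            lo = mid                  # otherwise it is in the right half
    return lo
```

The paper's CGT design generalizes this to multiple error bits with non-adaptive groups (all tags can be computed and exchanged in one round), which is what makes it suitable for high-speed QKD post-processing.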