Found 20 similar documents; search took 171 ms
1.
To accelerate the decoding convergence of irregular LDPC codes, a construction algorithm for LDPC codes with fast convergence is proposed. Starting from an existing irregular LDPC code, the algorithm reorders the columns of the parity-check matrix to improve the decoding reliability of the information bits, thereby reducing the number of iterations and speeding up convergence. Simulation results show that, under the variable-node-based layered belief propagation (VL-BP) decoding algorithm, LDPC codes designed with this algorithm require markedly fewer iterations on average. In addition, under both the belief propagation (BP) and VL-BP decoding algorithms, the designed codes achieve better error-rate performance.
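The column reordering described above can be sketched as follows. Note that the abstract does not give the paper's exact reordering rule; sorting columns by descending column weight is an illustrative assumption here (higher-degree variable nodes receive more check messages per iteration).

```python
import numpy as np

def reorder_columns_by_weight(H):
    """Permute parity-check matrix columns by descending column weight.

    Hypothetical reliability criterion (not from the paper): placing
    heavier columns on information-bit positions so those bits converge
    faster under iterative decoding.
    """
    weights = H.sum(axis=0)                     # column weights (variable-node degrees)
    perm = np.argsort(-weights, kind="stable")  # heaviest columns first
    return H[:, perm], perm

# toy parity-check matrix with column weights 1, 3, 2
H = np.array([[0, 1, 1],
              [1, 1, 0],
              [0, 1, 1]])
H2, perm = reorder_columns_by_weight(H)
```

The permutation `perm` would then be applied consistently to the codeword bit positions so the code itself is unchanged up to relabeling.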
2.
A regenerating-code construction method based on sparse random matrices
To address the high computational complexity and low efficiency of existing regenerating-code schemes, whose operations are performed over the finite field GF(q), a regenerating-code construction method is proposed that combines sparse random matrices over GF(2) with the product-matrix framework. First, the file data are arranged in matrix form and row-wise XOR operations are performed according to the encoding matrix. Second, after a node fails, each helper node encodes its local data according to the failed node's encoding vector and sends the result to the replacement node. Finally, the replacement node decodes the failed node's original data from the received data. Experimental results show that the repair bandwidth is at most 1/10 of that of a conventional erasure-code repair scheme; compared with regenerating codes based on a conventional Vandermonde encoding matrix, the encoding rate improves by 70% and the decoding (recovery) rate by 50%, which facilitates the use of regenerating codes in large-scale storage systems.
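The GF(2) encoding step described above reduces to XORs of data rows, which is the source of the speedup over GF(q) arithmetic. A minimal sketch (matrix sizes and density are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_xor(G, data):
    """Encode data rows by XOR according to a binary encoding matrix G.

    Output row i is the XOR of the data rows selected by row i of G,
    i.e. matrix multiplication over GF(2): only XORs, no finite-field
    multiplications are needed.
    """
    return (G @ data) % 2   # over GF(2), addition is XOR

# toy sparse random encoding matrix: k source rows -> n coded rows
k, n = 4, 6
G = (rng.random((n, k)) < 0.5).astype(int)
data = rng.integers(0, 2, size=(k, 8))   # file data arranged in matrix form
coded = encode_xor(G, data)
```

Repair then works analogously: helper nodes XOR their local rows according to the failed node's encoding vector before transmitting.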
3.
4.
《电子技术应用》2017,(11):107-111
For low-density parity-check (LDPC) codes, the residual belief propagation (RBP) and check-node-wise residual belief propagation (NWRBP) decoding algorithms dynamically select, according to an ordered measure of residual values, the edge or check node with the largest residual and update it first. Compared with the flooding algorithm, which updates all check nodes and variable nodes synchronously, NWRBP converges much faster and achieves better decoding performance. Building on NWRBP, an enhanced NWRBP (ENWRBP) algorithm is proposed: the number of updates of each variable node during NWRBP decoding is recorded, and if NWRBP iterative decoding fails, the initial value of the least-updated variable node is set to 0 and decoding is restarted. Simulation results show that, compared with NWRBP, the ENWRBP decoding algorithm lowers both the bit error rate and the frame error rate.
5.
6.
In the early 1960s, Shannon's student Gallager first proposed the concept of LDPC codes and a complete decoding method in his doctoral dissertation, but it was not until the end of the last century, with advances in LDPC decoding theory and in computer technology, that LDPC codes became a research focus owing to their excellent error-rate performance. LDPC codes are now evolving toward higher speed and higher gain. Motivated by the urgent demand for high-speed LDPC decoding, this paper studies the (8176, 7154) LDPC code of the CCSDS near-Earth communication standard. Keeping the resource usage of the conventional high-speed decoder architecture unchanged, and exploiting the arrangement of the 1s in the code's parity-check matrix, only the data-processing flow of the horizontal (check-node) computation is modified, increasing the overlap between the horizontal and vertical computations and thereby greatly raising decoding speed. With the proposed scheme, decoding the CCSDS near-Earth (8176, 7154) LDPC code saves nearly 10% of computation time in each horizontal iteration cycle. The scheme also applies to other QC-LDPC codes.
7.
Design and implementation of an LDPC encoder/decoder scheme based on IEEE 802.16e
This paper presents a design for an LDPC encoder/decoder based on the IEEE 802.16e standard. The encoder uses linear-complexity encoding, and a partial-decoding barrel shifter is designed to implement its core component, the matrix-vector multiplier, which increases encoding speed and reduces logic-resource usage. For the decoder, to address the complexity of the nonlinear operations in the LOG-BP algorithm and the large amount of information and resources consumed by the check-node update module during decoding, the CORDIC algorithm is used to implement the check-node update module, saving substantial hardware resources compared with the traditional LUT approach. Experimental results show that the scheme saves hardware resources while maintaining LDPC encoding/decoding speed and performance.
8.
To address the high complexity of BP decoding of digital fountain codes over wireless channels and the low efficiency of incremental decoding, an incremental decoding algorithm based on a decodable set is proposed. The algorithm provides a theoretical analysis method for the appropriate threshold T_re that a variable node's likelihood ratio must reach for successful decoding; variable nodes whose likelihood ratios exceed the threshold during decoding are placed in the decodable set and decoded early, reducing computation. Furthermore, if decoding fails and additional overhead is received for re-decoding, the variable nodes already decoded can be used to simplify the Tanner graph, so that only the variable nodes that have not reached the decoding threshold are iterated, further reducing computation. An algorithm description and complexity analysis are given. Simulations show that the algorithm performs the same as conventional BP decoding while requiring far less computation and achieving markedly higher efficiency.
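The core bookkeeping step above, splitting variable nodes into a decodable set and a residual set by a likelihood-ratio threshold, can be sketched as follows (the threshold value and the hard-decision convention are illustrative; the paper derives T_re analytically):

```python
import numpy as np

def split_decodable(llrs, T):
    """Partition variable nodes by log-likelihood-ratio magnitude.

    Nodes with |LLR| >= T are declared decoded early (the 'decodable
    set') and removed from further BP iterations; the rest continue
    iterating on the simplified Tanner graph.
    """
    llrs = np.asarray(llrs, dtype=float)
    decodable = np.flatnonzero(np.abs(llrs) >= T)
    residual = np.flatnonzero(np.abs(llrs) < T)
    hard = (llrs[decodable] < 0).astype(int)   # convention: LLR < 0 -> bit 1
    return decodable, hard, residual

# toy LLR vector for four variable nodes, illustrative threshold T = 3.0
dec, bits, rest = split_decodable([4.2, -0.3, -5.1, 1.0], T=3.0)
```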
9.
The traditional iterative decoding algorithm for Turbo product codes (TPC) has difficulty finding competing codewords and requires large storage for soft information. To address these problems, a low-complexity iterative decoding algorithm and a new iterative decoder structure are proposed. Based on Chase iterative SISO decoding, the algorithm uses a correlation operation that requires no search for competing codewords, simplifying the computation of soft-output information. Meanwhile, the soft-input information of the previous iterative decoding unit replaces the original channel-received information used in the traditional algorithm, and is linearly combined with the soft output of the current iterative decoding unit to form the soft input of the next unit, simplifying the soft-input computation and reducing system storage. Simulation results verify the feasibility and effectiveness of the algorithm.
10.
11.
Kazuhiro OGATA 《Frontiers of Computer Science》2019,13(1):51
This paper proposes an approach to making liveness model checking problems under fairness feasible. The proposed method divides such a problem into smaller ones that can be conquered individually. It is not superior to existing tools dedicated to model checking liveness properties under fairness assumptions in terms of model checking performance, but it has the following positive aspects: 1) it can be used to model check liveness properties under anti-fairness assumptions as well as fairness assumptions; 2) it can help humans better understand why fairness and/or anti-fairness assumptions are needed; and 3) it makes it possible to use existing linear temporal logic model checkers to model check liveness properties under fairness and/or anti-fairness assumptions.
12.
13.
MP3 is currently the most widely used audio format. However, various audio-editing tools make it easy to tamper with MP3 files. By statistically analyzing the number of differing quantized MDCT coefficients between MP3 audio produced by adjacent compression passes, a method for detecting double MP3 compression at the same bitrate is proposed. The method aids tamper forensics for MP3 audio files. Experimental results show that the method achieves a good detection rate.
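The detection statistic above counts how many quantized MDCT coefficients differ between compression passes. A minimal sketch follows; the decision rule and its threshold are illustrative assumptions (the paper derives its own statistics from the coefficient counts):

```python
import numpy as np

def mdct_diff_count(coeffs_a, coeffs_b):
    """Count positions where two sets of quantized MDCT coefficients differ."""
    a = np.asarray(coeffs_a)
    b = np.asarray(coeffs_b)
    return int(np.count_nonzero(a != b))

def looks_double_compressed(diff_count, total, ratio_threshold=0.05):
    # Hypothetical decision rule: very few changed coefficients between
    # passes suggests the file had already been compressed at this
    # bitrate before (recompression is close to idempotent).
    return diff_count / total < ratio_threshold

# toy coefficient vectors from two adjacent compression passes
n = mdct_diff_count([3, -1, 0, 2, 5], [3, -1, 1, 2, 5])
```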
14.
Following the partitioning approach of the cellular-automaton model, this paper decomposes a two-dimensional image into 2×2 matrix units. Several logical expressions are proposed to classify the 2×2 matrix patterns formed by black-and-white binary pixels. Using the multilayer structure of a CNN neural network, the paper analyzes how pyramid-structured logic, under similar combinations, performs edge detection and pooling on binary images. Through synchronized pulses, a grayscale image can be decomposed into binary images across multiple time steps, facilitating multilayer pyramid logic operations…
15.
Collaborative Filtering (CF) computes recommendations by leveraging a historical data set of users’ ratings for items. CF assumes that the users’ recorded ratings can help in predicting their future ratings. This has been validated extensively, but in some domains the user’s ratings can be influenced by contextual conditions, such as the time or the goal of the item consumption. This type of contextual information is not exploited by standard CF models. This paper introduces and analyzes a novel technique for context-aware CF called Item Splitting. In this approach, items experienced in two alternative contextual conditions are “split” into two items. This means that the ratings of a split item, e.g., a place to visit, are assigned (split) to two new fictitious items representing, for instance, the place in summer and the same place in winter. This split is performed only if there is statistical evidence that under these two contextual conditions the item's ratings are different; for instance, a place may be rated higher in summer than in winter. These two new fictitious items are then used, together with the unaffected items, in the rating prediction algorithm. When the system must predict the rating for that “split” item in a particular contextual condition (e.g., in summer), it will consider the new fictitious item representing the original one in that particular contextual condition, and will predict its rating. We evaluated this approach on real-world and semi-synthetic data sets using matrix-factorization and nearest-neighbor CF algorithms. We show that Item Splitting can be beneficial and that its performance depends on the method used to determine which items to split. We also show that the benefit of the method is determined by the relevance of the contextual factors that are used to split.
16.
H.264 is the latest international video-coding standard. Compared with other video-coding standards, it offers a strong advantage in coding efficiency: at the same reconstructed-image quality, H.264 saves nearly 50% of the bitrate relative to H.263++ and MPEG-4 Part 2. This efficiency gain, however, comes at the cost of a huge increase in computation. This paper proposes a new mode-selection algorithm that introduces a zero-block detection mechanism into mode selection and deliberately skips the encoding of some small-coefficient blocks. Experimental results show that, without degrading subjective image quality, the algorithm substantially improves H.264's coding efficiency at high bitrates and speeds up video encoding at low and medium bitrates.
17.
Yi-Dong Shen 《New Generation Computing》1997,15(2):187-203
The Equality check and the Subsumption check are weakly sound, but are not complete even for function-free logic programs. Although the OverSize (OS) check is complete for positive logic programs, it is too general in the sense that it prunes SLD-derivations merely based on the depth-bound of repeated predicate symbols and the size of atoms, regardless of the inner structure of the atoms, so it may draw wrong conclusions even for some simple programs. In this paper, we explore complete loop-checking mechanisms for positive logic programs. We develop an extended Variant of Atoms (VA) check that has the following features: (1) it generalizes the concept of “variant” from “the same up to variable renaming” to “the same up to variable renaming except possibly with some arguments whose size recursively increases”; (2) it makes use of the depth-bound of repeated variants of atoms instead of the depth-bound of repeated predicate symbols; (3) it combines the Equality/Subsumption check with the VA check; (4) it is complete w.r.t. the leftmost selection rule for positive logic programs; and (5) it is more sound than both the OS check and all the existing versions of the VA check.
The research was completed when the author visited the University of Maryland Institute for Advanced Computer Studies.
Yi-Dong Shen, Ph.D.: He is a professor of Computer Science at Chongqing University, China. He received the Ph.D. degree in Computer Science from Chongqing University in 1991. He was a visiting researcher at the University of Valenciennes, France (1992–1993), and at the University of Maryland Institute for Advanced Computer Studies (UMIACS), U.S.A. (1995–1996). His present interests include Artificial Intelligence, Deductive and Object-Oriented Databases, Logic Programming, and Parallel Processing.
18.
19.
20.
To address the large clustering error of sparse subspace clustering (SSC), an SSC method based on random blocking is proposed. First, the data set of the original problem is randomly divided into several subsets, forming several subproblems. Then the alternating direction method of multipliers (ADMM) is used to obtain the coefficient matrix of each subproblem; each coefficient matrix is expanded to the size of the original problem's coefficient matrix, and the expanded matrices are merged into a single coefficient matrix. Finally, a similarity matrix is computed from the merged coefficient matrix, and spectral clustering (SC) is applied to obtain the clustering result of the original problem. Compared with the best of sparse subspace clustering (SSC), randomized sparse subspace clustering (S3COMP-C), sparse subspace clustering by orthogonal matching pursuit (SSCOMP), spectral clustering (SC), and K-means, the proposed method reduces the subspace clustering error by 3.12 percentage points on average, and its mutual information, Rand index, and entropy are all clearly better than those of the compared algorithms. Experimental results show that the random-blocking SSC method reduces subspace clustering error and improves clustering performance.