20 similar documents found (search time: 750 ms)
2.
Differential modulation and demodulation is an important technique for transmitting information over highly dynamic wireless channels. The code rate used with differential modulation should be neither too high nor too low; to find the best choice, this paper analyzes, from an information-theoretic perspective, the Shannon limits for error-free transmission at various code rates for both differential BPSK and differential QPSK, and derives the optimal code rate. Simulations using convolutional and Turbo codes of different rates confirm the theoretical results.
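The Shannon limits referred to above can be reproduced numerically. The sketch below is an illustration, not the paper's method: it estimates the binary-input AWGN capacity by Monte Carlo for coherent (non-differential) BPSK and bisects for the smallest Eb/N0 at which capacity reaches the code rate. The differential schemes analyzed in the paper incur an additional penalty not modeled here.

```python
import numpy as np

def bpsk_awgn_capacity(ebn0_db, rate, n=200_000, seed=0):
    """Monte Carlo estimate of binary-input AWGN capacity (bits per channel use)."""
    rng = np.random.default_rng(seed)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma2 = 1.0 / (2.0 * rate * ebn0)             # noise variance for Es = 1
    y = 1.0 + rng.normal(0.0, np.sqrt(sigma2), n)  # condition on x = +1 (symmetry)
    # C = 1 - E[log2(1 + exp(-2y/sigma2))], computed stably via logaddexp
    return 1.0 - np.mean(np.logaddexp(0.0, -2.0 * y / sigma2)) / np.log(2.0)

def shannon_limit_db(rate, lo=-3.0, hi=10.0, iters=40):
    """Smallest Eb/N0 (dB) at which capacity reaches the code rate (bisection)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if bpsk_awgn_capacity(mid, rate) >= rate:
            hi = mid
        else:
            lo = mid
    return hi
```

As expected, the limit rises with the code rate (for BPSK-input AWGN the rate-1/2 limit is about 0.19 dB), which is the trade-off behind "neither too high nor too low".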
7.
Considering the respective characteristics of source and channel coding, a joint source-channel coding scheme combining the SPECK (set partitioned embedded block coder) algorithm with Turbo codes is proposed. Because the bitstream produced by SPECK encoding of an image is very sensitive to channel noise, the scheme is designed to improve the image's robustness against channel errors. SPECK is used to generate substreams with different error-resilience requirements, and punctured Turbo codes of different rates provide unequal error protection for these substreams, improving the overall error resilience of the bitstream. The scheme fully exploits the characteristics of the source-coded bitstream, striking a good balance between bit error rate and code length. Experimental results show that the scheme not only yields decoded images with high peak signal-to-noise ratio at high compression ratios, but also, thanks to the excellent performance of Turbo codes, keeps transmission robust at low channel signal-to-noise ratios.
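The unequal-error-protection idea can be sketched as a rate-assignment table: more important substreams get stronger (lower-rate) punctured-Turbo codes. The substream names, importance scores, and rate menu below are illustrative assumptions, not values from the paper.

```python
import math

# Hypothetical SPECK substreams: (name, importance in [0, 1], length in bits).
SUBSTREAMS = [
    ("header+sorting_pass", 1.00,  1024),
    ("refinement_pass_1",   0.60,  4096),
    ("refinement_pass_2",   0.30,  8192),
    ("refinement_pass_3",   0.10, 16384),
]

# Punctured-Turbo rate menu: (importance floor, code rate).
RATE_MENU = [(0.8, 1 / 2), (0.5, 2 / 3), (0.2, 3 / 4), (0.0, 8 / 9)]

def assign_rate(importance):
    """Pick the strongest code whose importance floor the substream reaches."""
    for floor, rate in RATE_MENU:
        if importance >= floor:
            return rate

def protect(substreams):
    """Return (name, code rate, coded length in bits) for each substream."""
    plan = []
    for name, imp, bits in substreams:
        rate = assign_rate(imp)
        plan.append((name, rate, math.ceil(bits / rate)))
    return plan
```

The overall channel-code rate then falls between the strongest and weakest entries of the menu, with most redundancy spent on the error-sensitive sorting information.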
8.
The JVT-G012 rate control algorithm for H.264 allocates bits evenly among the uncoded P-frames at the frame level, ignoring both picture complexity and the position of each P-frame within the group of pictures (GOP). To address this, a new H.264 rate control algorithm is proposed: the target bit allocation of each P-frame is first adjusted by a combined factor built from the frame's picture complexity and its position in the GOP, and the quantization parameter of the current frame is then adjusted using the history of previously coded frames. Experiments show that, compared with JVT-G012, the proposed algorithm improves video quality, especially for sequences with intense motion and complex texture, and brings the actual output bit rate closer to the target, improving rate control accuracy; compared with existing algorithms, it further improves rate control accuracy while maintaining video quality.
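A minimal sketch of complexity- and position-aware frame-level allocation, as opposed to JVT-G012's even split: each uncoded P-frame's share of the remaining GOP budget is a blend of its normalized complexity and a position term. The blend weight and the position model here are illustrative assumptions, not the paper's formula.

```python
def p_frame_targets(remaining_bits, complexities, gop_len, w_c=0.7):
    """Split the remaining GOP budget over the uncoded P-frames.

    complexities: predicted complexity (e.g. MAD) per uncoded P-frame, in order.
    Later frames in the GOP receive a slightly smaller position factor.
    """
    n = len(complexities)
    total_c = sum(complexities)
    weights = []
    for i, c in enumerate(complexities):
        pos = i + (gop_len - n) + 1              # frame's position within the GOP
        pos_factor = 1.0 - 0.5 * pos / gop_len   # simple decaying position term
        weights.append(w_c * (c / total_c) + (1.0 - w_c) * pos_factor)
    s = sum(weights)
    return [remaining_bits * w / s for w in weights]
```

The normalization keeps the targets summing exactly to the remaining budget, so the GOP-level constraint is preserved while complex frames get more bits.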
9.
For λ-domain intra-frame rate control, an optimal bit allocation algorithm based on a convolutional neural network (CNN) is proposed. First, a hyperbolic function is used to model the rate-distortion (R-D) characteristics of each coding tree unit (CTU), and a dual-branch convolutional neural network (DBCNN) is designed to predict the key R-D parameters. Then, based on frame-level rate-distortion optimization (RDO), an equation relating the frame-level target bit rate to the CTU bit allocation is established, from which the frame-level Lagrange multiplier λ is derived. Finally, the optimal CTU bit allocation is obtained by inversion. Experiments show that the algorithm significantly improves intra-frame rate control coding performance while maintaining high rate control accuracy.
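The hyperbolic model makes the λ derivation concrete: with D(R) = C·R^(-K), the multiplier is λ = -dD/dR = C·K·R^(-K-1), so at a common frame-level λ each CTU's rate inverts to R = (C·K/λ)^(1/(K+1)). The sketch below solves for the λ whose CTU rates sum to the frame target; the (C, K) values are hypothetical stand-ins for what the paper's DBCNN would predict.

```python
def ctu_rate(lam, c, k):
    """CTU rate at Lagrange multiplier lam under D(R) = c * R**(-k):
    lam = c*k*R**(-k-1)  =>  R = (c*k/lam)**(1/(k+1))."""
    return (c * k / lam) ** (1.0 / (k + 1.0))

def frame_lambda(target_bpp, ctu_params, lo=1e-8, hi=1e8, iters=200):
    """Frame-level lambda such that the CTU rates sum to the frame target.
    Geometric bisection works because rate decreases monotonically in lambda."""
    for _ in range(iters):
        mid = (lo * hi) ** 0.5
        if sum(ctu_rate(mid, c, k) for c, k in ctu_params) > target_bpp:
            lo = mid   # rates too high -> need a larger lambda
        else:
            hi = mid
    return hi
```

Once λ is found, the per-CTU allocation is just `ctu_rate(lam, c, k)` for each CTU, which is the "inversion" step the abstract describes.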
11.
To improve dummy-location generation efficiency and query service quality in dummy-location k-anonymity privacy protection, and to address the complex preprocessing and insufficient use of geo-semantic information in existing dummy-location generation, a dummy-location k-anonymity method based on approximate matching is proposed. First, the selected area is divided into square grid cells, and each location's coordinates are converted to a Morton code according to its cell. Then, by approximately matching the Morton codes of the locations, mutually non-adjacent points distributed across different cells are selected to form a dummy-location candidate set. Finally, approximate matching of the place-name information of the candidate points yields their semantic similarity, and the k-1 points with the lowest semantic similarity are chosen as dummy locations. Experimental results show that the method guarantees both physical dispersion and semantic diversity among the dummy locations while improving generation efficiency and effectively balancing privacy protection against query service quality.
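The Morton-code step is standard bit interleaving: the x and y indices of a grid cell are woven together so that nearby cells tend to receive numerically close codes (the Z-order curve), which is what makes the approximate-matching selection cheap. A minimal sketch:

```python
def morton_encode(x, y, bits=16):
    """Interleave the bits of grid-cell coordinates (x, y) into a Morton code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)        # x bits go to even positions
        code |= ((y >> i) & 1) << (2 * i + 1)    # y bits go to odd positions
    return code

def morton_decode(code, bits=16):
    """Recover the grid-cell coordinates from a Morton code."""
    x = y = 0
    for i in range(bits):
        x |= ((code >> (2 * i)) & 1) << i
        y |= ((code >> (2 * i + 1)) & 1) << i
    return x, y
```

Candidate dummies in different, non-adjacent cells can then be picked by comparing code prefixes instead of raw coordinates.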
13.
Dummy variables are commonly introduced into regression models to describe qualitative attributes. For regression equations containing dummy variables, a method is proposed to characterize how important the different dummy variables are within the equation. The method decomposes the regression sum of squares of such an equation into a dummy-variable part and a non-dummy part, computes each part's share of the total, and uses that share as a relative importance index for the dummy variables. Experiments on nearly 100,000 loans from the Lending Club and Prosper peer-to-peer lending datasets, mining the influence of loan purpose on funding success and of credit grade on interest rate, show that whereas a traditional regression equation only supplies the dummy-variable coefficients without revealing their importance, the proposed method exposes the differing importance of the dummy variables, providing a valuable tool for quantifying the influence of qualitative independent variables on the dependent variable.
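One simple version of the decomposition can be sketched on synthetic data: fit the full model, split the fitted values into their dummy and non-dummy components, and take each component's sum of squares as its share. This ignores the covariance cross-term between the two parts, which the paper may treat differently; the data and effect sizes below are fabricated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
x = rng.normal(size=n)                      # quantitative predictor
g = rng.integers(0, 3, size=n)              # 3-level qualitative attribute
D = np.eye(3)[g][:, 1:]                     # two dummy columns (level 0 as baseline)
y = 2.0 * D[:, 0] + 3.0 * D[:, 1] + 0.5 * x + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), D, x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

fit_dummy = D @ beta[1:3]                   # dummy-variable part of the fit
fit_cont = x * beta[3]                      # non-dummy part of the fit
ssr = lambda v: np.sum((v - v.mean()) ** 2)
share_dummy = ssr(fit_dummy) / (ssr(fit_dummy) + ssr(fit_cont))
```

With the effect sizes above, the dummy part dominates the explained variation, and `share_dummy` plays the role of the relative importance index.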
15.
EZW (embedded zerotree coding) is an embedded image compression method that achieves good compression performance and runs very fast. This paper analyzes and discusses the data structures and the algorithm of EZW in detail.
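The core of EZW's dominant pass is classifying each coefficient against the current threshold as significant positive (P), significant negative (N), zerotree root (T: the coefficient and all its descendants are insignificant), or isolated zero (Z). A toy sketch on a small pyramid follows; it uses a simplified quadtree parenting (children of (i, j) at (2i, 2j)...) rather than EZW's exact subband parent-child relation.

```python
import numpy as np

def descendants(i, j, n):
    """All quadtree descendants of coefficient (i, j) in an n x n pyramid."""
    kids = [(2 * i, 2 * j), (2 * i, 2 * j + 1),
            (2 * i + 1, 2 * j), (2 * i + 1, 2 * j + 1)]
    out = []
    for a, b in kids:
        if a < n and b < n and (a, b) != (i, j):
            out.append((a, b))
            out.extend(descendants(a, b, n))
    return out

def dominant_symbol(coeff, i, j, T):
    """EZW dominant-pass symbol for coefficient (i, j) at threshold T."""
    c = coeff[i, j]
    if abs(c) >= T:
        return 'P' if c >= 0 else 'N'
    n = coeff.shape[0]
    if all(abs(coeff[a, b]) < T for a, b in descendants(i, j, n)):
        return 'T'   # zerotree root: whole subtree insignificant
    return 'Z'       # isolated zero: a significant descendant exists
```

The 'T' symbol is what gives EZW its efficiency: one symbol silences an entire subtree of insignificant coefficients.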
16.
Information &amp; Management, 1999, 35(5): 283-293
The purpose of the research discussed here is to establish a metric for the measurement of reuse in a generic enterprise-level model context and to use this approach to create a specific metric for a company. The paper demonstrates how a software development firm can monitor the reuse success in the development process using the measure. Traditionally, the reuse rate is defined as the percentage of the development effort retrieved as code segments from a software repository. The metric proposed here extends this definition to include reuse of generic enterprise-level model components. An example is given of the successful assessment of a reuse percentage for a software developer's actual project.
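The classical reuse rate and one possible extension can be sketched directly. The weighting scheme `w_model` below is an illustrative assumption for how model-component reuse might be folded in, not the paper's actual formula.

```python
def reuse_rate(reused_units, total_units):
    """Classical reuse rate: percentage of the development effort
    retrieved as code segments from a software repository."""
    return 100.0 * reused_units / total_units

def extended_reuse_rate(reused_code, total_code,
                        reused_model, total_model, w_model=0.5):
    """Extended metric also counting reuse of generic enterprise-level
    model components; the blend weight w_model is a hypothetical choice."""
    code_part = reused_code / total_code
    model_part = reused_model / total_model
    return 100.0 * ((1.0 - w_model) * code_part + w_model * model_part)
```

With 300 of 1000 code units and 12 of 20 model components reused, the extended rate sits above the code-only rate, reflecting the additional model reuse.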
17.
For high-order PSK modulation with Gray mapping, this paper proposes a simplified method for computing bit soft information. Exploiting the symmetry of the Gray code, the method obtains the bit soft values recursively. Analysis and simulation show that, for high-order modulation, the method greatly reduces the computational burden relative to the conventional ML and Max-Log methods with little impact on system performance. Moreover, it handles PSK signals of any modulation order uniformly, making it well suited to adaptive coded modulation systems.
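For reference, the baseline Max-Log computation that the paper simplifies can be sketched for Gray-mapped 8PSK (the paper's recursive shortcut itself is not reproduced here): each bit's LLR is the difference of the minimum squared distances over the two label subsets.

```python
import numpy as np

M = 8
K = np.arange(M)
POINTS = np.exp(2j * np.pi * K / M)   # 8PSK constellation on the unit circle
LABELS = K ^ (K >> 1)                 # Gray labels: adjacent symbols differ in 1 bit

def maxlog_llrs(y, noise_var):
    """Max-Log LLRs for the 3 bits of a Gray-mapped 8PSK observation y.
    Sign convention: LLR > 0 favors bit = 0."""
    d2 = np.abs(y - POINTS) ** 2
    llrs = []
    for b in range(3):
        bit = (LABELS >> b) & 1
        llrs.append((d2[bit == 1].min() - d2[bit == 0].min()) / noise_var)
    return llrs
```

A full ML (log-sum-exp over all symbols) costs more per bit; Max-Log keeps only the nearest symbol in each subset, and the paper's recursion reduces the work further by exploiting Gray-label symmetry.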
18.
In a wireless network, node failure due to either natural disasters or human intervention can cause network partitioning and other communication problems. For this reason, a wireless network should be fault tolerant. At present, most researchers use k-connectivity to measure fault tolerance, which requires the network to remain connected after the failure of any set of up to k-1 nodes. However, wireless node failures are usually spatially correlated; particularly in military applications, nodes in the same limited area can fail together. As a fault-tolerance metric, k-connectivity fails to capture this spatial correlation of faults and hardly satisfies the fault-tolerance requirements of wireless network design. In this paper, a new fault-tolerance metric, termed D-region fault tolerance, is introduced: a D-region fault tolerant network remains connected even after all the nodes in a circular region of diameter D have failed. Based on D-region fault tolerance, we propose two fault-tolerant topology control algorithms: the global region fault tolerance algorithm (GRFT) and the localized region fault tolerance algorithm (LRFT). Both algorithms are proven to generate networks with D-region fault tolerance. Simulation results indicate that, for the same fault-tolerance capability, networks built with GRFT and LRFT have a smaller transmission radius and lower logical degree.
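The D-region property can be checked by brute force on a unit-disk topology: knock out every node inside a candidate fault circle of diameter D and test whether the survivors stay connected. The sketch below is a coarse check that only tries circles centered at node positions (a rigorous verification would sweep all possible centers), and it is an illustration rather than the GRFT/LRFT construction itself.

```python
import math
from collections import deque

def adjacency(nodes, radius):
    """Unit-disk links: nodes within `radius` of each other can communicate."""
    adj = {i: [] for i in range(len(nodes))}
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if math.dist(nodes[i], nodes[j]) <= radius:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def connected(alive, adj):
    """BFS connectivity test restricted to the surviving node set."""
    alive = set(alive)
    if len(alive) <= 1:
        return True
    start = next(iter(alive))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in alive and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen == alive

def d_region_fault_tolerant(nodes, radius, D):
    """Coarse check: fail every circular region of diameter D centered at a node."""
    adj = adjacency(nodes, radius)
    for center in nodes:
        alive = [i for i, p in enumerate(nodes) if math.dist(p, center) > D / 2]
        if not connected(alive, adj):
            return False
    return True
```

A dense grid with generous link radius survives small regional failures, while a chain topology does not, which is exactly the distinction k-connectivity alone can miss.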
20.
Object-oriented metrics aim to exhibit the quality of source code and provide quantitative insight into it. Each metric assesses the code from a different aspect, and there is a relationship between the quality level and the risk level of source code. The objective of this paper is to examine empirically whether effective threshold values exist for source code metrics, with the goal of deriving generalized thresholds that can be used across different software systems. The relationship between metric thresholds and fault-proneness was investigated empirically using ten open-source software systems. Three levels of fault-proneness were defined for the software modules: non-fault-prone, more-than-one-fault-prone, and more-than-three-fault-prone. Two independent case studies were carried out to derive two different threshold values; a single training set was created by merging the ten datasets. The learner model was built using logistic regression and the Bender method. Results revealed that some metrics have threshold effects: seven metrics gave satisfactory results in the first case study, and eleven in the second. This study contributes primarily to software developers and testers. Developers can identify classes or modules that require revising, which raises the quality of those modules and lowers their risk level; testers can identify modules that need more testing effort and prioritize modules according to their risk levels.
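The Bender method derives a threshold from a fitted logistic model as the "value of an acceptable risk level" (VARL): the metric value at which the predicted fault probability equals a chosen base probability p0. The formula is standard; the coefficients used below are hypothetical, not fitted values from the paper.

```python
import math

def logistic(beta0, beta1, x):
    """Fitted logistic model: probability of fault at metric value x."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))

def varl(beta0, beta1, p0):
    """Bender's value of an acceptable risk level:
    solve logistic(beta0, beta1, x) = p0 for x."""
    return (math.log(p0 / (1.0 - p0)) - beta0) / beta1
```

Modules whose metric value exceeds `varl(...)` are flagged as exceeding the acceptable risk; raising p0 relaxes the threshold.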