Similar Documents (20 results)
1.
罗超  曹阳  彭小峰 《激光与红外》2021,51(5):646-651
To address the flicker problem in visible light communication, an adaptive puncturing method for dimming control using polar codes concatenated with Manchester coding is proposed. The polar code is concatenated with Manchester coding to provide 50% dimming in the visible light communication system, and dimming values other than 50% are achieved by puncturing and inserting compensation bits. Whereas a conventional puncturing scheme punctures all bits of the Manchester-coded symbols, the proposed method punctures adaptively according to the predetermined frozen bits of the selected polar code and inserts the corresponding compensation bits. Compared with conventional puncturing, the proposed method therefore punctures fewer bits of the Manchester-coded symbols and requires fewer compensation bits. Simulation results show that the proposed method achieves better error performance than the reference method.
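The abstract gives the scheme's idea without implementation detail. As a rough Python sketch of the two ingredients it builds on (hypothetical function names; no actual polar coding, frozen-bit selection, or puncturing logic), the following shows why Manchester coding pins dimming at 50% and how many all-one or all-zero compensation bits move a frame to another dimming target:

```python
def manchester_encode(bits):
    """Map 0 -> (0,1) and 1 -> (1,0): every input bit becomes one ON and one
    OFF slot, so the encoded frame is always exactly 50% ones (50% dimming)."""
    out = []
    for b in bits:
        out.extend((1, 0) if b else (0, 1))
    return out

def compensation_bits(n, dimming):
    """Count and value of compensation bits that move an n-bit, 50%-ones
    frame to the target dimming ratio (illustrative formula: solve
    (n/2 + v*k) / (n + k) = dimming for k, padding with all-v bits)."""
    if dimming > 0.5:
        return round(n * (dimming - 0.5) / (1.0 - dimming)), 1
    if dimming < 0.5:
        return round(n * (0.5 - dimming) / dimming), 0
    return 0, 0

frame = manchester_encode([1, 0, 1, 1, 0, 0, 1, 0])
k, v = compensation_bits(len(frame), 0.6)
frame += [v] * k
print(sum(frame) / len(frame))  # 0.6 -> 60% dimming
```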

2.
A novel data hiding technique based on modified histogram shifting that incorporates multi-bit secret data hiding is proposed. The proposed technique divides the image pixel values into embeddable and nonembeddable pixel values. Embeddable pixel values are those that are within a specified limit interval surrounding the peak value of an image. The limit interval is calculated from the number of secret bits to be embedded into each embeddable pixel value. The embedded secret bits can be perfectly extracted from the stego image at the receiver side without any overhead bits. From the simulation, it is found that the proposed technique produces a better quality stego image compared to other data hiding techniques, for the same embedding rate. Since the proposed technique only embeds the secret bits in a limited number of pixel values, the change in the visual quality of the stego image is negligible when compared to other data hiding techniques.
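For background on the histogram-shifting family this technique extends, here is a minimal NumPy sketch of classic single-bit histogram shifting (one bit per peak-valued pixel); the paper's multi-bit, limit-interval embedding is not reproduced, and the peak/zero selection is the usual textbook one:

```python
import numpy as np

def hs_embed(img, bits):
    """Classic single-bit histogram shifting: pixels strictly between the
    peak and zero levels are shifted by +1 to open a gap at peak+1, then
    each peak-valued pixel carries one bit (peak -> 0, peak+1 -> 1).
    Extraction reads bits at peak/peak+1 and shifts the gap back."""
    img = img.astype(np.int32)
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())                        # most frequent level
    zero = peak + 1 + int(hist[peak + 1:].argmin())  # emptiest level above it
    out = img.copy()
    out[(img > peak) & (img < zero)] += 1            # open the gap
    flat, ref = out.ravel(), img.ravel()
    i = 0
    for j in range(flat.size):
        if ref[j] == peak and i < len(bits):
            flat[j] = peak + bits[i]                 # embed one bit
            i += 1
    return out.astype(np.uint8), peak, zero

rng = np.random.default_rng(0)
cover = rng.normal(128, 20, (64, 64)).clip(0, 255).astype(np.uint8)
stego, peak, zero = hs_embed(cover, [1, 0, 1, 1, 0])
print(int(np.abs(stego.astype(int) - cover.astype(int)).max()))  # distortion <= 1
```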

3.
A low-power dual test data compression scheme
陈田  易鑫  王伟  刘军  梁华国  任福继 《电子学报》2017,45(6):1382-1388
With the development of integrated circuit manufacturing processes, VLSI (Very Large Scale Integrated) circuit testing faces the problems of large test data volume and excessive test power. This paper proposes a low-power test data compression scheme based on multi-level compression. The scheme first preprocesses the original test set with an input reduction technique to decrease the number of specified bits in the test set. In the first compression stage, each test vector is divided into sub-vectors across multiple scan chains and compatible sub-vectors are merged, so that the compressed test vectors can be represented by shorter codewords. The test data are then filled for low power: capture-power filling is performed first, bringing capture power within a safety threshold, and the remaining don't-care bits are then filled to reduce shift power. Finally, the filled test data undergo a second compression stage using an improved run-length code. Experimental results on the ISCAS'89 benchmark circuits show that the scheme achieves a higher compression ratio than Golomb, FDR, EFDR, 9C, and BM codes, while jointly optimizing capture power and shift power during testing.

4.
Differentiation applied to lossless compression of medical images
Lossless compression of medical images using a proposed differentiation technique is explored. This scheme is based on computing weighted differences between neighboring pixel values. The performance of the proposed approach, for the lossless compression of magnetic resonance (MR) images and ultrasonic images, is evaluated and compared with the lossless linear predictor and the lossless Joint Photographic Experts Group (JPEG) standard. The residue sequence of these techniques is coded using arithmetic coding. The proposed scheme yields compression measures, in terms of bits per pixel, that are comparable with or lower than those obtained using the linear predictor and the lossless JPEG standard, respectively, with 8-bit medical images. The advantages of the differentiation technique presented here over the linear predictor are: 1) the coefficients of the differentiator are known by the encoder and the decoder, which eliminates the need to compute or encode these coefficients, and 2) the computational complexity is greatly reduced. These advantages are particularly attractive in real-time processing for compressing and decompressing medical images.
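The abstract does not list the differentiator weights, so the sketch below uses assumed ones; it illustrates the scheme's two stated advantages (fixed coefficients known to both sides, trivial computation) and estimates the residue's first-order entropy as a stand-in for the arithmetic coder's rate:

```python
import numpy as np

def weighted_difference_residue(img, w=(0.5, 0.5)):
    """Fixed-coefficient differentiator: predict each pixel as
    w[0]*left + w[1]*above and return the rounded prediction residue.
    The weights are constants shared by encoder and decoder, so nothing
    is computed or transmitted per image. (Weights are illustrative.)"""
    img = img.astype(np.float64)
    pred = np.zeros_like(img)
    pred[1:, 1:] = w[0] * img[1:, :-1] + w[1] * img[:-1, 1:]
    return np.rint(img - pred).astype(np.int32)

def entropy_bits_per_pixel(a):
    """First-order entropy: a lower bound on the arithmetic coder's rate."""
    _, counts = np.unique(a, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

img = np.tile(np.arange(64, dtype=np.uint8), (64, 1))  # smooth test image
res = weighted_difference_residue(img)
print(entropy_bits_per_pixel(res), "<", entropy_bits_per_pixel(img))
```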

5.
The CCITT has defined Group 3 facsimile apparatus as that which digitally transmits an ISO A4 document over a switched telephone circuit in approximately one minute. Data compression is employed to achieve the reduced transmission time. Study Group XIV of the CCITT has drafted Recommendation T.4 to achieve compatibility between Group 3 facsimile devices. The standard data compression technique specified by T.4 is a one-dimensional coding scheme in which run lengths are encoded using a modified Huffman code (MHC). The recommendation also includes an optional two-dimensional compression technique known as the modified READ code (MRC). It is recognized that the switched telephone network is prone to error when transmitting digital data at the standard T.4 data rate of 4800 bits/s. This paper evaluates the error sensitivity of the MHC and MRC when operating over a typical telephone circuit. The error sensitivity analysis is accomplished by means of computer simulation. The error performance of the two coding techniques is analyzed both quantitatively and qualitatively. The quantitative analysis is accomplished using the error sensitivity factor, which represents the average number of incorrect pels in the output document caused by a transmission error. The qualitative analysis is based upon viewing actual error-contaminated images generated in the simulation process. Two separate analyses have been performed. First, error sensitivity data for both the MHC and MRC are developed under identical operational conditions, and their relative performance is discussed. In the second part, four different techniques for processing the received facsimile signal (MRC), to minimize the subjective effect of transmission errors, are analyzed.
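As a small illustration of what the modified Huffman code actually encodes, the sketch below extracts the alternating white/black run lengths of one binary scan line (assuming 0 = white as a convention; the MH code table itself and the makeup/terminating split for long runs are only noted in comments):

```python
def run_lengths(scan_line):
    """Alternating white/black run lengths of one scan line, i.e. the values
    T.4's modified Huffman code maps to codewords. Lines are taken to start
    with a (possibly empty) white run; runs of 64 or more are transmitted
    as a makeup codeword plus a terminating codeword."""
    runs, color, count = [], 0, 0        # 0 = white by assumption
    for pel in scan_line:
        if pel == color:
            count += 1
        else:
            runs.append(count)
            color, count = pel, 1
    runs.append(count)
    return runs

line = [0] * 10 + [1] * 3 + [0] * 70 + [1] * 2
print(run_lengths(line))  # [10, 3, 70, 2]; the 70 would be sent as 64 + 6
```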

6.
To address the large decoding latency caused by the serial output of polar code decoders, this paper proposes a maximum-likelihood simplified successive cancellation decoding algorithm based on pre-decoding. First, the signs of the likelihood values stored at a decoding-tree node are extracted and grouped, yielding a set of sign vectors. Comparing these sign vectors with the values of certain information bits of the node reveals a one-to-one correspondence between the distribution of signs in the vectors and the values of the node's middle information bits. On this basis, 1 to 2 bits in the middle of the constituent code are pre-decoded. Finally, the remaining information bits of the constituent code are estimated with maximum-likelihood decoding to obtain the final decoding result. Simulation results show that, without affecting error performance, the proposed algorithm effectively reduces decoding latency compared with existing algorithms.

7.
The error pattern correcting code (EPCC) can be constructed to provide a syndrome decoding table targeting the dominant error events of an inter-symbol interference channel at the output of the Viterbi detector. For the size of the syndrome table to be manageable and the list of possible error events to be reasonable in size, the codeword length of EPCC needs to be short enough. However, the rate of such a short code is too low for hard drive applications. To accommodate the required large redundancy, it is possible to record only a highly compressed function of the parity bits of EPCC's tensor product with a symbol correcting code. In this paper, we show that the proposed tensor error-pattern correcting code (T-EPCC) is linear-time encodable, and we also devise a low-complexity soft iterative decoding algorithm for EPCC's tensor product with q-ary LDPC (T-EPCC-qLDPC). Simulation results show that T-EPCC-qLDPC achieves nearly the same performance as single-level qLDPC with a 1/2 KB sector at a 50% reduction in decoding complexity. Moreover, 1 KB T-EPCC-qLDPC surpasses the performance of 1/2 KB single-level qLDPC at the same decoder complexity.

8.
While integrated circuits of ever increasing size and complexity necessitate larger test sets for ensuring high test quality, the consequent test time and data volume translate into elevated test costs. Test data compression solutions have been proposed to address this problem by storing and delivering stimuli in a compressed format. The effectiveness of these techniques, however, strongly relies on the distribution of the specified bits of test vectors. In this paper, we propose a scan cell partitioning technique that ensures specified bits are uniformly distributed across the scan slices, especially for the test vectors with a higher density of specified bits. The proposed scan cell partitioning process is driven by an integer linear programming (ILP) formulation, wherein it is also possible to account for layout and routing constraints. While the proposed technique can be applied to increase the effectiveness of any combinational decompression architecture, in this paper we present its application in conjunction with a fan-out based decompression architecture. The experimental results confirm the compression enhancement of the proposed methodology.
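The paper drives the partitioning with an ILP; as a rough stand-in that only illustrates the balancing objective (spreading specified bits uniformly across scan slices), here is a greedy Python sketch. The input format, cell weighting, and greedy rule are all assumptions for illustration, not the paper's formulation:

```python
def partition_scan_cells(test_cubes, n_chains):
    """Greedy stand-in for the ILP: assign scan cells to chains so that
    heavily specified cells are spread out. Cells at the same depth of
    different chains form one scan slice, so balancing the per-chain
    specified-bit load roughly evens out the slices.
    test_cubes: strings over {'0','1','X'}, one character per scan cell."""
    n_cells = len(test_cubes[0])
    # Weight of a cell = number of test cubes that specify it (non-X).
    weight = [sum(cube[c] != 'X' for cube in test_cubes) for c in range(n_cells)]
    order = sorted(range(n_cells), key=lambda c: -weight[c])
    chains = [[] for _ in range(n_chains)]
    loads = [0] * n_chains
    for c in order:                        # heaviest cell first,
        i = loads.index(min(loads))        # onto the least-loaded chain
        chains[i].append(c)
        loads[i] += weight[c]
    return chains

cubes = ["1XX0XX1X", "XX10XXX1", "0XXXX1XX"]
print(partition_scan_cells(cubes, 2))
```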

Ozgur Sinanoglu received a B.S. degree in Computer Engineering and another B.S. degree in Electrical and Electronics Engineering, both from Bogazici University in Turkey in 1999. He earned his M.S. and Ph.D. degrees in the Computer Science and Engineering department of the University of California, San Diego, in 2001 and 2004, respectively. Between 2004 and 2006, he worked as a senior design-for-testability engineer at Qualcomm in San Diego, California. Since Fall 2006, he has been a faculty member in the Mathematics and Computer Science Department of Kuwait University. His research field is the design for testability of VLSI circuits.

9.
The main concern of this article is to find linear codes which will correct a set of arbitrary error patterns. Although linear codes designed for correcting random error patterns and burst error patterns can be used, we would like to find codes which will correct a specified set of error patterns with the fewest possible redundant bits. Here, to reduce the complexity involved in finding the code with the smallest redundancy which can correct a specified set of error patterns, algebraic codes whose parity check matrix exhibits a particular structure are considered. If the number of redundant bits is T, the columns of the parity check matrix must be increasing powers of a field element in GF(2^T). Given a set of error patterns to be corrected, computations to determine the code rates possible for this type of code, and hence the redundancy for different codeword lengths, are presented. Results for various sets of error patterns suggest that the redundancy of these algebraic codes is close to the minimum redundancy possible for the specified set of error patterns and for any codeword length.

10.
We propose a lossless compression algorithm for three-dimensional (3-D) binary voxel surfaces, based on the pattern code representation (PCR). In PCR, a voxel surface is represented by a series of pattern codes. The pattern of a voxel v is defined as the 3 × 3 × 3 array of voxels centered on v. Therefore, the pattern code for v describes the local shape of the voxel surface around v. The proposed algorithm achieves a coding gain because the patterns of adjacent voxels are highly correlated with each other. The performance of the proposed algorithm is evaluated using various voxel surfaces, which are scan-converted from triangular mesh models. It is shown that the proposed algorithm requires only 0.5~1 bits per black voxel (bpbv) to store or transmit the voxel surfaces.
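The pattern code itself is straightforward to state in code. A minimal sketch, assuming a dense NumPy voxel array and ignoring boundary handling and the entropy coder that exploits the correlation between adjacent codes:

```python
import numpy as np

def pattern_code(volume, v):
    """27-bit pattern code of voxel v: its 3 x 3 x 3 neighborhood flattened
    into an integer. Adjacent voxels have overlapping neighborhoods, so
    consecutive pattern codes are highly correlated -- the redundancy the
    entropy coder removes."""
    x, y, z = v
    code = 0
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                code = (code << 1) | int(volume[x + dx, y + dy, z + dz])
    return code

vol = np.zeros((8, 8, 8), dtype=np.uint8)
vol[3:6, 3:6, 4] = 1                    # a small flat surface patch
print(hex(pattern_code(vol, (4, 4, 4))))
```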

11.
A low memory zerotree coding for arbitrarily shaped objects
The set partitioning in hierarchical trees (SPIHT) algorithm is a computationally simple and efficient zerotree coding technique for image compression. However, its high working memory requirement is the main drawback for hardware realization. We present a low memory zerotree coder (LMZC), which requires much less working memory than SPIHT. The LMZC coding algorithm abandons the use of lists, defines a different tree structure, and merges the sorting pass and the refinement pass together. The main techniques of LMZC are recursive programming and a top-bit scheme (TBS). In TBS, the top bits of transformed coefficients are used to store the coding status of coefficients instead of the lists used in SPIHT. To achieve high coding efficiency, shape-adaptive discrete wavelet transforms are used to transform arbitrarily shaped objects. A compact placement of the transformed coefficients is also proposed to further reduce working memory. LMZC carefully treats "don't care" nodes in the wavelet tree and does not spend bits coding such nodes. Comparison of LMZC with SPIHT shows that for coding a 768 × 512 color image, LMZC saves at least 5.3 MB of memory at the cost of a slight increase in execution time and a minor reduction in peak signal-to-noise ratio (PSNR), making it highly promising for memory-limited applications.

12.
A technique for lossless compression of seismic signals is proposed. The algorithm is based on the equation-error structure, which approximates the signal by minimizing the error in the least-squares sense and estimates the transfer characteristic as a rational function or, equivalently, as an autoregressive moving-average process. The algorithm is implemented in the frequency domain. The performance of the proposed technique is compared with the lossless linear predictor and differentiator approaches for compressing seismic signals. The residual sequence of these schemes is coded using arithmetic coding. The suggested approach yields compression measures (in terms of bits per sample) lower than the lossless linear predictor and the differentiator for different classes of seismic signals.
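The paper fits a rational (ARMA) model in the frequency domain; as a simplified time-domain illustration of the equation-error idea, the sketch below fits an AR-only predictor by linear least squares and returns the residue an arithmetic coder would then encode (model order and test signal are arbitrary):

```python
import numpy as np

def equation_error_ar_residue(x, order=4):
    """Equation-error (least-squares) AR fit: choose coefficients a to
    minimize ||x[n] - sum_k a[k] * x[n-k-1]||^2, then return the prediction
    residue. An AR-only, time-domain stand-in for the paper's frequency-
    domain ARMA estimate."""
    x = np.asarray(x, dtype=np.float64)
    # Regression matrix of past samples: column k holds x[n-k-1].
    A = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    b = x[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a, b - A @ a

t = np.arange(400)
sig = np.sin(0.05 * t) + 0.01 * np.random.default_rng(2).normal(size=400)
a, res = equation_error_ar_residue(sig)
print(res.var() / sig.var())   # residue carries a small fraction of the energy
```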

13.
A digital modulation algorithm for wireless transmission achieving 20 bits/s/Hz: VMSK/2
胡剑凌  徐盛  陈健 《电子学报》2002,30(8):1153-1155
This paper introduces a new digital baseband modulation algorithm with extremely high spectral efficiency: 1/2 very minimum shift keying (VMSK/2). VMSK/2 can greatly compress the bandwidth required for signal transmission without sacrificing signal-to-noise ratio. By combining VMSK/2 with analog single-sideband suppressed-carrier (SSB-SC) modulation, a spectral efficiency of nearly 20 bits/s/Hz can be obtained in a radio frequency (RF) transmission system using existing hardware technology. The paper presents a theoretical analysis of VMSK/2 combined with SSB-SC and gives simulation results. Numerical experiments show that, to achieve 20 bits/s/Hz at a bit error rate (BER) of 1.0×10^-6, the VMSK/2 plus SSB-SC wireless transmission system requires a signal-to-noise ratio (SNR) of 13 dB.

14.
A new test-decompression methodology using a variable-rank multiple-polynomial linear feedback shift register (MP-LFSR) is proposed. In the proposed reseeding scheme, a test cube with a large number of specified bits is encoded with a high-rank polynomial, while a test cube with a small number of specified bits is encoded with a low-rank polynomial. Therefore, according to the number of specified bits in each test cube, the size of the encoded data can be optimally reduced. A variable-rank MP-LFSR can be implemented with a slight modification of a conventional MP-LFSR. The experimental results on the largest ISCAS'89 benchmark circuits show that the proposed methodology can provide much better encoding efficiency than the previous methods with adequate hardware overhead.
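As a toy illustration of the decompression side only, the sketch below expands a seed through an LFSR (tap positions and seed are hypothetical). The encoding direction, solving a GF(2) linear system so the expanded stream matches a cube's specified bits with the polynomial rank chosen per cube, is summarized in the docstring rather than implemented:

```python
def lfsr_expand(seed, taps, n_out):
    """Expand a seed through a Fibonacci LFSR into a test pattern. Every
    output bit is an XOR (linear) function of the seed bits, so reseeding
    encodes a test cube by solving a linear system over GF(2) that matches
    the cube's specified bits; a cube with s specified bits needs roughly
    s seed variables, which is why a variable-rank MP-LFSR can spend fewer
    seed bits on lightly specified cubes."""
    state = list(seed)
    out = []
    for _ in range(n_out):
        out.append(state[-1])              # shift out one pattern bit
        fb = 0
        for t in taps:                     # XOR feedback from the taps
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

# Hypothetical 8-bit LFSR and seed.
print(lfsr_expand([1, 0, 0, 1, 0, 1, 1, 0], taps=(7, 5, 4, 3), n_out=16))
```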

15.
An efficient multi-rate encoder for IEEE 802.16e LDPC codes is proposed, which outperforms current single-rate encoders with acceptable hardware consumption and efficient memory usage. The design utilizes the common dual-diagonal structure in parity matrices to avoid the inverse matrix operation, which requires extensive computation. Parallel matrix-vector multiplication (MVM) units, bidirectional operation, and storage compression are applied to the multi-rate encoder to increase the encoding speed and significantly reduce the number of memory bits required. The proposed encoding architecture also carries over to the design of multi-rate encoders whose parity matrices are dual-diagonally structured and have an approximately lower triangular (ALT) form, such as in IEEE 802.11n and IEEE 802.22. Simulation results verify that the proposed encoder works efficiently for all code rates specified in the WiMAX standard. With a maximum clock frequency of 117 MHz, the encoder achieves 3 to 10 times higher throughput than prior works. The encoder can switch among the six rates by adjusting an input parameter, and it achieves throughput up to 1 Gbps.

16.
A distributed joint source-channel network coding scheme based on low-density parity-check codes is proposed for a wireless sensor network with two sources, one relay, and one destination. In the scheme, the source nodes transmit the parity bits of a systematic channel code together with part of the information bits, achieving source compression and channel error correction at the same time. The relay node exploits the correlation of the data when decoding and punctures part of the data bits, reducing the error propagation caused by network coding at the relay; simulations verify the effectiveness of the scheme. The idea of unequal error protection is also applied, which fits practical application scenarios more closely and helps the destination node decode with lower error.

17.
Research on the receiving performance of MIMO-STOBC systems
曾浩  文娟  朱奕奕 《通信技术》2007,40(9):3-4,7
STOBC (Space-Time Orthogonal Block Code) is one of the methods MIMO systems use to achieve space-time diversity; Tarokh extended this coding structure with orthogonal designs. Since the receiver needs only linear processing, this coding scheme has been adopted by 3G as one of its open-loop transmit diversity strategies. Taking a MIMO system as the platform, this paper analyzes the traditional STOBC decoding algorithm; by analyzing and comparing the received data and fully exploiting their statistical properties and internal relationships, a joint same-code detection algorithm based on orthogonal spreading codes is proposed. Simulation results show that the algorithm delivers excellent performance while reducing the amount of computation.
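The abstract does not spell out the joint detection algorithm. As background on the purely linear receiver processing STOBC permits, here is a noise-free NumPy sketch of the classic two-antenna orthogonal design (Alamouti), in which the code's orthogonality separates the two symbols by linear combining alone:

```python
import numpy as np

def alamouti_encode(s1, s2):
    """The 2x2 orthogonal design (Alamouti), the simplest STOBC:
    rows are time slots, columns are transmit antennas."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining at the receiver: orthogonality of the design
    separates s1 and s2 with no matrix inversion."""
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    return s1_hat, s2_hat

rng = np.random.default_rng(0)
h = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)  # flat fading
s1, s2 = 1 + 1j, -1 + 1j                                         # QPSK symbols
r = alamouti_encode(s1, s2) @ h                # received over two time slots
est = alamouti_combine(r[0], r[1], h[0], h[1])
gain = abs(h[0]) ** 2 + abs(h[1]) ** 2
print(np.round(np.array(est) / gain, 6))       # recovers (s1, s2)
```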

18.
A discrete approach to multiple tone modulation is developed for digital communication channels with arbitrary intersymbol interference (ISI) and additive Gaussian noise. Multiple tone modulation is achieved through the concatenation of a finite block length modulator based on discrete Fourier transform (DFT) code vectors, and high gain coset or trellis codes. Symbol blocks from an inverse DFT (IDFT) are cyclically extended to generate ISI-free channel-output symbols that decompose the channel into a group of orthogonal and independent parallel subchannels. Asymptotic performance of this system is derived, and examples of asymptotic and finite block length coding gain performance for several channels are evaluated at different values of bits per sample. This discrete multiple tone technique is linear in both the modulation and the demodulation, and is free from the effects of error propagation that often afflict systems employing bandwidth-optimized decision feedback plus coset codes
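A minimal noise-free NumPy sketch of the core mechanism the abstract describes: an IDFT block with a cyclic extension turns an ISI channel into independent parallel subchannels recoverable by per-tone division (block size, prefix length, and the example channel are arbitrary):

```python
import numpy as np

def dmt_modulate(symbols, cp_len):
    """One multitone block: IDFT of the per-tone symbols plus a cyclic
    prefix. A prefix at least as long as the channel memory makes linear
    convolution with the channel look circular over the block."""
    x = np.fft.ifft(symbols)
    return np.concatenate([x[-cp_len:], x])

def dmt_demodulate(y, n, cp_len, channel):
    """Drop the prefix, take the DFT, and equalize each tone by a single
    division -- the subchannels are orthogonal and independent."""
    Y = np.fft.fft(y[cp_len:cp_len + n])
    return Y / np.fft.fft(channel, n)

n, cp = 16, 4
rng = np.random.default_rng(1)
syms = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=n)  # QPSK
h = np.array([1.0, 0.5, 0.2])          # ISI channel with memory 2 (< cp)
rx = np.convolve(dmt_modulate(syms, cp), h)
print(np.allclose(dmt_demodulate(rx, n, cp, h), syms))                   # True
```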

19.
Second generation image coding techniques, which use information about the human visual system to reach high compression ratios, have proven very successful when applied to single images. These methods can also be applied to image sequences. A directional decomposition based sequence coding technique is presented, in which spatial lowpass and highpass components are analyzed and coded separately. A simple law for sharing the available bits between these components is stated and analytically proved by using a minimum cost/resolution optimality criterion. The detection of directional elements is carried out by using both linear and nonlinear (median) filtering. The coding is based on near optimal estimators which retain only the innovation part of information, and is well suited for differential pulse code modulation. The results of applying this method to a typical sequence are shown. The estimated compression ratio is approximately 320 : 1 (0.025 bits per pixel), allowing a transmission rate of about 41 kbit/second. The resulting image quality is reasonably good.

20.
Building on conventional polar code decoding, auxiliary decoding bits are introduced and an auxiliary polar codeword is constructed to improve decoding performance. The auxiliary bits are determined by the information bits inside an auxiliary window selected according to the channel. If decoding fails at the receiver, a second decoding attempt is made. The decoding scheme proceeds in two stages: first, the auxiliary decoding bits are decoded based on an extended generator matrix of the same structure; then, combining the decoded auxiliary bits, the original codeword is decoded, improving the decoding success rate. Simulation results show that the proposed method performs clearly better than plain successive cancellation decoding, and achieves performance gains of 1 dB and 1.9 dB over two conventional automatic repeat request schemes, respectively.
