Similar Documents
20 similar documents found.
1.
This paper proposes a distributed stereo image coding method. Traditional stereo image coding is based on disparity compensation, which has high encoding complexity and requires image data to be exchanged at the encoding end, placing heavy demands on the encoder's storage and computing capacity. Although such methods achieve good coding efficiency, they are unsuitable in some situations, for example when the encoder's computing power or storage is limited, or when the encoders cannot communicate with each other or the cost of doing so is high. According to the Wyner-Ziv result, two correlated sources that are encoded independently and decoded jointly can achieve the same rate-distortion performance as joint encoding with joint decoding. The distributed coding method proposed here therefore encodes the two images independently at the encoder and decodes them jointly at the decoder. This reduces encoder-side complexity while still effectively removing the redundancy between the stereo pair and achieving good coding efficiency.

2.
In a distributed video encoder, the number of parity bits needed for Turbo-code error correction directly determines the rate-distortion (RD) performance of the whole codec. This paper analyzes the sub-optimality of the conventional algorithm and proposes a new channel likelihood computation algorithm based on bit-probability optimization. The algorithm first decomposes the original pixels into bit-planes and then performs independent Turbo encoding and joint decoding on each bit-plane. Conditioned on the bit-planes already decoded, the decoder computes a more accurate channel likelihood for the bit-plane currently being decoded and uses it as the Turbo decoder input, reducing the number of parity bits that must be transmitted and improving the RD performance of the encoder. Experimental results show that the proposed algorithm clearly improves the RD performance of the coding system.
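Illustrative sketch (not from the cited paper): the first step described above, splitting 8-bit pixels into bit-planes so that each plane can be Turbo-encoded separately, can be written in a few lines of NumPy. Function and variable names are invented for illustration.

```python
import numpy as np

def to_bitplanes(pixels: np.ndarray, depth: int = 8) -> np.ndarray:
    """Split an array of 8-bit pixel values into `depth` binary bit-planes.

    Returns an array of shape (depth, *pixels.shape); plane 0 holds the most
    significant bit. Each plane would then be Turbo-encoded independently.
    """
    pixels = pixels.astype(np.uint8)
    planes = [((pixels >> (depth - 1 - b)) & 1) for b in range(depth)]
    return np.stack(planes, axis=0)

# Example: a tiny 2x2 block of pixel values.
block = np.array([[200, 13], [97, 255]], dtype=np.uint8)
planes = to_bitplanes(block)
print(planes.shape)   # (8, 2, 2)
print(planes[0])      # MSB plane: [[1, 0], [0, 1]]
```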

3.
A Multiple Description Video Coding Method Based on Motion-Compensated 3-D Wavelets (Cited by 2: 0 self-citations, 2 by others)
卓力  王仕宝  王素玉  张菁 《电子学报》2009,37(10):2154-2159
This paper combines multiple description coding with motion-compensated 3-D wavelet scalable video coding and proposes a multiple description video coding method based on motion-compensated 3-D wavelets. The method first allocates the bit rate of each description adaptively according to the motion characteristics of the sequence, so as to control the redundancy within each description. It then copies the key information of the sequence, namely the motion vectors and the low-frequency frame bitstream, into both descriptions, while distributing the high-frequency frame bitstreams between the two descriptions. At the decoder, different reconstruction methods are used depending on which information is correctly received. Experimental results show that, compared with single description coding, the proposed method provides better transmission robustness when the channel packet loss rate is high.

4.
An Adaptive Rate-Distortion Optimized Coding Algorithm for H.264 Video Streams (Cited by 1: 0 self-citations, 1 by others)
Intra refresh is currently a common way to improve the error resilience of coded video streams in packet-loss networks, and rate-distortion optimized intra refresh is considered a more direct and effective solution; it is the approach adopted in the H.264/JVT video coding standard. However, because the influence of the channel packet loss rate on the number of decoder simulations performed at the encoder is not taken into account, rate-distortion optimized coding incurs a heavy computational load and long encoding time, which severely hurts encoder efficiency. Based on this analysis, an improved adaptive rate-distortion optimized coding algorithm is proposed: the algebraic mean used in the H.264 rate-distortion optimization to estimate the expected decoder-side frame distortion is replaced with a weighted average. Simulation results show that the proposed algorithm adaptively determines the number of decoder simulations at the encoder from the channel packet loss rate and the number of simulated channel states, effectively reducing the computational redundancy and complexity of the standard H.264 rate-distortion optimization and shortening the encoding time. With the default of 30 simulated channel states, the algorithm saves up to nearly 55% of the encoding time.
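Illustrative sketch (not from the cited paper): the abstract's core change is replacing the algebraic mean of the simulated decoder-side distortions with a weighted average driven by the packet loss rate. The weighting below is a plausible, hypothetical choice; the paper's exact weights and its rule for choosing the number of simulations are not reproduced here.

```python
import numpy as np

def expected_frame_distortion(distortions, lost_counts, n_packets, p):
    """Weighted-average estimate of decoder-side frame distortion (hypothetical sketch).

    distortions : distortion measured in each simulated decoding at the encoder
    lost_counts : number of packets assumed lost in the corresponding simulation
    n_packets   : packets per simulated channel state
    p           : channel packet loss rate

    The plain algebraic mean sum(d)/K weights every simulation equally; here each
    simulated decoding is weighted by the probability of its assumed loss pattern.
    """
    d = np.asarray(distortions, dtype=float)
    lost = np.asarray(lost_counts, dtype=float)
    w = (p ** lost) * ((1.0 - p) ** (n_packets - lost))
    return float(np.dot(w, d) / w.sum())

# 30 simulated channel states, matching the abstract's default setting (values made up).
rng = np.random.default_rng(0)
lost = rng.integers(0, 4, size=30)
dist = 10.0 + 5.0 * lost + rng.random(30)
print(expected_frame_distortion(dist, lost, n_packets=10, p=0.05))
```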

5.
A Multiple Description Image Coding Method Based on the Compressed Sensing Framework (Cited by 6: 1 self-citation, 5 by others)
Based on the emerging theory of compressed sensing (CS), a multiple description coding method is proposed that is robust to packet loss and structurally simple to implement. The transformed image is first split into blocks by interleaved extraction; each sub-block is then randomly measured, quantized, and packetized to form multiple description sub-streams. The decoder reconstructs the original image by solving an optimization problem over whatever streams are received. Because the random measurement process is simple to implement, the method can construct a relatively large number of descriptions at low computational complexity. Experimental results show that, at the same packet loss rate, the reconstruction quality (PSNR) of the proposed method is clearly better than that of the SPIHT-based multiple description coder, at lower computational complexity.
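Illustrative sketch (not from the cited paper): forming CS-based descriptions by interleaved extraction, random measurement, and uniform quantization, roughly as described above. The decoder-side optimization (e.g., l1 reconstruction) is omitted; all names and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def cs_descriptions(coeffs: np.ndarray, n_desc: int, m: int, step: float):
    """Form multiple descriptions from transform coefficients (hypothetical sketch).

    coeffs : 1-D vector of transform coefficients of the image
    n_desc : number of descriptions (interleaved sub-blocks)
    m      : random measurements kept per description
    step   : uniform quantization step
    """
    descriptions = []
    for d in range(n_desc):
        x = coeffs[d::n_desc]                     # interleaved extraction of one sub-block
        phi = rng.standard_normal((m, x.size))    # random measurement matrix
        y = phi @ x                               # CS measurements
        q = np.round(y / step).astype(np.int32)   # uniform quantization -> packetized
        descriptions.append((phi, q))
    return descriptions

coeffs = rng.standard_normal(1024)
descs = cs_descriptions(coeffs, n_desc=4, m=64, step=0.1)
print(len(descs), descs[0][1].shape)              # 4 descriptions, 64 measurements each
```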

6.
A New Unified VBR Coding Method for Speech Signals (Cited by 5: 0 self-citations, 5 by others)
杨震  郑宝玉 《电子学报》2002,30(1):49-53
This paper proposes a new two-stage speech coding and decoding method, the EMSVBR system. After voice activity detection, the input signal is compressed by a two-stage encoder: the core encoder is based on hybrid coding, and the enhancement encoder is based on wavelet subband coding (SBC). The system bitstream is layered and embedded, and the bit rate varies both with the burstiness of speech and with network capacity or channel conditions, covering the rates of almost all current speech coding standards; moreover, the decoded speech quality of the new system is higher than that of a single coding standard at the same rate. This VBR speech codec is particularly suitable for voice communication over future IP and ATM networks.

7.
To improve the error resilience of the coded video stream and the efficiency of the encoder when designing an H.264-based video communication system, a channel-adaptive unequal error protection (UEP) technique is proposed. Using channel state information fed back from the network, the technique adaptively selects the operating mode of the H.264 coded video stream in the network abstraction layer (NAL); over packet-loss channels it repartitions the coded video information according to the characteristics of human vision and applies error protection of different priorities to the partitions. Simulation results show that, while effectively improving the visual quality of the reconstructed video at the decoder of the H.264 video communication system, the technique further reduces the packetization overhead at the encoder and thus improves encoding efficiency.

8.
For video transmission over unreliable networks such as the Internet and wireless channels, this paper proposes a multiple description video coding scheme based on H.264 and the dual-tree wavelet transform. A layered multiple description framework combines H.264 and dual-tree wavelet coding. In the base layer, the video is encoded at a low rate with an H.264 encoder and the result is copied into every description; in the enhancement layer, the residual between the original video and the base-layer reconstruction is coded with a 3-D dual-tree wavelet transform, and the four resulting 3-D wavelet trees are noise-shaped, grouped in pairs, and coded into different descriptions. At the decoder, if both descriptions are received, the central decoder reconstructs high-quality video; if one description is lost, the side decoder still guarantees reconstruction of acceptable quality. Experimental results show that, at the same bit rate, the central and side decoding quality of the proposed algorithm is better than that of existing multiple description video coding algorithms.

9.
《信息技术》2016,(8):200-203
To address the distance and cost limitations of speech transmission in data acquisition systems, a low-cost real-time speech compression and communication system is designed. The hardware follows a modular design; LAN communication is realized by porting a trimmed-down TCP/IP stack to an Ethernet controller chip, and the G.723.1 speech coding algorithm is implemented and optimized on a DSP. The encoded speech data are sent over a socket to a PC, which receives, decodes, and plays them back in real time. Experimental results show that, with only a slight degradation in speech quality, the algorithm complexity is reduced and the system achieves long-distance real-time speech transmission.

10.
贾懋珅  鲍长春 《电子学报》2009,37(10):2291-2297
Based on the ITU-T coding standard G.729.1, this paper proposes an embedded variable-bit-rate stereo speech and audio coding method. The algorithm uses G.729.1 and an improved modulated lapped transform (MLT) coding technique to encode the mid and side information of the input signal in layers, producing a bitstream with an embedded structure. The coder handles both wideband and super-wideband stereo signals, with maximum rates of 48 kb/s for wideband stereo and 64 kb/s for super-wideband stereo. Implementation results show that the coding quality meets the ITU-T requirements for G.EV-VBR stereo coding.

11.
Joint Design of FPGA-Based LDPC Encoders and Decoders (Cited by 1: 0 self-citations, 1 by others)
By analyzing the encoding and decoding processes of low-density parity-check (LDPC) codes, this paper proposes a joint FPGA-based design method for LDPC encoders and decoders in which the encoder and decoder share the same parity-check computation circuit and reuse the same RAM blocks, effectively reducing hardware resource consumption. The method applies whenever both encoding and decoding are performed with the parity-check matrix; it suits fully parallel encoder/decoder structures as well as the widely used partially parallel structures, and supports decoding algorithms such as sum-product and min-sum. Joint encoder/decoder designs with a partially parallel structure were carried out for two different LDPC codes, and implementation results on a Xilinx XC4VLX80 FPGA show that the resulting encoder and decoder can operate in parallel while occupying only slightly more hardware than a single decoder. The proposed method effectively reduces the hardware requirements of the system without lowering throughput.

12.
时文华  张雄伟  邹霞  孙蒙 《信号处理》2019,35(4):631-640
To address the fact that conventional neural networks do not fully exploit correlations in the time-frequency domain, a single-channel speech enhancement method using a deep fully convolutional encoder-decoder network is proposed. On the encoder side, convolution layers extract features from the time-frequency representation of the noisy speech level by level, obtaining a high-level representation of the target speech while progressively suppressing background noise. The decoder mirrors the encoder: it applies deconvolution and upsampling to the high-level representation to recover the target speech layer by layer. Skip connections alleviate the vanishing-gradient problem when training very deep networks; here they are introduced between corresponding encoder and decoder layers, passing encoder feature maps to the matching decoder layers and helping to recover the fine details of the target speech. The effects on enhancement performance of two skip-connection styles, feature fusion and feature concatenation, and of L1 versus L2 training losses are studied, and experiments verify the effectiveness of the proposed method.
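Illustrative sketch (not from the cited paper): a miniature fully convolutional encoder-decoder with a concatenation-style skip connection and an L1 training loss, written in PyTorch. The depth, channel counts, and kernel sizes are placeholders, far smaller than a network one would actually train for speech enhancement.

```python
import torch
import torch.nn as nn

class ConvED(nn.Module):
    """Tiny fully convolutional encoder-decoder with a skip connection (sketch).

    Input: noisy speech time-frequency magnitude, shape (batch, 1, F, T).
    """
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU())
        # Skip connection by concatenation: 16 upsampled + 16 encoder channels.
        self.dec1 = nn.ConvTranspose2d(16 + 16, 1, 4, stride=2, padding=1)

    def forward(self, x):
        e1 = self.enc1(x)                           # (B, 16, F/2, T/2)
        e2 = self.enc2(e1)                          # (B, 32, F/4, T/4)
        d2 = self.dec2(e2)                          # (B, 16, F/2, T/2)
        d1 = self.dec1(torch.cat([d2, e1], dim=1))  # splice encoder features back in
        return d1                                   # enhanced magnitude estimate

net = ConvED()
noisy = torch.randn(2, 1, 64, 64)
loss = nn.L1Loss()(net(noisy), torch.randn(2, 1, 64, 64))  # L1 training loss variant
```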

13.
Distributed Joint Source-Channel Coding of Video Using Raptor Codes (Cited by 1: 0 self-citations, 1 by others)
Extending recent work on distributed source coding, this paper considers distributed source-channel coding and targets the important application of scalable video transmission over wireless networks. The idea is to use a single channel code for both video compression (via Slepian-Wolf coding) and packet loss protection. First, we provide a theoretical code design framework for distributed joint source-channel coding over erasure channels and then apply it to the targeted video application. The resulting video coder is based on a cross-layer design where video compression and protection are performed jointly. We choose Raptor codes - the best approximation to a digital fountain - and address both encoder and decoder designs in detail. Using the received packets together with a correlated video available at the decoder as side information, we devise a new iterative soft-decision decoder for joint Raptor decoding. Simulation results show that, compared to one separate design using Slepian-Wolf compression plus erasure protection and another based on FGS coding plus erasure protection, the proposed joint design provides better video quality for the same number of transmitted packets. Our work is the first to capitalize on recent advances in distributed source coding and near-capacity channel coding for robust video transmission over erasure channels.

14.
The properties of cyclic codes are analyzed and a design for a cyclic Hamming code encoder and decoder is proposed. The encoder uses a division circuit and the decoder uses a Meggitt decoder, making the design easy to apply in engineering practice. The encoder/decoder was implemented on an FPGA; through parameterized configuration it achieves a high code rate and supports the (255,247) cyclic Hamming code and any of its shortened codes. Decoder simulation and test results are given and show high operating speed and low decoding delay: on a Virtex-5 device the maximum clock frequency exceeds 270 MHz. In system applications where the number of errors per code block is bounded, the design effectively lowers the bit error rate, typically by about one order of magnitude, and has proven to be of strong practical engineering value.
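Illustrative sketch (not from the cited paper): systematic cyclic encoding by polynomial division, i.e., the software analogue of the division circuit mentioned above. For brevity it uses the small (7,4) cyclic Hamming code with g(x) = x^3 + x + 1 rather than the (255,247) code of the paper.

```python
def cyclic_encode(msg_bits, gen, n, k):
    """Systematic cyclic encoding by polynomial division over GF(2).

    msg_bits : k message bits, most significant first
    gen      : generator polynomial coefficients, degree n-k, MSB first
    Returns the n-bit codeword: message followed by the division remainder (parity).
    """
    assert len(msg_bits) == k and len(gen) == n - k + 1
    # Multiply the message by x^(n-k), then divide by g(x).
    reg = list(msg_bits) + [0] * (n - k)
    for i in range(k):
        if reg[i]:
            for j, g in enumerate(gen):
                reg[i + j] ^= g
    parity = reg[k:]                      # remainder of the division
    return list(msg_bits) + parity

# (7,4) cyclic Hamming code, g(x) = x^3 + x + 1 -> [1, 0, 1, 1]
codeword = cyclic_encode([1, 0, 0, 1], gen=[1, 0, 1, 1], n=7, k=4)
print(codeword)   # [1, 0, 0, 1, 1, 1, 0]
```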

15.
Real-world source coding algorithms usually leave a certain amount of redundancy in the coded bit stream. Shannon (1948) already noted that this redundancy can be exploited at the receiver side to achieve higher robustness against channel errors. We show how joint source-channel decoding can be performed in a way that is applicable to any mobile communication system standard. Considerable gains in terms of bit error rate or signal-to-noise ratio (SNR) are possible, depending on the amount of redundancy. An even better performance can be achieved, however, by also changing the transmitter-side source and channel encoders. We propose an encoding concept employing low-dimensional quantization. Keeping the gross bit rate as well as the clean-channel quality the same, it significantly decreases the complexity of the source encoder and the decoder. Finally, we apply our methods to spectral coefficient coding in speech transmission over a Rayleigh fading channel, resulting in channel SNR gains of about 2 dB compared to state-of-the-art (de)coding and bad frame handling methods.

16.
Error control coding is a key element of any digital wireless communication system, minimizing the effects of noise and interference on the transmitted signal at the physical layer. In 3G mobile cellular wireless systems, error control coding must accommodate both voice and data users, whose requirements vary considerably in terms of latency, throughput, and the impact of errors on the user application. At the base station, dedicated hardware or readily reconfigurable components are needed to address the concurrent coding and decoding demands of a large number of users with different call parameters. In contrast, the encoder and decoder at the user equipment (UE) are dedicated to a single call setup which changes infrequently. In designing encoder and decoder solutions for 3G wireless systems, not only are the performance issues important, but also the costs. Cellular wireless infrastructure manufacturers need to reduce costs, maximize system reuse, and increase flexibility in order to compete in the market. Furthermore, future-proofing a network is a primary concern due to the high cost of deployment. For the UE, power consumption (battery life) and size are key constraints in addition to manufacturing costs. This article considers the 3G decoder design problem and, using case studies, describes two 3G decoder solutions using ASICs. The first device is targeted for base station deployment and is based on a unified architecture for convolutional and turbo decoding. The second device is a dedicated high-speed radix-4 logMAP turbo decoder targeted for UE, motivated by the requirements for high-speed downlink packet access. Both devices have been fabricated in 0.18 µm CMOS technology, and while optimized for either base station or UE, may be used in both applications.

17.
VHDL Implementation of Convolutional Encoders and Decoders in Communication Systems (Cited by 3: 1 self-citation, 2 by others)
韩学超  韩新春 《通信技术》2009,42(10):72-74
As an important coding scheme in communication systems, convolutional codes are widely used thanks to their good coding performance and practical decoding methods. Based on the working principles of convolutional encoders and decoders, VHDL designs of a (3,1,2) convolutional encoder and a (2,1,1) convolutional decoder are presented; waveform simulation was performed in the Quartus II environment, and the design was downloaded to an EPF10K10LC84-3 device for verification. The results confirm the correctness and soundness of the encoder/decoder.
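Illustrative sketch (not from the cited paper, which is a VHDL design): a bit-level model of a rate-1/3 convolutional encoder with two memory elements, i.e., a (3,1,2) code. The generator taps are assumed, since the abstract does not give them.

```python
def conv_encode(bits, gens=((1, 1, 1), (1, 1, 0), (1, 0, 1))):
    """Rate-1/3 (3,1,2) convolutional encoder sketch with a 2-bit shift register.

    `gens` are assumed generator taps over (current bit, state1, state2);
    the paper's actual polynomials are not given in the abstract.
    """
    s1 = s2 = 0
    out = []
    for b in bits:
        window = (b, s1, s2)
        for g in gens:
            out.append(sum(w & t for w, t in zip(window, g)) % 2)
        s2, s1 = s1, b                      # shift register update
    return out

print(conv_encode([1, 0, 1, 1]))            # 3 output bits per input bit
```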

18.
A class of nonlinear block codes has been discovered in which encoding and decoding both use the same logical Hadamard transform. The virtual identity of encoder and decoder is in marked contrast with conventional coding methods.
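Illustrative sketch (not from the cited paper): the fast Walsh-Hadamard transform that such codes use as the common building block of encoder and decoder. The nonlinear mapping wrapped around it is omitted.

```python
def fwht(v):
    """Fast Walsh-Hadamard transform of a sequence whose length is a power of two.

    In the codes described above, this same transform is shared by the encoder
    and the decoder; only the transform itself is shown here.
    """
    v = list(v)
    h = 1
    while h < len(v):
        for i in range(0, len(v), h * 2):
            for j in range(i, i + h):
                a, b = v[j], v[j + h]
                v[j], v[j + h] = a + b, a - b   # butterfly
        h *= 2
    return v

print(fwht([1, 0, 1, 0, 0, 1, 1, 0]))
```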

19.
In this paper, we consider the problem of lossy coding of correlated vector sources with uncoded side information available at the decoder. In particular, we consider lossy coding of a vector source x ∈ R^N which is correlated with a vector source y ∈ R^N known at the decoder. We propose two compression schemes, namely, distributed adaptive compression (DAC) and distributed universal compression (DUC). The DAC algorithm is inspired by the optimal solution for Gaussian sources and requires computation of the conditional Karhunen-Loève transform (CKLT) of the data at the encoder. The DUC algorithm, however, does not require knowledge of the CKLT at the encoder. The DUC algorithms are based on approximating the correlation between the sources y and x through a linear model y = Hx + n, in which H is a matrix and n is a random vector independent of x. This model can be viewed as a fictitious communication channel with input x and output y. Utilizing channel equalization at the receiver, we convert the original vector source coding problem into a set of manageable scalar source coding problems. Furthermore, inspired by bit loading strategies employed in wireless communication systems, we propose for both compression schemes a rate allocation policy which minimizes the decoding error rate under a total rate constraint. Equalization and bit loading are paired with a quantization scheme for each vector source entry (a slightly simplified version of the so-called DISCUS scheme). The merits of our work are as follows: 1) it provides a simple yet optimized implementation of Wyner-Ziv quantizers for correlated vector sources, using the insight gained in the design of communication systems; 2) it provides encoding schemes that, with or without knowledge of the correlation model at the encoder, enjoy distributed compression gains.
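Illustrative sketch (not from the cited paper): the linear correlation model y = Hx + n, decoder-side equalization, and a DISCUS-like scalar coset step, showing how the vector problem reduces to per-component scalar coding. The zero-forcing equalizer, all numerical values, and the coset parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4

# Assumed correlation model y = H x + n between source x and decoder side information y.
H = np.eye(N) + 0.1 * rng.standard_normal((N, N))   # kept well conditioned for the example
x = rng.standard_normal(N)
n = 0.02 * rng.standard_normal(N)
y = H @ x + n

# Decoder-side "channel equalization" (zero-forcing here for simplicity) turns the
# vector problem into N scalar side-information values.
x_side = np.linalg.solve(H, y)

# DISCUS-like scalar coset step: the encoder transmits only the coset index of each
# quantized component; the decoder picks the coset member nearest to its equalized
# side information.  Step size and coset count are illustrative choices.
step, n_cosets = 0.05, 8
coset_idx = np.round(x / step).astype(int) % n_cosets
k = np.round((x_side / step - coset_idx) / n_cosets)
x_hat = (k * n_cosets + coset_idx) * step
print(np.max(np.abs(x_hat - x)))    # at most step/2 when the side information is good enough
```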

20.
The key challenges in real-time voice communication over Long Term Evolution (LTE) mobile networks are the reduction of complexity and latency, which efficient encoding and decoding algorithms can address. This paper proposes the implementation of such efficient polar-code-based algorithms. An overall latency of 3.8 ms is needed to process a block length of 8 bits. A novel sub-matrix close to the identity matrix is presented; it minimizes loops among the least reliable bits arising from the iterated parity-check matrix. Look-up-table-based memory mapping is used in the encoder to reduce latency, while a Euclidean decoding technique is used in the decoder. The number of iterations is reduced by 50%. The experiments are performed with additive white Gaussian noise and QPSK modulation. The proposed modified iterative decoding algorithm requires an SNR of 5.5 dB and 192 computations for a target bit error rate of 10^-4. The second proposed method needs 9 dB and 2 iterations, for 384 computations. The penalty paid is a quantization error of 0.63%, caused by restricting computations to a fourth-order series of the hyperbolic function with the same 8-bit block length.
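Illustrative sketch (not from the cited paper): the basic butterfly structure of a polar encoder for a power-of-two block length. Frozen-bit selection and the paper's look-up-table and sub-matrix optimizations are omitted.

```python
def polar_encode(u):
    """Polar transform x = u * F^{(x)n} over GF(2), block length a power of two.

    Only the encoder butterfly is shown; frozen-bit placement is omitted.
    """
    x = list(u)
    n = len(x)
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]   # XOR butterfly of the kernel F = [[1, 0], [1, 1]]
        step *= 2
    return x

# 8-bit block length, as in the abstract.
print(polar_encode([1, 0, 1, 1, 0, 0, 1, 0]))
```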
