Similar Literature
1.
To address clutter suppression in airborne MIMO radar, a fast reduced-dimension STAP algorithm, F-JDL (Fast JDL), is proposed. The space-time data are first mapped accurately onto the angle-Doppler cells of the local processing region using an overcomplete basis set built around the observation angles of interest; the reduced-dimension clutter covariance matrix is then inverted by block partitioning, which further raises the processing speed. The method combines the strengths of JDL dimension reduction with those of fast matrix-inversion algorithms. Simulation results show that it exploits the large clutter degrees of freedom of MIMO radar while greatly reducing the computational load and the sample requirement, ensuring real-time operation with good clutter-suppression performance.
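The block-partitioned inversion step can be illustrated with a short NumPy sketch. This is a generic Schur-complement inversion of a Hermitian positive-definite reduced-dimension covariance matrix with an illustrative partition size; it is not the paper's F-JDL code.

```python
import numpy as np

def block_inverse(R, k):
    """Invert a Hermitian positive-definite matrix via 2x2 block partitioning.

    R is split as [[A, B], [B^H, C]] with A of size k x k; only A and the
    Schur complement S = C - B^H A^{-1} B are inverted directly.
    """
    A, B, C = R[:k, :k], R[:k, k:], R[k:, k:]
    Ainv = np.linalg.inv(A)
    S = C - B.conj().T @ Ainv @ B            # Schur complement of A
    Sinv = np.linalg.inv(S)
    AB = Ainv @ B
    top_left = Ainv + AB @ Sinv @ AB.conj().T
    top_right = -AB @ Sinv
    return np.block([[top_left, top_right],
                     [top_right.conj().T, Sinv]])

# quick consistency check against the direct inverse
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
R = X @ X.conj().T + 8 * np.eye(8)           # Hermitian positive definite
assert np.allclose(block_inverse(R, 3), np.linalg.inv(R))
```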

2.
This paper proposes a joint domain localized (JDL) algorithm in which the dimension-reducing transform matrix is built from space-time steering vectors. The transform matrix is obtained as the direct (Kronecker) product of a few spatial and temporal steering vectors selected for the local region, and the steering vectors may be chosen at arbitrary spacing. This form of JDL dimension reduction and localized processing is simple to implement; unlike DFT-based JDL it imposes no restrictive conditions, and it improves performance by about 4 dB. The algorithm applies to both two-dimensional and three-dimensional space-time adaptive processing, and simulations confirm its feasibility and effectiveness.
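A minimal NumPy sketch of the kind of transform described here: the reduction matrix is assembled from Kronecker products of a few spatial and temporal steering vectors chosen at arbitrary spacing. The array geometry (half-wavelength ULA), the ordering of the Kronecker product, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def spatial_steer(n_el, sin_theta, d=0.5):
    """Spatial steering vector of an n_el-element ULA, spacing d in wavelengths."""
    return np.exp(2j * np.pi * d * np.arange(n_el) * sin_theta)

def temporal_steer(n_pulse, fd):
    """Temporal steering vector at normalized Doppler fd (cycles per pulse)."""
    return np.exp(2j * np.pi * np.arange(n_pulse) * fd)

def jdl_transform(n_el, n_pulse, sin_thetas, fds):
    """Reduction matrix whose columns are the space-time steering vectors
    b(fd) kron a(theta) at the selected, arbitrarily spaced angle/Doppler points."""
    cols = [np.kron(temporal_steer(n_pulse, fd), spatial_steer(n_el, st))
            for fd in fds for st in sin_thetas]
    return np.stack(cols, axis=1)

# reduce one space-time snapshot to a 3x3 local angle-Doppler region
n_el, n_pulse = 8, 16
T = jdl_transform(n_el, n_pulse,
                  sin_thetas=[-0.05, 0.0, 0.05],   # arbitrary angular spacing
                  fds=[0.20, 0.25, 0.30])          # arbitrary Doppler spacing
rng = np.random.default_rng(1)
x = rng.standard_normal(n_el * n_pulse) + 1j * rng.standard_normal(n_el * n_pulse)
x_local = T.conj().T @ x                           # 9-dimensional localized data
print(x_local.shape)
```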

3.
The principle of beam-Doppler domain reduced-dimension space-time adaptive processing is first analyzed, and two typical beam-Doppler reduction algorithms, the joint domain localized method (JDL) and the generalized multiple adjacent beams method (GMB), are studied in depth. Combining the strengths of both, a robust reduced-dimension STAP algorithm is proposed. By adding a suitable number of auxiliary beams, at the cost of a modest increase in computation, it overcomes the sensitivity of typical reduced-dimension algorithms to element amplitude and phase errors, performs well in the mainlobe clutter region, and is suited to non-uniform...

4.
Fast and accurate detection of sea-surface targets by shipborne high-frequency surface-wave radar requires effective suppression of non-homogeneously distributed sea clutter. A non-homogeneous sea-clutter suppression algorithm is therefore proposed that combines single-snapshot multiple signal classification (MUSIC) with an improved unscented-transform joint domain localized processing (UTJDL). First, the single-snapshot MUSIC algorithm converts the array data into the angle-Doppler domain for joint domain localized (JDL) processing, which reduces the computational load and improves target resolution. Next, an improved unscented transform (UT) based on singular value decomposition (SVD) preprocesses the JDL data of each range cell to obtain more homogeneous samples and to estimate each range cell's covariance matrix. Finally, the covariance matrix of the cell under test is estimated from the correlation coefficients between the other range cells and the cell under test, which enables fast and effective suppression of the sea clutter. Simulation results show that the proposed algorithm effectively raises the signal-to-clutter-plus-noise ratio and supports fast, accurate target detection.

5.
陈建文  王永良  陈辉 《电子学报》2001,29(Z1):1932-1935
This paper studies several dimension-reduction schemes for beam-Doppler domain partially adaptive processing in airborne phased-array radar, chiefly the auxiliary channel receiver (ACR), joint domain localized processing (JDL), and the space-time multiple-beam method (STMB), and discusses the soundness of several candidate beam-position selection schemes in terms of improvement factor, adaptive weights, and the space-time two-dimensional frequency response. Theoretical analysis and computer simulation show that the space-time multiple-beam method combines low system degrees of freedom, excellent performance, and strong robustness to errors, making it a preferable beam-selection scheme for airborne radar in non-homogeneous clutter environments.

6.
The bistatic geometry of a high-frequency radar with sky-wave transmission and ground-wave reception gives its first-order sea clutter a space-time coupling, but under the combined effect of the bistatic angle and ionospheric disturbances the first-order clutter exhibits pronounced Doppler spreading that can mask slow ship targets. Starting from this space-time coupling, the space-time distribution of the clutter is derived theoretically and verified, and beam correlation is used to choose the local region size of the reduced-dimension space-time adaptive processing algorithm, which effectively suppresses the spread sea clutter and recovers the masked targets.

7.
When an airborne early-warning radar searches for moving targets against an ocean background, the high speed of the radar platform severely broadens the Doppler spectrum of the sea clutter and degrades detection performance. Space-time adaptive processing is an effective clutter-suppression technique for this problem, exploiting the two-dimensional space-time coupling of the clutter. Compared with land clutter, however, the complex internal motion of the sea surface broadens the clutter's space-time spectrum, so the clutter Doppler frequency and the spatial cone angle no longer correspond one to one, which weakens the suppression. Addressing this motion characteristic of sea clutter, this paper proposes a robust clutter-suppression algorithm based on subspace projection; robustness is obtained through adaptive widening of the filter notch and a sliding-window-filter-then-adapt processing scheme. The effectiveness of the algorithm is verified on both simulated and measured sea-clutter data.
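The core subspace-projection idea can be sketched as an eigen-projection onto the complement of the dominant clutter subspace. The paper's notch-widening and sliding-window refinements are not reproduced here; the clutter rank and the toy data are illustrative assumptions.

```python
import numpy as np

def clutter_projector(R, clutter_rank):
    """Projector onto the orthogonal complement of the dominant clutter subspace.

    R is an estimated space-time covariance matrix; its clutter_rank principal
    eigenvectors are taken as the clutter subspace to be projected out.
    """
    eigvals, eigvecs = np.linalg.eigh(R)                 # ascending eigenvalues
    clutter_basis = eigvecs[:, np.argsort(eigvals)[::-1][:clutter_rank]]
    return np.eye(R.shape[0]) - clutter_basis @ clutter_basis.conj().T

# toy demo: strong clutter confined to a 2-D subspace plus weak noise
rng = np.random.default_rng(0)
dim, snapshots = 16, 200
clutter_dirs = rng.standard_normal((dim, 2)) + 1j * rng.standard_normal((dim, 2))
data = (clutter_dirs @ (10 * rng.standard_normal((2, snapshots)))
        + 0.1 * rng.standard_normal((dim, snapshots)))
R_hat = data @ data.conj().T / snapshots
P = clutter_projector(R_hat, clutter_rank=2)
print(np.linalg.norm(P @ data) / np.linalg.norm(data))   # most clutter energy removed
```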

8.
Research on efficient data-domain STAP algorithms based on reduced-dimension sparse reconstruction
Based on sparse signal reconstruction, this paper studies an efficient space-time adaptive processing (STAP) scheme that performs moving-target detection directly on the sample under test. The scheme applies spatial sparse reconstruction to temporally reduced element-Doppler data to estimate a high-resolution angle-Doppler spectrum, and a knowledge-aided moving-target detection algorithm is then developed from the resulting sparse space-time spectrum. Theoretical analysis and simulation results show that the algorithm suppresses clutter effectively, detects slowly moving targets, and has a computational load small enough for real-time parallel processing.
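One way to picture the spatial sparse-reconstruction step is the following sketch: the pulse dimension is first collapsed to Doppler bins by an FFT (the temporal dimension reduction), and each bin is then decomposed over a dense grid of spatial steering vectors with a few orthogonal-matching-pursuit iterations. The grid size, sparsity level, and the choice of OMP are illustrative assumptions; the paper's specific reconstruction method and knowledge-aided detector are not reproduced.

```python
import numpy as np

def sparse_angle_doppler_spectrum(X, n_grid=181, n_atoms=3):
    """Sparse angle spectrum per Doppler bin from element-pulse data X (N x K)."""
    n_el, n_pulse = X.shape
    Xd = np.fft.fft(X, axis=1) / np.sqrt(n_pulse)        # element-Doppler data
    sines = np.linspace(-1.0, 1.0, n_grid)
    A = np.exp(2j * np.pi * 0.5 * np.arange(n_el)[:, None] * sines[None, :])
    A /= np.linalg.norm(A, axis=0)                       # unit-norm steering atoms
    spectrum = np.zeros((n_grid, n_pulse))
    for k in range(n_pulse):                             # OMP per Doppler bin
        r, support = Xd[:, k].copy(), []
        for _ in range(n_atoms):
            support.append(int(np.argmax(np.abs(A.conj().T @ r))))
            sol, *_ = np.linalg.lstsq(A[:, support], Xd[:, k], rcond=None)
            r = Xd[:, k] - A[:, support] @ sol
        spectrum[support, k] = np.abs(sol)
    return spectrum                                      # high-resolution angle-Doppler map
```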

9.
Compared with conventional processing, time-frequency analysis for ISAR imaging turns the two-dimensional range-Doppler matrix into a three-dimensional time-range-Doppler matrix and improves image quality to some extent, but the gain is not pronounced for targets in non-stationary motion. Here the Hilbert-Huang transform is applied to image ISAR targets in non-stationary motion, and the results are compared with images obtained from the classical Wigner-Ville distribution. Imaging experiments show that the algorithm clearly improves image quality.

10.
Unlike the discrete Fourier transform (DFT), the warped DFT (WDFT) obtains nonuniformly spaced frequency samples based on an all-pass warping. The WDFT finds applications in diverse fields, the most notable being audio processing. An explicit structure for the realization of the WDFT and its generalized form, the overcomplete WDFT, is proposed in this work, leading to savings in the computational requirements of both the WDFT and the inverse WDFT (IWDFT). The structure exploits the symmetry of the Q matrix to cut the operations involved at that stage roughly in half. Further, computing the IWDFT is known to be problematic because the transform matrix is ill-conditioned. In this work, an iterative scheme is proposed to compute this inverse. While the computational error of the iterative inverse is shown to be comparable to that of the best existing scheme based on the overcomplete WDFT, the iterative inverse does not need the additional transform coefficients of the overcomplete WDFT.
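As a hedged illustration of computing an ill-conditioned inverse iteratively (the paper's own IWDFT scheme is not reproduced here), a generic Newton-Schulz iteration can be sketched as follows; the test matrix and iteration count are arbitrary choices.

```python
import numpy as np

def newton_schulz_inverse(A, iters=60):
    """Approximate A^{-1} iteratively, without a direct factorization.

    Newton-Schulz iteration X <- X (2I - A X); the scaled initializer
    X0 = A^H / (||A||_1 * ||A||_inf) guarantees convergence for nonsingular A.
    """
    n = A.shape[0]
    X = A.conj().T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n, dtype=A.dtype)
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X

# mildly ill-conditioned test matrix (condition number about 1e3)
rng = np.random.default_rng(0)
n = 64
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.logspace(0.0, -3.0, n)) @ V.T
X = newton_schulz_inverse(A)
print(np.linalg.norm(A @ X - np.eye(n)))     # the residual norm should be small
```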

11.
Sparse decomposition over a dictionary of combined orthonormal bases builds a complete dictionary by concatenating orthonormal bases. To address the high computational complexity of common sparse-decomposition algorithms, a fast matching pursuit algorithm is proposed. The algorithm first computes and stores the inner products between the basis vectors; then, using the property that a vector's expansion coefficients over an orthonormal basis are its inner products with the basis vectors, it replaces the inner-product computations with algebraic updates, yielding a fast matching pursuit algorithm. Experiments show that, when the test signal leleccum is sparsely decomposed over a complete dictionary formed from the Dirac and DCT bases, the proposed algorithm runs roughly 10 times faster than standard matching pursuit (MP).
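A hedged sketch of plain matching pursuit over a two-orthonormal-basis dictionary (Dirac plus an orthonormal DCT), with the atom inner products precomputed so that residual correlations are updated algebraically rather than recomputed from the residual. The parameter values and the synthetic test signal are illustrative; the MATLAB test signal leleccum used in the paper is not reproduced here.

```python
import numpy as np

def mp_two_ortho_bases(x, n_iter=20):
    """Matching pursuit over the dictionary [I | C] (Dirac basis + orthonormal DCT).

    The Gram matrix of the dictionary is precomputed once; after each atom
    selection, the correlations with every atom are updated algebraically.
    """
    n = x.size
    idx = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * np.outer(idx + 0.5, idx) / n)
    C[:, 0] /= np.sqrt(2.0)                     # orthonormal DCT-II basis (columns)
    D = np.hstack([np.eye(n), C])               # combined dictionary, 2n unit-norm atoms
    G = D.T @ D                                 # precomputed atom inner products
    corr = D.T @ x                              # initial correlations <x, atom>
    coef = np.zeros(2 * n)
    for _ in range(n_iter):
        j = np.argmax(np.abs(corr))
        c = corr[j]
        coef[j] += c
        corr -= c * G[:, j]                     # algebraic correlation update
    return coef, D

# a signal that is sparse in the joint Dirac/DCT dictionary
n = 256
t = np.arange(n)
x = np.cos(2 * np.pi * 5 * (t + 0.5) / n)       # smooth part (DCT-sparse)
x[40] += 3.0                                    # spike (Dirac-sparse)
coef, D = mp_two_ortho_bases(x, n_iter=10)
print(np.linalg.norm(x - D @ coef))             # small reconstruction error
```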

12.
The use of sparse representations in signal and image processing has been increasing gradually over the past several years. Obtaining an overcomplete dictionary from a set of signals allows us to represent them as sparse linear combinations of dictionary atoms, with pursuit algorithms used for the decomposition. A recent work introduced the K-SVD algorithm, a novel method for training overcomplete dictionaries that lead to sparse signal representations. In this work we propose a new method for compressing facial images based on the K-SVD algorithm. We train K-SVD dictionaries for predefined image patches and compress each new image according to these dictionaries. The encoding is based on sparse coding of each image patch using the relevant trained dictionary, and the decoding is a simple reconstruction of the patches by linear combination of atoms. An essential pre-processing stage for this method is an image-alignment procedure, in which several facial features are detected and geometrically warped into a canonical spatial location. We present this new method, analyze its results, and compare it to several competing compression techniques.

13.
Overcomplete representations are attracting interest in signal processing theory, particularly due to their potential to generate sparse representations of signals. However, in general, the problem of finding sparse representations must be unstable in the presence of noise. This paper establishes the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system. Considering an ideal underlying signal that has a sufficiently sparse representation, it is assumed that only a noisy version of it can be observed. Assuming further that the overcomplete system is incoherent, it is shown that the optimally sparse approximation to the noisy data differs from the optimally sparse decomposition of the ideal noiseless signal by at most a constant multiple of the noise level. As this optimal-sparsity method requires heavy (combinatorial) computation, approximation algorithms are considered. It is shown that similar stability is also available using the basis pursuit and matching pursuit algorithms. Furthermore, these methods yield sparse approximations of the noisy data that contain only terms also appearing in the unique sparsest representation of the ideal noiseless signal.

14.
A sparse-representation algorithm for speech signals based on an adaptive redundant dictionary
Sparse representation of signals over redundant dictionaries is a new signal-representation framework, and current research centers on two problems: dictionary construction and sparse decomposition. This paper proposes a new sparse-representation algorithm for speech signals based on an adaptive redundant dictionary. For stationary signals whose autocorrelation decays exponentially, the algorithm starts from the Karhunen-Loeve expansion to build a redundant dictionary matched to the signal structure, and an efficient sparse-representation algorithm based on nonlinear approximation is then developed. Experimental results show that the adaptivity and algebraic structure of the dictionary atoms give sparse representations of short-time stationary speech with high sparsity and good reconstruction accuracy, and make the algorithm well suited to compressed sensing of speech.
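The starting point described here, the Karhunen-Loeve expansion of a stationary frame with exponentially decaying autocorrelation, can be sketched as an eigen-decomposition of the corresponding Toeplitz covariance matrix. The frame length and decay factor are illustrative assumptions, and the paper's full adaptive dictionary and nonlinear-approximation algorithm are not reproduced.

```python
import numpy as np

# Karhunen-Loeve atoms for a stationary frame whose autocorrelation decays
# exponentially, r[k] = rho**|k|: they are the eigenvectors of the Toeplitz
# covariance matrix, ordered by decreasing eigenvalue (energy).
frame_len, rho = 64, 0.95                      # illustrative values
lags = np.abs(np.subtract.outer(np.arange(frame_len), np.arange(frame_len)))
R = rho ** lags                                # Toeplitz covariance matrix
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
kl_atoms = eigvecs[:, order]                   # candidate dictionary atoms
print(eigvals[order][:5])                      # energy concentrates in a few atoms
```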

15.
Overcomplete sparse-representation methods for two classes of signals with mixed features
孙蒙  王正明 《电子学报》2007,35(7):1327-1332
This paper proposes two overcomplete sparse-representation methods, one based on a jump dictionary and one based on adaptive segmentation of the signal's domain, used respectively to reconstruct signals with mixed periodic and square-wave features and signals with mixed periodic and impulsive features. Examples show that, on the corresponding signals, both methods achieve better approximation error and sparsity than directly applying basis pursuit or wavelet approximation.

16.
We present a nonlinear unmixing approach for extracting the ballistocardiogram (BCG) from EEG recorded in an MR scanner during simultaneous acquisition of functional MRI (fMRI). First, an overcomplete basis is identified in the EEG based on a custom multipath EEG electrode cap. Next, the overcomplete basis is used to infer non-Kirchhoffian latent variables that are not consistent with a conservative electric field. Neural activity is strictly Kirchhoffian while the BCG artifact is not, and the representation can hence be used to remove the artifacts from the data in a way that does not attenuate the neural signals needed for optimal single-trial classification performance. We compare our method to more standard methods for BCG removal, namely independent component analysis and optimal basis sets, by looking at single-trial classification performance for an auditory oddball experiment. We show that our overcomplete representation method for removing BCG artifacts results in better single-trial classification performance compared to the conventional approaches, indicating that the derived neural activity in this representation retains the complex information in the trial-to-trial variability.

17.
毕峰  邱天爽  余南南 《信号处理》2013,29(3):405-409
Extracting evoked potentials from only a few trials is important both for studying brain activity and for clinical diagnosis. Exploiting the differing characteristics of evoked potentials and spontaneous EEG, this paper proposes a few-trial evoked-potential extraction method based on morphological component analysis, in which the evoked potential and the spontaneous EEG are sparsely represented over different overcomplete dictionaries. To reduce erroneous decomposition during the sparse representation, the average of a few noisy observations is used as a template signal, the K-SVD algorithm is used to train suitable overcomplete dictionaries, and the current observation is then given a mixed sparse representation. Experimental results show that the method effectively reduces the misassignment that occurs when generic overcomplete dictionaries are used and extracts the evoked-potential signal well.

18.
陈柘  陈海 《国外电子元器件》2014,(2):168-170,173
An image-denoising method based on sparse decomposition over a mixed dictionary is proposed. The mixed dictionary is built from wavelet-packet functions and discrete cosine functions; the matching pursuit algorithm sparsely decomposes the image, extracting the sparse components of the noisy image, and the image is then reconstructed from these sparse components to remove the noise. Experiments comparing the method with single-dictionary sparse-decomposition denoising show that the proposed mixed-dictionary method effectively extracts the sparse structure of the image and improves both the subjective and objective quality of the reconstructed image.

19.
The dyadic wavelet transform is an effective tool for processing piecewise smooth signals; however, its poor frequency resolution (its low Q-factor) limits its effectiveness for processing oscillatory signals such as speech, EEG, and vibration measurements. This paper develops a more flexible family of wavelet transforms for which the frequency resolution can be varied. The new wavelet transform can attain higher Q-factors (desirable for processing oscillatory signals) or the same low Q-factor as the dyadic wavelet transform. The new wavelet transform is modestly overcomplete and based on rational dilations. Like the dyadic wavelet transform, it is an easily invertible 'constant-Q' discrete transform implemented using iterated filter banks, and it can likewise be associated with a wavelet frame for L2(R). The wavelet can be made to resemble a Gabor function and can hence have good concentration in the time-frequency plane. The construction of the new wavelet transform depends on the judicious use of both the transform's redundancy and the flexibility allowed by frequency-domain filter design.
