Similar Documents
19 similar documents found (search time: 125 ms)
1.
Infrared image edge extraction based on fractional differentiation and independent component analysis   Total citations: 1 (self: 0, others: 1)
This paper explores the application of fractional-order differential enhancement and independent component analysis (ICA) to edge extraction in infrared images. First, the infrared image is enhanced using a fractional-order differential mask for digital images and its associated operation rules. Next, the basis functions required by ICA are obtained by iteratively training on natural images with the information-maximization algorithm. Finally, edges are extracted from the enhanced infrared image using the ICA algorithm. Experimental results show that fractional differentiation markedly enhances edge features in the smooth regions of infrared images where gray-level variation is small, while ICA-based edge extraction recovers infrared edge features well even in the presence of noise.
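The fractional-order mask mentioned above is typically built from Grünwald-Letnikov coefficients; a minimal 1-D sketch (the order v, the number of taps, and the causal convolution are illustrative choices, not the paper's exact mask):

```python
import numpy as np

def gl_coefficients(v, n):
    """First n Grunwald-Letnikov coefficients c_k = (-1)^k * C(v, k),
    computed with the standard recurrence c_k = c_{k-1} * (k-1-v) / k."""
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (k - 1 - v) / k
    return c

def fractional_enhance_1d(signal, v=0.5, n_taps=3):
    """Apply the truncated fractional-differential mask along one axis."""
    c = gl_coefficients(v, n_taps)
    out = np.zeros(len(signal))
    for k in range(n_taps):
        # causal convolution: out[t] = sum_k c_k * signal[t - k]
        out[k:] += c[k] * signal[:len(signal) - k]
    return out
```

On a 2-D image the same mask would be applied along each direction; unlike an integer-order derivative, the fractional mask keeps a nonzero response in slowly varying regions, which is what yields the enhancement in smooth areas.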

2.
Based on a study of independent component analysis (ICA) theory and of the hand-gesture features used in human-computer interaction, a method for static gesture feature extraction and recognition based on ICA is proposed. ICA is used to extract the independent component features (ICFs) of each class of static gesture images, forming a space of independent basis functions. Each gesture image is then represented by its least-squares expansion over the independent components, and classification is achieved by combining this representation with the system's decision threshold. The method was evaluated on four gesture classes totaling 80 images. Experimental results show that the approach is not only feasible but also achieves satisfactory recognition results.
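The least-squares representation over per-class bases can be sketched as follows (the residual-threshold rejection rule and the toy bases are assumptions for illustration, not the paper's exact classifier):

```python
import numpy as np

def ls_coefficients(basis, x):
    """Least-squares coefficients of x over the columns of `basis`."""
    coef, *_ = np.linalg.lstsq(basis, x, rcond=None)
    return coef

def classify(bases, x, threshold=0.1):
    """Assign x to the class whose basis reconstructs it with the
    smallest residual; reject (return -1) if every residual exceeds
    the decision threshold."""
    residuals = [np.linalg.norm(x - B @ ls_coefficients(B, x)) for B in bases]
    best = int(np.argmin(residuals))
    return best if residuals[best] <= threshold else -1
```

In the paper's setting each basis would hold the ICFs learned from one gesture class, and `x` would be a vectorized gesture image.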

3.
Research on automatic face recognition based on independent component analysis   Total citations: 4 (self: 0, others: 4)
An automatic face recognition method combining independent component analysis (ICA) and a genetic algorithm (GA) is proposed. The independent basis of the face images is obtained with an ICA algorithm based on fourth-order statistics. To reduce computational complexity, the original images are filtered and reduced in dimension, and a genetic algorithm searches the set of independent bases produced by ICA for an optimal subset. Finally, a suitable classifier makes the classification decision from the projection coefficients of the image to be recognized on the independent bases. Experiments on a face image database show that the method achieves a higher recognition rate than the PCA-based eigenface method, at a lower computational cost than traditional ICA-based face recognition.

5.
To address incomplete detection targets and blurred edges in light-field image saliency detection, this paper proposes an edge-guided saliency detection method for light-field images. An edge-enhancement network extracts a body map and an edge-enhanced map from the all-in-focus image; features from the body map and from the focal-stack images are combined to obtain an initial saliency map, improving the accuracy and completeness of the detection results. The initial saliency map and the edge-enhanced map are then passed through a feature-fusion module to further learn edge information and highlight edge details. Finally, a boundary-hybrid loss function is used for optimization to obtain a saliency map with sharper boundaries. Experimental results show that on the latest light-field image datasets the proposed network achieves an F-measure of 0.88 and an MAE of 0.046, outperforming existing saliency detection algorithms for RGB, RGB-D, and light-field images. The method detects complete salient objects in complex scenes more accurately and yields saliency maps with clear edges.

6.
An improved adaptive-threshold wavelet denoising algorithm based on the LoG operator   Total citations: 4 (self: 2, others: 2)
Images are corrupted by various kinds of noise during transmission. To remove this noise, an improved adaptive-threshold denoising algorithm based on the LoG operator is proposed. First, the LoG operator extracts the edge-feature information of the image. Improved threshold functions are then derived separately for the edge and non-edge parts of the image: for the non-edge part, a threshold correction coefficient is added to the soft-threshold function to build a new threshold function; for the edge part, a new threshold function is built by combining the threshold with the energy of the wavelet coefficients near the edges. Finally, the improved threshold functions are applied separately to the R, G, and B channels, preserving all image detail. Experimental results show that the PSNR of the denoised image relative to the noisy image is 12.09% higher than with the traditional adaptive algorithm, and the MAE is 22% lower. The algorithm effectively preserves edge information and clearly improves overall denoising performance.
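The non-edge branch of the scheme above builds on classical soft thresholding; a minimal sketch (the boolean edge mask and the reduction factor `alpha` are illustrative stand-ins for the paper's corrected threshold functions):

```python
import numpy as np

def soft_threshold(w, t):
    """Classical soft-thresholding of wavelet coefficients."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def edge_aware_threshold(coeffs, edge_mask, t, alpha=0.5):
    """Shrink non-edge coefficients with the full threshold t and
    edge coefficients with a reduced threshold alpha * t, so that
    edge detail survives the denoising step."""
    return np.where(edge_mask,
                    soft_threshold(coeffs, alpha * t),
                    soft_threshold(coeffs, t))
```

An edge mask from the LoG operator would be downsampled to each wavelet subband before applying `edge_aware_threshold` per channel.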

7.
The strengths and weaknesses of anisotropic diffusion denoising models are analyzed. Because the Perona-Malik (PM) model cannot effectively distinguish noise from edges, a kernel-based anisotropic diffusion denoising model is proposed. In this model, the nonlinear relationship that separates noise from edges in a low-dimensional space is converted into a linear relationship in a high-dimensional feature space, and a kernel function is used to obtain the diffusion function in that space. Experiments comparing the model with the PM and Catte models show that the kernel-based diffusion model removes noise while better preserving image information, achieves the highest peak signal-to-noise ratio, and offers the best denoising performance.
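The baseline PM model that the kernel variant improves on can be sketched in a few lines (periodic borders and the conductance g(s) = 1 / (1 + (s/kappa)^2) are common textbook choices, not the paper's kernel-based diffusion function):

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, lam=0.2):
    """Classical Perona-Malik anisotropic diffusion: diffuse strongly
    where gradients are small (noise) and weakly across large
    gradients (edges). lam <= 0.25 keeps the scheme stable."""
    u = img.astype(float).copy()
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)   # conductance
    for _ in range(n_iter):
        # finite differences to the four neighbours (periodic borders)
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

The kernel model of the paper replaces `g` with a diffusion function derived in a high-dimensional feature space, but the update loop has the same shape.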

8.
A new method for extracting the micro-motion features of rotor blades   Total citations: 1 (self: 0, others: 1)
Accurate extraction of a target's micro-motion features from the micro-Doppler effect induced by the rotational motion of rotor targets can provide an important basis for precise target recognition. Taking rotor blades as an example, this paper proposes a fast micro-motion feature extraction method based on the autocorrelation function and the image domain. First, exploiting the periodicity of the target echo, the rotation frequency is extracted quickly from the relationship between the peak positions of the echo's autocorrelation function and the target's rotation frequency. The edge information of the target's micro-Doppler signature is then extracted in the image domain, from which the positions and initial phases of the blade scattering centers are obtained. Simulation results verify the effectiveness of the method.
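The first step, reading the rotation period off the autocorrelation peak, can be sketched for a 1-D echo (the peak-search heuristic below is an illustrative simplification):

```python
import numpy as np

def rotation_period(signal):
    """Estimate the echo period (in samples) as the lag of the largest
    autocorrelation peak away from zero lag."""
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..N-1
    # skip the zero-lag peak: start searching where ac first rises again
    first_min = int(np.argmax(np.diff(ac) > 0))
    return first_min + int(np.argmax(ac[first_min:]))
```

The rotation frequency is then the pulse-repetition frequency divided by the estimated period.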

9.
Multi-scale remote sensing images contain a large number of non-essential features, which not only introduce considerable noise but also make segmentation more difficult. To fully preserve the edge features of segmented multi-scale remote sensing images, a new segmentation algorithm based on the U-net convolutional neural network is proposed. With U-net as the backbone, features of the image to be segmented are extracted to obtain its detail information. The Euclidean distance between the feature vectors of neighboring pixels and the original pixel is computed and, combined with a denoising algorithm and normalized parameters, a similarity function is built to enhance the segmentation features of the multi-scale image. Candidate bounding-box deviations are computed, locally optimal merge-region pairs are determined from the U-net structure, inter-region distances are measured, and a globally optimal region-merging method updates the segmentation time complexity, yielding the overall segmentation of the multi-scale remote sensing image. Experimental results show that the algorithm accurately locates the specified buildings and preserves their complete edge details.

10.
Research on decoding color pseudo-random coded structured light   Total citations: 2 (self: 1, others: 1)
Building on projection patterns of color stripes generated from pseudo-random sequences, a new three-step decoding method for color pseudo-random coded structured light is proposed. First, a vector gradient operator extends the Canny operator to multi-channel color images; color edge detection is performed on the color striped image to determine the positions of the edge features. Second, a guided K-means clustering algorithm, together with color invariants that discriminate colors well, is used to identify and classify colors, yielding the color information of the edge features. Finally, a stepwise window-matching algorithm matches the edge-feature sequences, establishing the correspondence between edge features in the captured and projected images. Experimental results show that the proposed decoding method effectively improves decoding accuracy and is robust, without assuming anything about the surface color or texture of the measured object.
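The guided K-means color-classification step might look like this in outline (plain RGB K-means with user-supplied initial centers; the paper's color invariants are omitted):

```python
import numpy as np

def kmeans_colors(pixels, centers, n_iter=20):
    """Plain K-means on RGB pixel vectors. `centers` is the guided
    initialization (shape (k, 3)); returns labels and final centers."""
    centers = np.asarray(centers, dtype=float)
    for _ in range(n_iter):
        # assign each pixel to its nearest center
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute the centers (empty clusters keep their position)
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers
```

Seeding the centers from the known stripe palette is what makes the clustering "guided": convergence is fast and the label indices directly name the projected colors.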

11.
In this paper, a robust edge detection method based on independent component analysis (ICA) is proposed. It is known that most ICA basis functions extracted from images are sparse and similar to localized, oriented receptive fields. Here, the Lp norm is used to estimate the sparseness of the ICA basis functions, and the sparser basis functions are then selected to represent the edge information of an image. In the proposed method, a test image is first transformed by the ICA basis functions, and the high-frequency information is extracted from the components of the selected sparse basis functions. Furthermore, by applying a shrinkage algorithm to filter out the noise components in the ICA domain, the sparse components of the noise-free image are readily obtained, resulting in robust edge detection even for noisy images with a very low signal-to-noise ratio. The efficiency of the proposed method is demonstrated by experiments on medical images.
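The Lp sparseness measure and the shrinkage step can be sketched as follows (p = 0.5 and plain soft shrinkage are illustrative choices):

```python
import numpy as np

def lp_sparseness(basis, p=0.5):
    """Lp quasi-norm (p < 1) per unit L2 energy: a smaller value
    means a sparser basis function."""
    b = basis / np.linalg.norm(basis)
    return np.sum(np.abs(b) ** p)

def shrink(coeffs, sigma):
    """Soft shrinkage of ICA-domain components to suppress noise."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - sigma, 0.0)
```

Selecting the basis functions with the lowest `lp_sparseness` keeps the localized, edge-like components; shrinking the corresponding coefficients suppresses noise before the edge map is reconstructed.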

12.
Zong Xin, Xie Hong, Dong Yaohua. Information Technology, 2007, 31(8): 8-10, 83
When separating the original image signals from several mixed images, ordinary independent component analysis methods fail to recover the correct originals if the original image signals are not statistically independent. To address this shortcoming, an independent component analysis method based on image edge information is proposed, exploiting the characteristics of image signals. Experiments show that this method improves the separation of such images to a certain extent and effectively suppresses the influence of Gaussian white noise.

13.
An unsupervised classification algorithm is derived by modeling observed data as a mixture of several mutually exclusive classes, each described by linear combinations of independent, non-Gaussian densities. The algorithm estimates the data density in each class using parametric nonlinear functions fitted to the non-Gaussian structure of the data. This improves classification accuracy compared with standard Gaussian mixture models. When applied to images, the algorithm can learn efficient codes (basis functions) that capture the statistically significant structure intrinsic to the images. We apply this technique to unsupervised classification, segmentation, and denoising of images. We demonstrate that the method is effective in classifying complex image textures such as natural scenes and text, and is also useful for denoising and filling in missing pixels in images with complex structures. The advantage of this model is that image codes can be learned with increasing numbers of classes, providing greater flexibility in modeling structure and in finding more image features than either Gaussian mixture models or standard independent component analysis (ICA) algorithms.

14.
Comparison of ICA approaches for facial expression recognition   Total citations: 1 (self: 0, others: 1)
Independent component analysis (ICA) and Gabor wavelets have been shown to extract highly discriminating features for facial action unit classification when combined with either a cosine similarity measure (CSM) classifier or support vector machines (SVMs). So far, only the ICA approach based on the InfoMax principle has been tested for facial expression recognition. In this paper, in addition to the InfoMax approach, five further ICA approaches are used to extract features from two facial expression databases. In particular, the Extended InfoMax ICA, undercomplete ICA, and nonlinear kernel-ICA approaches are exploited for facial expression representation for the first time. When applied to images, ICA treats the images as mixtures of independent sources and decomposes them into an independent basis and the corresponding mixture coefficients. Two architectures can be employed for representing the images, yielding either independent and sparse basis images or independent and sparse distributions of image representation coefficients. After feature extraction, facial expression classification is performed with either a CSM or an SVM classifier. A detailed comparative study is made of the accuracy offered by each classifier, and the correlation of accuracy with the mutual information of the independent components and with the kurtosis is evaluated; statistically significant correlations between these quantities are identified. Several issues are addressed: (i) whether features with super- and sub-Gaussian distributions facilitate facial expression classification; (ii) whether a nonlinear mixture of independent sources improves classification accuracy; and (iii) whether an increased "amount" of sparseness yields more accurate facial expression recognition. In addition, performance enhancements from leave-one-set-of-expressions-out evaluation and subspace selection are studied. Statistically significant differences in accuracy between classifiers using several feature extraction methods are also indicated.
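The kurtosis used above to separate super- from sub-Gaussian features is the fourth standardized moment minus 3; a minimal sketch:

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis: positive for super-Gaussian sources
    (e.g. Laplacian, excess ~3), negative for sub-Gaussian sources
    (e.g. uniform, excess ~-1.2), zero for a Gaussian."""
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2 - 3.0
```

In the comparison above, this statistic is computed per independent component and correlated with the recognition accuracy of each ICA variant.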

15.
Due to their ability to succinctly capture features at different scales and directions, wavelet-based decomposition and representation methods have found wide use in image analysis, restoration, and compression. While there has been a drive to increase the representation ability of these methods via directional filters or elongated basis functions, they have remained focused on essentially piecewise linear representation of curves in images. We propose to extend the line-based dictionary of the beamlet framework to one that includes sets of arcs quantized in height. The proposed chordlet dictionary has elements that are constrained at their endpoints and limited in curvature by system rate or distortion constraints. This provides a more visually natural representation of curves in images, and it is shown that for a class of images the chordlet representation is more efficient than the beamlet representation under tight distortion constraints. A data structure, the fat quadtree, and an algorithm for determining an optimal chordlet representation of an image are proposed. Codecs have been implemented to illustrate applications to both lossy and lossless low-bitrate compression of binary edge images, and better rate and rate-distortion performance over the JBIG2 standard and a beamlet-based compression method is demonstrated.

16.
This paper addresses the use of independent component analysis (ICA) for image compression. Our goal is to study the adequacy (for lossy transform compression) of bases learned from data using ICA. Since these bases are, in general, non-orthogonal, two methods are considered for obtaining image representations: matching pursuit type algorithms, and orthogonalization of the ICA bases followed by standard orthogonal projection. Several coder architectures are evaluated and compared, using both the usual SNR and a perceptual quality measure called the picture quality scale. We consider four classes of images (natural, faces, fingerprints, and synthetic) to study the generalization and adaptation abilities of the data-dependent ICA bases. In this study, we have observed that bases learned from natural images generalize well to other classes of images, while bases learned from the other, specific classes show good specialization. For example, for fingerprint images, our coders perform close to the special-purpose WSQ coder developed by the FBI. For some classes, the visual quality of the images obtained with our coders is similar to that obtained with JPEG2000, which is currently the state-of-the-art coder and much more sophisticated than a simple transform coder. We conclude that ICA provides an excellent tool for learning a coder for a specific image class, which can even be done using a single image from that class. This is an alternative to hand-tailoring a coder for a given class (as was done, for example, in the WSQ coder for fingerprint images). Another conclusion is that a coder learned from natural images acts like a universal coder, that is, it generalizes very well for a wide range of image classes.
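The orthogonalize-then-project coding path can be sketched as follows (QR orthogonalization and a keep-the-largest-coefficients lossy step are illustrative simplifications of the coders studied):

```python
import numpy as np

def orthogonalize(basis):
    """Orthonormalize a (generally non-orthogonal) learned basis via QR."""
    q, _ = np.linalg.qr(basis)
    return q

def transform_code(q, x, keep):
    """Orthogonal projection coding: project x onto the orthonormal
    basis, zero all but the `keep` largest coefficients (a crude
    lossy step), and reconstruct."""
    c = q.T @ x
    small = np.argsort(np.abs(c))[:len(c) - keep]
    c[small] = 0.0
    return q @ c
```

In a real coder the retained coefficients would also be quantized and entropy-coded; here the sketch only shows why orthogonalization makes the projection step a simple matrix product.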

17.
Independent component analysis (ICA) holds great promise for magnetic resonance (MR) image analysis. Unfortunately, two key issues have been overlooked. One is the lack of sufficient MR images with which to unmix the signal sources of interest. The other is ICA's use of random initial projection vectors, which causes inconsistent results. To address the first issue, this paper introduces a band-expansion process (BEP) that generates an additional set of images from the original MR images via nonlinear functions. These newly generated images are combined with the original MR images to provide sufficient inputs for ICA analysis. To resolve the second issue, a prioritized ICA (PICA) is designed to rank the ICA-generated independent components (ICs) so that MR brain tissue substances can be unmixed and separated by different ICs in a prioritized order. Finally, BEP and PICA are combined into a new ICA-based approach, referred to as PICA-BEP, for MR image analysis.
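The band-expansion idea, generating extra images through nonlinear functions of the original bands, might be sketched like this (the particular nonlinearities are illustrative, not the paper's exact set):

```python
import numpy as np

def band_expansion(bands):
    """Generate new 'bands' from the originals with simple nonlinear
    functions (squares, cross-products, square roots) so that ICA has
    more mixtures than sources to work with."""
    bands = [np.asarray(b, dtype=float) for b in bands]
    new = [b ** 2 for b in bands]                      # squared bands
    new += [bands[i] * bands[j]                        # cross-product bands
            for i in range(len(bands)) for j in range(i + 1, len(bands))]
    new += [np.sqrt(np.abs(b)) for b in bands]         # stretched bands
    return bands + new
```

With three original MR bands this yields twelve images in total, which is the kind of over-determined mixture ICA needs.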

18.
Low-rate and flexible image coding with redundant representations   Total citations: 7 (self: 0, others: 7)
New breakthroughs in image coding may lie in signal decomposition through nonseparable basis functions that can efficiently capture the edge characteristics present in natural images. The work proposed in this paper provides an adaptive way of representing images as a sum of two-dimensional features. It presents a low bit-rate image coding method based on a matching pursuit (MP) expansion over a dictionary built from anisotropic refinement and rotation of contour-like atoms. At low bit rates, this method is shown to provide results comparable to the state of the art in image compression, represented here by JPEG2000 and SPIHT, generally with better visual quality in the MP scheme. The coding artifacts are less annoying than the ringing introduced by wavelets at very low bit rates, due to the smoothing performed by the basis functions used in the MP algorithm. In addition to good compression performance at low bit rates, the new coder produces highly flexible streams: they can easily be decoded at any spatial resolution, different from the original image, and the bitstream can be truncated at any point to match diverse bandwidth requirements. This spatial adaptivity is shown to be more flexible and less complex than the transcoding operations generally applied to state-of-the-art codec bitstreams. Due both to its ability to capture the most important parts of multidimensional signals and to its flexible stream structure, the image coder proposed in this paper represents an interesting solution for low- to medium-rate image coding in visual communication applications.
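The matching pursuit expansion at the heart of the coder can be sketched over a generic unit-norm dictionary (the anisotropic contour-atom dictionary itself is omitted):

```python
import numpy as np

def matching_pursuit(x, dictionary, n_iter=10, tol=1e-10):
    """Greedy MP: repeatedly pick the unit-norm atom most correlated
    with the residual and subtract its contribution."""
    residual = x.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        corr = dictionary.T @ residual
        k = int(np.argmax(np.abs(corr)))
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]
        if np.linalg.norm(residual) < tol:
            break
    return coeffs, residual
```

The coder's embedded stream follows directly from this greediness: atoms are emitted in decreasing order of energy, so truncating the stream at any point yields the best k-term approximation the greedy pass found.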

19.
Research on line-feature extraction from SAR images   Total citations: 7 (self: 2, others: 5)
Line-feature extraction algorithms for radar images generally proceed in three steps: image preprocessing; edge-point extraction with a particular edge detection operator; and the formation of meaningful line features, including linking broken long lines. Common extraction algorithms were developed for optical images and therefore assume additive Gaussian white noise when locating edge points. Such optical edge-point extraction methods fail on synthetic aperture radar (SAR) images, because SAR speckle noise follows a K distribution. The methods for linking edge points into meaningful line features, however, can still be used. Accordingly, edge points are detected with a two-step detection operator, and straight-line features are then formed with an orientation-grouping method derived from the idea of phase grouping. Extensive experiments verify that this method is practical for SAR images.
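The need for a speckle-aware edge test can be illustrated with the classical ratio-of-averages detector, a common choice for multiplicative noise (the paper's two-step operator may differ):

```python
import numpy as np

def ratio_edge_strength(window, split):
    """Ratio-of-averages edge detector: r = min(m1/m2, m2/m1) for the
    two half-windows. r stays near 1 on homogeneous speckle and drops
    toward 0 at an edge; unlike a gradient detector, it is invariant
    to the multiplicative speckle level."""
    m1 = window[:, :split].mean()
    m2 = window[:, split:].mean()
    return min(m1 / m2, m2 / m1)
```

A gradient detector would scale with scene brightness, so its false-alarm rate varies across a SAR image; the ratio test gives a constant false-alarm rate under multiplicative speckle, which is why it replaces the Gaussian-noise assumption of optical detectors.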


Copyright © Beijing Qinyun Technology Development Co., Ltd.    京ICP备09084417号-23
