Similar Documents
20 similar documents found (search time: 15 ms)
1.
An unsupervised competitive neural network for efficient clustering of Gaussian probability density function (GPDF) data of continuous density hidden Markov models (CDHMMs) is proposed in this paper. The proposed unsupervised competitive neural network, called the divergence-based centroid neural network (DCNN), employs the divergence measure as its distance measure and utilizes the statistical characteristics of observation densities in the HMM for speech recognition problems. While the conventional clustering algorithms used for the vector quantization (VQ) codebook design utilize only the mean values of the observation densities in the HMM, the proposed DCNN utilizes both the mean and the covariance values. When compared with other conventional unsupervised neural networks, the DCNN successfully allocates more code vectors to the regions where GPDF data are densely distributed while it allocates fewer code vectors to the regions where GPDF data are sparsely distributed. When applied to Korean monophone recognition problems as a tool to reduce the size of the codebook, the DCNN reduced the number of GPDFs used for code vectors by 65.3% while preserving recognition accuracy. Experimental results with a divergence-based k-means algorithm and a divergence-based self-organizing map algorithm are also presented in this paper for a performance comparison.
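As an illustration of the divergence measure at work (a minimal sketch, not the authors' implementation): for diagonal-covariance Gaussians the symmetric Kullback-Leibler divergence has a closed form in the means and variances, and a GPDF can be assigned to the divergence-nearest centroid so that both mean and covariance influence the clustering.

```python
import numpy as np

def sym_kl_divergence(mu_p, var_p, mu_q, var_q):
    """Symmetric KL divergence between two diagonal-covariance Gaussians.

    This is the kind of divergence measure a DCNN-style clusterer can use
    in place of the Euclidean distance, so that both means and covariances
    of the GPDFs influence the clustering.
    """
    d2 = (mu_p - mu_q) ** 2
    return 0.5 * np.sum((var_p + d2) / var_q + (var_q + d2) / var_p - 2.0)

def nearest_centroid(gpdf, centroids):
    """Assign a GPDF, given as (mu, var), to the divergence-nearest centroid."""
    return min(range(len(centroids)),
               key=lambda k: sym_kl_divergence(gpdf[0], gpdf[1],
                                               centroids[k][0], centroids[k][1]))
```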

2.
Vector quantization (VQ) for image compression requires considerable computation time to find the closest codevector in the encoding process. In this paper, a fast search algorithm is proposed for projection pyramid vector quantization using a lighter modified distortion with the Hadamard transform of the vector. The algorithm uses projection pyramids of the vectors and codevectors after applying the Hadamard transform, together with an elimination criterion based on deviation characteristic values in the Hadamard transform domain, to eliminate unlikely codevectors. Experimental results are presented on image block data. These results confirm the effectiveness of the proposed algorithm, which yields the same image quality as the full search algorithm.
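A hedged sketch of the elimination idea (generic, not the paper's exact projection-pyramid criterion): because the normalized Hadamard transform is orthogonal, Euclidean distortion is preserved in the transform domain, and the first coefficient alone already gives a cheap lower bound for rejecting codevectors before the full distortion is computed.

```python
import numpy as np
from scipy.linalg import hadamard

def ht(vec):
    """Apply a normalized Hadamard transform to a length-2^k vector."""
    n = len(vec)
    return hadamard(n) @ vec / np.sqrt(n)

def fast_vq_search(x, codebook_ht):
    """Nearest-codevector search with a Hadamard-domain rejection test.

    codebook_ht holds the pre-transformed codevectors. Since the normalized
    Hadamard transform is orthogonal, (c0 - x0)^2 on the first coefficient is
    a lower bound on the full squared distance, so codevectors failing the
    bound are rejected without computing the full distortion.
    """
    x_ht = ht(x)
    best_k, best_d = -1, np.inf
    for k, c in enumerate(codebook_ht):
        lb = (c[0] - x_ht[0]) ** 2        # cheap lower bound
        if lb >= best_d:
            continue                       # eliminated without full search
        d = np.sum((c - x_ht) ** 2)
        if d < best_d:
            best_k, best_d = k, d
    return best_k, best_d
```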

3.
In this paper, a new hierarchical color quantization method based on self-organizing maps that provides different levels of quantization is presented. Color quantization (CQ) is a typical image processing task, which consists of selecting a small number of code vectors from a set of available colors to represent a high color resolution image with minimum perceptual distortion. Several techniques have been proposed for CQ based on splitting algorithms or cluster analysis. Artificial neural networks and, more concretely, self-organizing models have usually been utilized for this purpose. The self-organizing map (SOM) is one of the most useful algorithms for color image quantization. However, it has some difficulties related to its fixed network architecture and its lack of representation of hierarchical relationships among data. The growing hierarchical SOM (GHSOM) addresses these problems of the SOM model: its architecture is established during the unsupervised learning process according to the input data. Furthermore, the proposed color quantizer allows the evaluation of different color quantization rates under different codebook sizes, according to the number of levels of the generated neural hierarchy. The experimental results show the good performance of this approach compared to other quantizers based on self-organization.
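For readers who want the flavor of SOM-based color quantization, here is a minimal 1-D SOM palette learner (a plain SOM, not the hierarchical GHSOM of the paper; all parameters are illustrative):

```python
import numpy as np

def som_color_quantize(pixels, n_codes=16, iters=5000, seed=0):
    """Minimal 1-D SOM color quantizer.

    pixels: (N, 3) array of RGB values in [0, 1].
    Returns the learned palette (codebook) of n_codes colors.
    """
    rng = np.random.default_rng(seed)
    codes = pixels[rng.integers(0, len(pixels), n_codes)].astype(float)
    for t in range(iters):
        x = pixels[rng.integers(len(pixels))]
        lr = 0.5 * (1 - t / iters)                        # decaying learning rate
        radius = max(1.0, n_codes / 2 * (1 - t / iters))  # shrinking neighborhood
        w = np.argmin(np.sum((codes - x) ** 2, axis=1))   # best-matching unit
        dist = np.abs(np.arange(n_codes) - w)             # 1-D grid distance
        h = np.exp(-(dist ** 2) / (2 * radius ** 2))      # neighborhood function
        codes += lr * h[:, None] * (x - codes)
    return codes
```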

4.
Image coding algorithms such as Vector Quantisation (VQ), JPEG and MPEG have been widely used for encoding images and video. These compression systems utilise block-based coding techniques to achieve a higher compression ratio. However, a cell loss or a random bit error during network transmission will spread over the whole block and may corrupt several blocks. Therefore, an efficient Error Concealment (EC) scheme is essential for diminishing the impact of damaged blocks in a compressed image. In this paper, a novel adaptive EC algorithm is proposed to conceal the errors of block-based image coding systems by using neural network techniques in the spatial domain. In the proposed algorithm, only intra-frame information is used for reconstructing an image with damaged blocks: the information of the pixels surrounding a damaged block is used to recover the errors with the neural network models. Computer simulation results show that both the visual quality and the PSNR of a reconstructed image are significantly improved by the proposed EC algorithm.
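A simple stand-in for the spatial concealment step (border interpolation rather than the paper's neural models; the block coordinates and size are assumptions):

```python
import numpy as np

def conceal_block(img, r, c, bs=8):
    """Bilinear-style concealment of a damaged bs x bs block at (r, c).

    img: 2-D float image array; the block's four borders must lie inside
    the image. Each missing pixel becomes a distance-weighted mix of the
    border pixels above, below, left and right of it.
    """
    top    = img[r - 1,    c:c + bs].astype(float)
    bottom = img[r + bs,   c:c + bs].astype(float)
    left   = img[r:r + bs, c - 1].astype(float)
    right  = img[r:r + bs, c + bs].astype(float)
    for i in range(bs):
        for j in range(bs):
            wv = (i + 1) / (bs + 1)                 # vertical weight
            wh = (j + 1) / (bs + 1)                 # horizontal weight
            v = (1 - wv) * top[j] + wv * bottom[j]
            h = (1 - wh) * left[i] + wh * right[i]
            img[r + i, c + j] = (v + h) / 2
    return img
```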

5.
Image segmentation based on SOM neural network and K-means algorithm   (Total citations: 2; self-citations: 0; citations by others: 2)
An image segmentation algorithm based on a SOM neural network and K-means clustering is proposed. The SOM network maps high-dimensional data onto a low-dimensional regular grid and can therefore be used effectively for mining large data sets, while K-means is a dynamic clustering algorithm suited to small and medium-sized data sets. The proposed algorithm first uses the SOM network to map pixels with similar features onto a 2-D neural grid, and then clusters the neurons with the K-means algorithm according to their mutual similarity. The algorithm is applied to color image segmentation; segmentation results for different values of K after the initial SOM clustering are given and compared with segmentation by plain K-means.
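A minimal sketch of the two-stage pipeline (a plain 2-D SOM followed by scikit-learn's KMeans; grid size, iteration count and K are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

def som_then_kmeans(pixels, grid=(8, 8), iters=10000, k=5, seed=0):
    """Two-stage clustering: a small 2-D SOM compresses the pixel cloud onto
    a grid of neurons, then K-means groups the neurons; each pixel inherits
    the cluster label of its best-matching neuron."""
    rng = np.random.default_rng(seed)
    gy, gx = grid
    coords = np.array([(i, j) for i in range(gy) for j in range(gx)], float)
    w = pixels[rng.integers(0, len(pixels), gy * gx)].astype(float)
    for t in range(iters):
        x = pixels[rng.integers(len(pixels))]
        lr = 0.5 * (1 - t / iters)
        rad = max(1.0, max(gy, gx) / 2 * (1 - t / iters))
        bmu = np.argmin(np.sum((w - x) ** 2, axis=1))      # winner neuron
        d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)   # grid distances
        h = np.exp(-d2 / (2 * rad ** 2))                   # neighborhood
        w += lr * h[:, None] * (x - w)
    neuron_label = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(w)
    bmu_of_pixel = np.argmin(
        ((pixels[:, None, :] - w[None, :, :]) ** 2).sum(-1), axis=1)
    return neuron_label[bmu_of_pixel]
```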

6.
In the construction of a smart ocean, marine big data mining has a significant impact on the growing maritime industry in the Beibu Gulf. Clustering is a key technology of marine big data mining, but conventional clustering algorithms cannot cluster marine data efficiently. According to the characteristics of marine big data, a marine big data clustering scheme based on the self-organizing map (SOM) algorithm is proposed. First, the working principle of the SOM algorithm is analyzed, focusing on its two-dimensional network model, similarity model and competitive learning model. Second, building on this principle, the marine big data clustering process and the implementation of the SOM-based algorithm are developed. Finally, experiments show that all vectors in the marine big data clustering are stable and that the neurons in the output layer of the clustering result are clearly consistent with the data itself, which demonstrates the effectiveness of the SOM algorithm for marine big data clustering.

7.
Based on the grey model (GM), a simple and fast methodology is developed for lossy image compression. First, the image is decomposed into image windows of different sizes according to a grey-difference-level criterion; then the GM(1,1) model of grey system theory is used as a fitter to model the pixels of each window. The proposed algorithm contrasts with conventional compression techniques such as the discrete cosine transform or vector quantization (VQ) in its dynamic modelling sequence and flexible block size. In particular, compression and decompression do not require an extra decoder and use only the model parameters to reconstruct the image by reversing the operation of GM(1,1). Experiments with several 512 x 512 images indicate that not only the average bit number per pixel and the peak signal-to-noise ratio but also the coding and decoding times of this GM(1,1)-based lossy image compression algorithm are better than those of block truncation coding with VQ.
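The GM(1,1) fit itself is standard grey-system machinery. The following sketch fits the model to one window's pixel sequence and reconstructs it from just the two parameters (a, b), mirroring the decoder-free reconstruction described above:

```python
import numpy as np

def gm11_fit(x0):
    """Fit a GM(1,1) grey model to a positive 1-D sequence x0.

    Returns the parameters (a, b) and the reconstructed sequence. In a
    compression scheme only (a, b) per window would be stored; reversing
    the model (inverse AGO) reconstructs the window pixels.
    """
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                          # accumulated series (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])               # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # least-squares fit
    k = np.arange(len(x0))
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # model response
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])  # inverse AGO
    return a, b, x0_hat
```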

8.
In this paper, an invisible hybrid color image hiding scheme based on a spread vector quantization (VQ) neural network with penalized fuzzy c-means (PFCM) clustering technology, named SPFNN, is proposed. The goal is to offer safe exchange of a color stego-image on the Internet. In the proposed scheme, the secret color image is first compressed by a spread unsupervised neural network with PFCM based on interpolative VQ (IVQ); then the block cipher Data Encryption Standard (DES) and the Rivest-Shamir-Adleman (RSA) algorithms are employed to provide a hybrid cryptosystem for secure and convenient communication on the Internet. In the SPFNN, the penalized fuzzy clustering technology is embedded in a two-dimensional Hopfield neural network in order to generate optimal solutions for IVQ. The color IVQ indices and the sorted codebooks of the secret color image are then encrypted and embedded in the frequency domain of the cover color image by the Hadamard transform (HT). The proposed method has two benefits compared with other data hiding techniques: one is the high security and convenience offered by the hybrid DES and RSA cryptosystems for exchanging color image data on the Internet; the other is that excellent results can be obtained with the proposed color image compression scheme, SPFNN.

9.
李庆忠, 蒋萍, 褚东升. 《计算机工程》, 2007, 33(20): 219-221
A global vector quantization coding algorithm with adaptive vector classification in the DCT domain is proposed. To reduce the dimensionality of the code vectors and the computational complexity, and to raise the search speed and compression ratio, the transformed DCT vectors are adaptively classified into smooth, edge and texture classes; transform vectors of different lengths are constructed according to the class, and an improved global vector quantization algorithm is used to design a separate codebook for each class. To improve the matching rate of motion compensation between vectors of adjacent frames under illumination changes, the DC coefficient is encoded separately in the vector construction. Experimental results show that the algorithm achieves good video compression performance in terms of signal-to-noise ratio and compression ratio, and is well suited to intelligent video surveillance, underwater video and other scenarios where illumination varies considerably over time.
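A generic sketch of DCT-domain block classification (the thresholds and the low-frequency/total AC-energy test are illustrative assumptions, not the paper's rules):

```python
import numpy as np
from scipy.fft import dctn

def classify_block(block, t_smooth=50.0, t_edge_ratio=0.7):
    """Classify an 8x8 pixel block as 'smooth', 'edge' or 'texture' from its
    2-D DCT coefficients. Smooth blocks have little AC energy; among the
    rest, blocks whose AC energy concentrates in the low frequencies are
    treated as edges, the others as texture."""
    c = dctn(block.astype(float), norm='ortho')
    ac_energy = np.sum(c ** 2) - c[0, 0] ** 2
    if ac_energy < t_smooth:
        return 'smooth'
    low = np.sum(c[:4, :4] ** 2) - c[0, 0] ** 2   # low-frequency AC energy
    return 'edge' if low / ac_energy > t_edge_ratio else 'texture'
```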

10.
In this study, an automatic image segmentation method is proposed for tumor segmentation from mammogram images by means of an improved watershed transform using prior information. The segmented results of the individual regions are then subjected to lossy and lossless compression, according to the importance of the region data, for storage efficiency. The method consists of two procedures: region segmentation and region compression. In the first procedure, the Canny edge detector is used to detect the edge between the background and the breast. An improved watershed transform based on intrinsic prior information is then adopted to extract the tumor boundary. Finally, the mammograms are segmented into tumor, breast without tumor, and background. In the second procedure, vector quantization (VQ) with a competitive Hopfield neural network (CHNN) is applied to the three regions with different compression rates according to the importance of the region data, so as to preserve important tumor features while reducing the size of the mammograms for storage. Experimental results show that the proposed method gives promising results in compression applications.

11.
郑美珠, 赵景秀. 《计算机应用》, 2011, 31(9): 2485-2488
To address the difficulty of distinguishing color similarity effectively in RGB space, the HSI color space is chosen for image processing and analysis. First, color-difference components such as saturation, hue and intensity are computed; by introducing fuzzy entropy, a group of fuzzy-entropy-based information measure components is constructed to describe the edge features of the image quantitatively. These components are extracted from training samples and assembled into feature vectors used to train a BP neural network, and the trained BP network is then applied directly to edge detection. The structure and training of the BP network are relatively simple, and no threshold needs to be set for detecting edges. Experiments show that the method preserves detail well and achieves satisfactory edge detection results.
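One plausible realization of a fuzzy-entropy measure over a local window (the classic De Luca-Termini form; the min-max membership function used here is an assumption):

```python
import numpy as np

def fuzzy_entropy(window):
    """De Luca-Termini fuzzy entropy of a local image window.

    Pixel memberships are obtained by min-max normalization (an assumption);
    the entropy is averaged over the window and normalized to [0, 1]. High
    values indicate ambiguous, edge-like regions."""
    w = window.astype(float)
    mu = (w - w.min()) / (np.ptp(w) + 1e-12)     # membership in [0, 1]
    mu = np.clip(mu, 1e-12, 1 - 1e-12)           # avoid log(0)
    s = -(mu * np.log(mu) + (1 - mu) * np.log(1 - mu))
    return s.mean() / np.log(2)                  # normalize max to 1
```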

12.
A vector quantizer takes care of special image features such as edges, and belongs to the class of quantizers known as second-generation coders. This paper proposes a novel vector quantization method using the wavelet transform and an enhanced SOM algorithm for medical image compression. We propose the enhanced self-organizing algorithm to resolve the defects of the conventional SOM algorithm. The enhanced SOM, first, reflects the error between the winner node and the input vector in the weight adaptation by using the frequency with which the winner node has been selected; second, it adjusts the weight in proportion to both the present weight change and the previous one. To reduce the blocking effect and the computational requirement, we construct training image vectors involving image features by using the wavelet transform and apply the enhanced SOM algorithm to them to generate a well-defined codebook. Our experimental results show that the proposed method improves both the compression ratio and the quality of the decompressed images.
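A sketch of the two enhancements as described (a frequency-scaled learning rate plus a momentum-like reuse of the previous weight change; the constants and the exact combination rule are assumptions):

```python
import numpy as np

def enhanced_som_update(weights, counts, prev_delta, x, lr0=0.5, beta=0.5):
    """One update step of an 'enhanced SOM' in the spirit described above.

    weights: (K, d) codebook; counts: (K,) win counters, initially zero;
    prev_delta: (K, d) previous weight changes, initially zero.
    The winner's learning rate shrinks with how often it has won, and a
    momentum term reuses its previous weight change.
    """
    w = np.argmin(np.sum((weights - x) ** 2, axis=1))  # winner node
    counts[w] += 1
    lr = lr0 / counts[w]                               # frequency-scaled rate
    delta = lr * (x - weights[w]) + beta * prev_delta[w]
    weights[w] += delta
    prev_delta[w] = delta
    return w
```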

13.
Reducing the redundancy of dominant color features in an image while preserving the diversity and quality of the extracted colors is important in many applications such as image analysis and compression. This paper presents an improved self-organizing map (SOM) algorithm, MFD-SOM, and its application to color feature extraction from images. Unlike the winner-take-all competitive principle of conventional SOM algorithms, MFD-SOM prevents, to a certain degree, features of non-principal components in the training data from being weakened or lost during learning, which is conducive to preserving the diversity of extracted features. Besides, MFD-SOM adopts a new way of updating the weight vectors of neurons, which helps reduce the redundancy among features extracted from the principal components. In addition, a linear neighborhood function is applied in the proposed algorithm to improve its performance on color feature extraction. Experimental results of feature extraction on artificial datasets and benchmark image datasets demonstrate the characteristics of the MFD-SOM algorithm.

14.
The codebook of a conventional VQ cannot be used universally and needs real-time onboard updating, which is hard to implement in a space-borne SAR system. To solve this problem, this paper first analyses the characteristics of space-borne SAR raw data, then takes the distortion function over the multidimensional space as the criterion, and finally proposes an adaptive codebook design algorithm based on the joint probability density function of the input data. In addition, the feasibility of cascading the new algorithm with entropy coding, and its robustness when errors occur during transmission, are analysed on the basis of the encoding and decoding scheme. Experimental results on real data show that the codebook derived from the new algorithm is universal and can be designed off-line, which makes VQ a practical algorithm for space-borne SAR raw-data compression.
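The generic machinery behind off-line codebook design from training statistics is the generalized Lloyd (LBG) iteration; a minimal sketch (not the paper's joint-pdf construction):

```python
import numpy as np

def gla_codebook(train, n_codes=256, iters=20, seed=0):
    """Generalized Lloyd algorithm (LBG) for off-line VQ codebook design.

    train: (N, d) training vectors. Alternates nearest-codevector
    assignment with centroid updates; empty cells keep their codevector.
    """
    rng = np.random.default_rng(seed)
    cb = train[rng.choice(len(train), n_codes, replace=False)].copy()
    for _ in range(iters):
        # nearest-codevector assignment
        d = ((train[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(1)
        # centroid update
        for k in range(n_codes):
            members = train[idx == k]
            if len(members):
                cb[k] = members.mean(0)
    return cb
```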

15.
Vector quantization is an effective data compression technique; because of its simple algorithm and high compression ratio it is widely used in data compression coding. Based on a study of the gray-level features of image blocks, a composite mean/vector-quantization coding algorithm is proposed according to whether the image is smooth: smooth image blocks are coded by their mean, and non-smooth blocks by vector quantization. This saves the storage of smooth codewords, improves codebook storage efficiency, and greatly speeds up encoding. In addition, the codeword rotation and inversion (2R) compression scheme reduces the codebook storage to 1/8, and the search is optimized with the extended block nearest-neighbour search (EBNNS) algorithm. With image quality preserved, the overall encoding speed of the system is on average about 7.7 times that of full-search vector quantization.
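A minimal sketch of the composite coder's dispatch rule (using block variance as the smoothness test is an assumption; the 2R codebook compression and the EBNNS search are not shown):

```python
import numpy as np

def composite_encode(blocks, codebook, var_thresh=25.0):
    """Composite mean/VQ coding: blocks whose variance falls below a
    threshold are treated as smooth and coded by their mean; the rest are
    coded by a full-search VQ index into `codebook` of flattened blocks."""
    out = []
    for b in blocks:
        if b.var() < var_thresh:                 # smooth block -> mean code
            out.append(('mean', float(b.mean())))
        else:                                    # non-smooth block -> VQ index
            v = b.reshape(-1).astype(float)
            d = ((codebook - v) ** 2).sum(1)
            out.append(('vq', int(d.argmin())))
    return out
```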

16.
Objective: Multiscale block compressed sensing reconstruction in the wavelet domain ignores the role of high-frequency signals in the reconstruction process and loses a large amount of edge and detail information. To address this, an adaptive multiscale block compressed sensing algorithm is proposed that makes reasonable use of the low-frequency information while fully exploiting the high-frequency information of the image, guaranteeing improved reconstruction quality as the detail complexity of the image increases. Method: A 3-level wavelet transform is first performed, producing one low-frequency subband and nine high-frequency subbands; after inverse wavelet transforms, each is divided into non-overlapping blocks of equal size. The low-frequency part is processed with 2-D adaptive edge-weighted filtering over neighboring blocks, the high-frequency part is sampled with texture-adaptive block sampling, and the image is finally reconstructed with the smoothed projected Landweber (SPL) algorithm. Results: Compared with existing block compressed sensing algorithms and with edge/direction-based and texture/direction-based variants, the proposed algorithm improves performance at all sampling rates; the high-frequency signals representing detail are fully reconstructed, the reconstructed images have higher resolution, and images rich in detail show a higher peak signal-to-noise ratio after reconstruction. The 2-D adaptive edge-weighted filtering over neighboring blocks effectively removes blocking artifacts in the reconstructed image, and reconstruction time is reduced by 0.3 s on average. Conclusion: Treating the high-frequency components after the 3-level wavelet transform as the texture part and reconstructing the image contours and edges with adaptive multiscale blocks, while treating the low-frequency component directly as the flat part and reconstructing image detail with adaptive edge-weighted filtering of neighboring blocks, not only makes full use of the high- and low-frequency information of the image but also removes the flat-block detection step, effectively shortening reconstruction time. Experiments verify good reconstruction quality, with blocking artifacts clearly removed for complex images and sharp edge and texture detail; the algorithm is therefore mainly suitable for face images, architectural images, remote sensing images and other images with complex texture detail.
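A small sketch of texture-adaptive block measurement (variance as the texture proxy and the rate range are illustrative assumptions; the SPL reconstruction is not shown):

```python
import numpy as np

def adaptive_block_measure(blocks, base_rate=0.1, max_rate=0.5, seed=0):
    """Texture-adaptive block CS measurement: each block's sampling rate
    grows with its variance (a crude texture proxy), and measurements are
    taken with a random Gaussian matrix. Returns (phi, y) per block."""
    rng = np.random.default_rng(seed)
    n = blocks[0].size
    v = np.array([b.var() for b in blocks])
    rates = base_rate + (max_rate - base_rate) * v / (v.max() + 1e-12)
    out = []
    for b, r in zip(blocks, rates):
        m = max(1, int(round(r * n)))                    # measurements for block
        phi = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement matrix
        out.append((phi, phi @ b.reshape(-1)))
    return out
```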

17.
The codebook of a conventional vector quantizer generalizes poorly and must be updated online, which is difficult to realize in a space-borne SAR system. Targeting the statistical characteristics of space-borne SAR raw data, and taking the distortion function over the multidimensional space as the cost function, a generally applicable vector quantization codebook is designed from the joint probability density function of the input data, and the encoding and decoding schemes for the raw data are analyzed. On this basis, the feasibility of cascading the vector quantization with entropy coding, and the robustness of the algorithm when codeword indices suffer channel transmission errors, are studied in depth. Processing results on real data show that the algorithm is generally applicable; the universality of the codebook allows it to be designed off-line, providing theoretical guidance for the practical use of vector quantization on space-borne SAR.

18.
A new digital watermarking algorithm based on vector quantization is proposed. Unlike traditional watermarking algorithms based on the DCT (Discrete Cosine Transform), DFT (Discrete Fourier Transform) or DWT (Discrete Wavelet Transform), the algorithm exploits codebook partitioning and the properties of vector quantization indices to embed watermarks at different stages of the vector quantization process, protecting the copyright of the original image; watermark detection does not require the original image. Experimental results show that the watermark is well imperceptible and is fairly robust to JPEG compression, vector quantization compression, and spatial operations such as rotation and cropping.
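A sketch of index-domain embedding via codebook partitioning (the even/odd split is an illustrative partition, not necessarily the paper's):

```python
import numpy as np

def embed_bit(x, codebook, bit):
    """Embed one watermark bit by codebook partitioning: the codebook is
    split into two halves (even/odd indices here), and the block x is
    quantized using only the half that encodes the bit. The bit is later
    recovered from the parity of the index, without the original image."""
    d = ((codebook - x.reshape(-1)) ** 2).sum(1)
    allowed = np.arange(len(codebook)) % 2 == bit   # sub-codebook for this bit
    d[~allowed] = np.inf
    return int(d.argmin())                          # transmitted index

def extract_bit(index):
    """Blind extraction: the watermark bit is the index parity."""
    return index % 2
```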

19.
Applied Soft Computing, 2008, 8(1): 634-645
We propose a vector quantization (VQ) with variable block size using local fractal dimensions (LFDs) of an image. VQ with variable block size has so far been implemented using a quad-tree (QT) decomposition algorithm, which partitions the image according to the homogeneity of local regions. However, the complexity of local regions is arguably more essential than their homogeneity, because viewers pay closer attention to complex regions than to homogeneous ones, so complex regions are essential for image compression. Since the complexity of a region can be quantified by its LFD value, we implemented variable block size using LFD values and constructed a codebook (CB) for VQ. To confirm the performance of the proposed method, only a discriminant analysis and the FGLA were used to construct the CB; here, FGLA is the algorithm that combines the generalized Lloyd algorithm (GLA) with the fuzzy k-means algorithm. Results of computational experiments showed that the method correctly encodes the regions to which we pay close attention, a promising result for obtaining a well-perceived compressed image. The performance of the proposed method is also superior to that of VQ by FGLA in terms of both compression rate and decoded image quality. Furthermore, 1.0 bpp and more than 30 dB in PSNR were achieved with a CB of only 252 code vectors.
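A differential box-counting sketch of the local fractal dimension (a standard estimator of local image complexity; the paper's exact LFD computation may differ):

```python
import numpy as np

def local_fractal_dimension(block, scales=(2, 4, 8)):
    """Differential box-counting LFD estimate for a square grayscale block.

    At each scale s the block is tiled into s x s cells; the number of gray-
    level boxes of height s*G/m needed to span each cell's min-max range is
    counted, and the LFD is the slope of log N(s) versus log(m/s)."""
    m = block.shape[0]
    g = 256.0                                   # gray-level range
    log_n, log_inv_r = [], []
    for s in scales:
        h = s * g / m                           # box height at this scale
        n = 0
        for i in range(0, m, s):
            for j in range(0, m, s):
                cell = block[i:i + s, j:j + s]
                hi = int(cell.max() // h)       # top box index
                lo = int(cell.min() // h)       # bottom box index
                n += hi - lo + 1
        log_n.append(np.log(n))
        log_inv_r.append(np.log(m / s))
    slope, _ = np.polyfit(log_inv_r, log_n, 1)  # LFD = fitted slope
    return slope
```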

20.
The core problem of text clustering is finding an optimized clustering algorithm for text vectors, a typical instance of high-dimensional data clustering. A two-stage text clustering algorithm, TCBSA, based on the self-organizing map (SOM) neural network and the artificial immune network aiNet, is proposed. The new algorithm first clusters with a SOM network, mapping the high-dimensional text data onto a 2-D plane, and then clusters the texts with aiNet. The method exploits the SOM network's strength in reducing the dimensionality of high-dimensional data and thereby overcomes the artificial immune network's weak clustering ability on high-dimensional data. Simulation results show that the text clustering algorithm is not only feasible but also has a degree of adaptivity and good clustering performance.
