Similar Literature
1.
Semi-supervised Laplacian Eigenmap Algorithm
To give manifold learning a semi-supervised character, using points on the manifold whose low-dimensional coordinates are already known to infer the low-dimensional coordinates of the remaining points, and to broaden the applicability of manifold learning, the Laplacian Eigenmap (LE) algorithm is combined with semi-supervised machine learning into a semi-supervised Laplacian Eigenmap algorithm (SSLE). This semi-supervised manifold learning algorithm performs well on problems such as classification and recognition. Simulations and real-world examples both demonstrate the effectiveness of SSLE.
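For reference, the sketch below implements the unsupervised Laplacian Eigenmaps building block in Python (NumPy/SciPy/scikit-learn): build a neighborhood graph, form the graph Laplacian, and solve the generalized eigenproblem. The semi-supervised extension described above (pinning points with known low-dimensional coordinates) is not reproduced; the dataset, neighborhood size, and function names are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.sparse.csgraph import laplacian
from sklearn.datasets import make_swiss_roll
from sklearn.neighbors import kneighbors_graph

def laplacian_eigenmap(X, n_components=2, n_neighbors=10):
    """Unsupervised Laplacian Eigenmaps: neighborhood graph -> graph Laplacian ->
    generalized eigenproblem L v = lambda D v, keeping the smallest nontrivial eigenvectors."""
    W = kneighbors_graph(X, n_neighbors, mode='connectivity', include_self=False)
    W = 0.5 * (W + W.T)                                  # symmetrize the adjacency graph
    L, d = laplacian(W, normed=False, return_diag=True)
    vals, vecs = eigh(L.toarray(), np.diag(d))           # generalized eigenproblem
    return vecs[:, 1:n_components + 1]                   # skip the constant eigenvector

X, _ = make_swiss_roll(n_samples=500, random_state=0)
embedding = laplacian_eigenmap(X)                        # 2-D coordinates on the manifold
```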

2.
The image features learned by deep convolutional neural networks have a clear hierarchical structure: as depth increases, the learned features become more abstract and more class-discriminative. Exploiting this property, this paper proposes a deep Hamming-embedding hash coding scheme for image retrieval. A hidden layer is inserted at the end of the deep convolutional network, and an image's hash code is derived from the activations of its units. A Hamming-embedding loss, based on the properties of the hash codes themselves, is introduced to better preserve the similarity among the original data. Experiments on the CIFAR-10 and NUS-WIDE benchmark image datasets show that the method improves image retrieval performance, especially with short codes.
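A minimal sketch of the retrieval side of such a scheme, assuming the hash layer uses sigmoid-like units whose activations are thresholded at 0.5 to form binary codes; the random activations, code length, and threshold are stand-ins, not the paper's network or its Hamming-embedding loss.

```python
import numpy as np

def hamming_retrieval(query_act, db_act, top_k=5):
    """Binarize hash-layer activations (threshold 0.5) and rank by Hamming distance."""
    q = (query_act > 0.5).astype(np.uint8)
    db = (db_act > 0.5).astype(np.uint8)
    dist = (q[None, :] != db).sum(axis=1)       # Hamming distance to each database code
    return np.argsort(dist)[:top_k]

rng = np.random.default_rng(0)
db_act = rng.random((1000, 48))                 # activations of a 48-unit hash layer
query_act = rng.random(48)
print(hamming_retrieval(query_act, db_act))     # indices of the top-5 nearest codes
```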

3.
To address the low accuracy, the difficulty of extracting temporal features, and the separation of feature extraction from detection in traditional anomaly-detection methods for electricity-load data, an anomaly-detection method based on a deep convolution embedded LSTM auto-encoder (DCE-LAE) is proposed. A long short-term memory network is built into the auto-encoder architecture, combining the encoder's nonlinear feature extraction with the LSTM's memory of temporal features to improve detection accuracy, and deep convolutional layers are embedded in the architecture to enlarge the receptive field and extract more time-series features. The convolution loss and the reconstruction loss are combined into a jointly optimized objective, which prevents fine-tuning of the convolutional embedding from distorting the reconstruction space and further improves the reliability of the results. Comparative case studies against other methods verify that DCE-LAE outperforms them in both anomaly-detection accuracy and time-series reconstruction.
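A minimal PyTorch sketch of a convolution-embedded LSTM autoencoder scored by reconstruction error. The layer sizes, sequence length, and the use of a single reconstruction loss (the paper additionally combines a convolution loss) are assumptions for illustration, not the exact DCE-LAE architecture.

```python
import torch
import torch.nn as nn

class ConvLSTMAutoencoder(nn.Module):
    """Convolution-embedded LSTM autoencoder (illustrative layout)."""
    def __init__(self, seq_len=96, n_features=1, hidden=64, latent=16):
        super().__init__()
        self.seq_len = seq_len
        self.conv = nn.Sequential(              # deep convolutional embedding
            nn.Conv1d(n_features, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.enc_lstm = nn.LSTM(32, hidden, batch_first=True)
        self.to_latent = nn.Linear(hidden, latent)
        self.dec_lstm = nn.LSTM(latent, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                                   # x: (batch, seq_len, n_features)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)    # (batch, seq_len, 32)
        _, (hn, _) = self.enc_lstm(h)
        z = self.to_latent(hn[-1])                          # sequence summary
        z_seq = z.unsqueeze(1).repeat(1, self.seq_len, 1)   # repeat latent over time
        d, _ = self.dec_lstm(z_seq)
        return self.out(d)                                  # reconstructed load curve

model = ConvLSTMAutoencoder()
x = torch.randn(8, 96, 1)                       # 8 synthetic daily load curves
recon = model(x)
loss = nn.functional.mse_loss(recon, x)         # reconstruction loss (training objective here)
score = ((recon - x) ** 2).mean(dim=(1, 2))     # per-sample anomaly score
```

Samples whose score exceeds a chosen percentile of the training-set scores would be flagged as anomalous.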

4.
A semi-supervised discriminant analysis method for image sets based on the Grassmann manifold is proposed. Each subspace is represented as a point on the Grassmann manifold, given by a set of orthonormal basis vectors, and the similarity between subspaces is measured with a Grassmann kernel. Unlike other image-set discriminant analyses on the Grassmann manifold, the method introduces a graph-embedding framework: by preserving the local neighborhood structure of the data while maximizing the distance between classes, it obtains the optimal projection matrix, and image sets are classified in the projected space. Semi-supervised learning is used, with unlabeled samples assigned the class of their nearest neighbors. Experiments show that the method outperforms other image-set recognition algorithms.
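A commonly used Grassmann kernel, the projection kernel, has a simple closed form. The sketch below represents each image set by an orthonormal basis obtained from the SVD and measures subspace similarity as k(A, B) = ||A^T B||_F^2; the subspace dimension and data shapes are illustrative, and this is not necessarily the exact kernel used in the paper.

```python
import numpy as np

def orth_basis(X, d):
    """Orthonormal basis of the d-dimensional subspace spanned by an image set
    (columns of X are vectorized images)."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :d]

def projection_kernel(A, B):
    """Projection kernel on the Grassmann manifold: k(A, B) = ||A^T B||_F^2."""
    return np.linalg.norm(A.T @ B, 'fro') ** 2

rng = np.random.default_rng(0)
A = orth_basis(rng.standard_normal((400, 30)), d=5)   # image set 1 (30 images)
B = orth_basis(rng.standard_normal((400, 25)), d=5)   # image set 2 (25 images)
print(projection_kernel(A, B))
```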

5.
Convolutional neural networks (CNNs) have achieved good results in semi-supervised learning, where training uses labeled samples together with unlabeled samples that help regularize the model. To further strengthen the feature-learning ability of semi-supervised models and improve their image-classification performance, this paper proposes an end-to-end semi-supervised method that couples a deep semi-supervised CNN with dictionary learning, called Semi-supervised Learning based on Sparse Coding and Convolution (SSSConv); the framework aims to learn more discriminative image feature representations. SSSConv first extracts features with a CNN and applies an orthogonal projection to them; it then learns a low-dimensional embedding of their sparse codes as the image representation, which is finally used for classification. The whole framework can be trained end to end in a semi-supervised fashion, with the CNN feature-extraction part and the sparse-coding dictionary-learning part sharing a unified loss function and objective. Conjugate gradient descent, the chain rule, and backpropagation are used to optimize the objective's parameters; the sparse-coding parameters are constrained to lie on a manifold, while the CNN parameters may be defined in Euclidean space or further constrained to an orthogonal space. Experimental results on semi-supervised classification tasks verify the effectiveness of the proposed SSSConv framework, which is competitive with existing methods.

6.
Effective representations of speech segments suffer from confusable languages and low recognition rates on short segments. To meet recognition requirements across different durations and dialects, effective speech-segment representations based on different layers of a deep neural network are proposed. A deep neural network with an intermediate bottleneck layer serves as the front-end feature extractor, and the outputs of its output layer and bottleneck layer are combined to produce different forms of segment-level representation for language identification. The effectiveness of the method is verified on the Arabic dialect datasets of the 2009 and 2011 NIST Language Recognition Evaluations.

7.
To address the shortage of labeled data and the poor discriminability of traditional fault-diagnosis models, this paper proposes a fault-diagnosis method based on manifold-structured semi-supervised extended dictionary learning (MS-SSEDL). First, to mitigate the weak recognition caused by the lack of labeled data, MS-SSEDL introduces a reconstruction-error term for unlabeled data and learns a confidence matrix from them, yielding an extended dictionary with stronger representational power. Second, to enhance the discriminability of MS-SSEDL, the manifold structure of the data is preserved so that a sparse representation of the data's intrinsic geometry is learned, strengthening both the signal representation and the discriminability of the dictionary. Finally, experiments on public datasets of digit images, bearing faults, and gear faults show that MS-SSEDL outperforms other state-of-the-art methods in recognition performance.

8.
An improved Laplacian Eigenmap algorithm for action recognition is proposed. First, the joint data provided by Kinect are used as pose features; the Levenshtein distance is used to modify the Laplacian Eigenmap algorithm of manifold learning, and the data are mapped into a two-dimensional space to obtain an embedding space for the actions to be recognized. Second, a prior model is built from the embedding space and the training data. Finally, with a redesigned particle dynamic model and observation model, actions are recognized by particle filtering. Experimental results show that the method handles repeated actions, occlusion, and actions that differ markedly in amplitude and speed, with an overall recognition rate of 92.4%.
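A small sketch of the distance substitution: a standard Levenshtein distance plus an affinity matrix built from it, assuming each Kinect pose has been quantized to a symbol so an action becomes a string. The pose quantization and the exp(-d) affinity are assumptions, not the paper's exact construction.

```python
import numpy as np

def levenshtein(a, b):
    """Standard Levenshtein (edit) distance via one-row dynamic programming."""
    m, n = len(a), len(b)
    d = np.arange(n + 1)
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            cur = d[j]
            d[j] = min(d[j] + 1,                          # deletion
                       d[j - 1] + 1,                      # insertion
                       prev + (a[i - 1] != b[j - 1]))     # substitution
            prev = cur
    return int(d[n])

# Each action is a string of quantized pose symbols (one per frame); the
# exp(-distance) affinity feeds the neighborhood graph of Laplacian Eigenmaps.
poses = ["aabbc", "aabcc", "ddeef"]
W = np.array([[np.exp(-levenshtein(p, q)) for q in poses] for p in poses])
```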

9.
Deep belief networks (DBNs) are trained by layer-wise unsupervised learning, but training tends to produce many redundant features, which weakens feature extraction. To make the model more interpretable and discriminative, and inspired by analyses of the primate visual cortex, a penalty regularization term is introduced into the likelihood function of the unsupervised stage: while contrastive divergence (CD) training maximizes the objective, a sparsity constraint induces a sparse distribution over the training set, so that intuitive feature representations can be learned from unlabeled data. Furthermore, to address the invariance problem of the sparsity regularizer, an improved sparse deep belief network is proposed, in which a Laplace distribution induces the sparse states of the hidden units and the location parameter of the distribution controls the strength of the sparsity, i.e., the sparsity level varies with how much a hidden unit's activation probability deviates from the given target value. Evaluation on the MNIST and Pendigits handwriting datasets, compared with a variety of existing methods, shows that the method consistently achieves the best recognition accuracy together with good sparsity.

10.
张亮, 杜子平, 张俊, 李杨. 《计算机工程》, 2011, 37(9): 216-217, 220
Affinity propagation has difficulty handling datasets with manifold structure. To address this, an affinity propagation clustering algorithm based on Laplacian Eigenmaps (APPLE) is proposed, which adds manifold-learning capability to standard affinity propagation. Geodesic distances are used to compute the similarities between data points, and Laplacian Eigenmaps are used for dimensionality reduction and feature extraction. Experimental results on image clustering show that APPLE clusters better than standard affinity propagation.
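A sketch of the APPLE pipeline as described: geodesic (graph shortest-path) distances as similarities, a Laplacian Eigenmaps embedding for feature extraction, and affinity propagation on the precomputed similarities. The dataset, neighborhood size, and the exact wiring of the three steps are assumptions.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_moons
from sklearn.manifold import SpectralEmbedding
from sklearn.neighbors import kneighbors_graph

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

# 1) Geodesic distances: shortest paths over a k-nearest-neighbor graph.
knn = kneighbors_graph(X, n_neighbors=10, mode='distance')
geo = shortest_path(knn, directed=False)
finite = np.isfinite(geo)
geo[~finite] = geo[finite].max()          # guard against disconnected components

# 2) Laplacian Eigenmaps embedding for dimensionality reduction / feature extraction.
emb = SpectralEmbedding(n_components=2, affinity='nearest_neighbors',
                        n_neighbors=10).fit_transform(X)

# 3) Affinity propagation on negated geodesic distances used as similarities.
labels = AffinityPropagation(affinity='precomputed',
                             random_state=0).fit_predict(-geo)
```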

11.
丁世飞, 张楠, 史忠植. 《软件学报》, 2017, 28(10): 2599-2610
The extreme learning machine (ELM) is not only an effective classifier; it can also be applied to semi-supervised learning. However, like the Laplacian smooth twin support vector machine, the semi-supervised ELM is a shallow learning algorithm. Deep learning approximates complex functions and alleviates the local-minimum problem of earlier multi-layer neural network algorithms, and it has attracted wide attention in machine learning. The multi-layer extreme learning machine (ML-ELM) combines the ideas of deep learning and the ELM: by stacking ELM autoencoders (ELM-AE) it builds a multi-layer neural network that approximates complex functions without iterative training, so learning is efficient. We introduce the manifold regularization framework into ML-ELM and propose the Laplacian multi-layer extreme learning machine (Lap-ML-ELM). Because ELM-AE does not handle overfitting well, we further introduce weight uncertainty into ELM-AE and propose the weight-uncertainty ELM autoencoder (WU-ELM-AE), which learns more robust features. Finally, building on these two algorithms, we propose the weight-uncertainty Laplacian multi-layer extreme learning machine (WUL-ML-ELM), which stacks WU-ELM-AEs to build the deep model and solves for the output weights within the manifold regularization framework; it clearly improves classification accuracy without requiring much additional time. Experimental results show that both Lap-ML-ELM and WUL-ML-ELM are effective semi-supervised learning algorithms.
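For reference, a sketch of the basic ELM building block that ML-ELM and Lap-ML-ELM stack: random input weights and a closed-form ridge solution for the output weights. The ELM-AE stacking, weight uncertainty, and manifold regularization described above are not reproduced; the hidden size and regularization constant are illustrative.

```python
import numpy as np

def elm_train(X, Y, n_hidden=200, reg=1e-2, seed=0):
    """Basic ELM: random input weights, closed-form ridge solution for output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                               # random hidden-layer features
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 8))
Y = np.eye(3)[rng.integers(0, 3, 100)]                   # one-hot targets
W, b, beta = elm_train(X, Y)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
```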

12.
莫建文, 贾鹏. 《自动化学报》, 2022, 48(8): 2088-2096
To improve the classification performance of semi-supervised deep generative models, a semi-supervised classification model based on a ladder network and an improved tri-training scheme is proposed. Three classifiers are added to the top layer of the ladder network's noisy encoder, and the improved tri-training scheme is used to boost image-classification performance. First, the labeled data are split into three parts by class-stratified sampling, and the model adjusts its parameters by combining the label error on labeled data with the reconstruction error on unlabeled data, training three large-margin softmax classifiers. Next, the improved tri-training scheme assigns pseudo-labels to unlabeled data, gives different weights to the newly labeled data, and enlarges the training set. Finally, the model is updated with the enlarged training set. After training, the classifiers vote with weights to produce the classification result. The ladder-network features obtained by the model have a better low-dimensional manifold representation, which effectively avoids classification errors caused by unevenly distributed samples and strengthens generalization. Experiments on the MNIST, SVHN, and CIFAR10 databases, compared with other semi-supervised deep generative models, show that the proposed model achieves higher classification accuracy.

13.
Ma Xueqi, Tao Dapeng, Liu Weifeng. 《Multimedia Tools and Applications》, 2019, 78(10): 13313-13329

The ever-growing popularity of mobile networks and electronics has prompted intensive research on the management of multimedia data (e.g. text, image, video, audio). This has led to research on semi-supervised learning, which can combine a small amount of labeled data with a large amount of unlabeled data by exploiting the local structure of the data distribution. Manifold regularization and pairwise constraints are representative semi-supervised learning methods. In this paper, we introduce a novel local-structure-preserving approach that considers both manifold regularization and pairwise constraints. Specifically, we construct a new graph Laplacian that, unlike the traditional Laplacian, takes advantage of pairwise constraints. The proposed graph Laplacian better preserves the local geometry of the data distribution and achieves effective recognition. Building on it, we derive graph-regularized classifiers, including support vector machines and kernel least squares as special cases, for action recognition. Experimental results on a multimodal human action database (CAS-YNU-MHAD) show that our proposed algorithms outperform the general algorithms.
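A minimal sketch of folding pairwise constraints into a graph Laplacian: start from a kNN affinity graph, strengthen must-link edges and cut cannot-link edges, then form L = D - W. The constraint weighting (gamma) and the edge-cutting rule are assumed schemes, not the paper's exact construction.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def constrained_laplacian(X, must_link, cannot_link, k=10, gamma=1.0):
    """kNN affinity graph with pairwise constraints folded in:
    must-link edges are strengthened, cannot-link edges are removed."""
    W = kneighbors_graph(X, k, mode='connectivity').toarray()
    W = np.maximum(W, W.T)                               # symmetrize
    for i, j in must_link:
        W[i, j] = W[j, i] = W[i, j] + gamma              # assumed constraint strength
    for i, j in cannot_link:
        W[i, j] = W[j, i] = 0.0
    D = np.diag(W.sum(axis=1))
    return D - W                                         # unnormalized graph Laplacian

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 16))
L = constrained_laplacian(X, must_link=[(0, 1)], cannot_link=[(2, 3)])
```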


14.
Multimedia understanding for high-dimensional data remains challenging because of the redundant features, noise, and insufficient label information such data contain. Graph-based semi-supervised feature learning is an effective approach to this problem. Nevertheless, existing graph-based semi-supervised methods usually depend on a pre-constructed Laplacian matrix and rarely modify it in the subsequent classification tasks. In this paper, a semi-supervised feature selection method based on adaptive local manifold learning is proposed. Compared to the state of the art, the proposed algorithm has two advantages: 1) adaptive local manifold learning and feature selection are integrated jointly into a single framework in which both labeled and unlabeled data are utilized, and the correlations between different components are also considered; 2) a group-sparsity constraint, the l2,1-norm, is imposed to select the most relevant features. We also apply the proposed algorithm to several kinds of multimedia understanding applications. Experimental results demonstrate the effectiveness of the proposed algorithm.
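The group-sparsity penalty mentioned above, the l2,1-norm, is simply the sum of the row-wise l2 norms of the projection matrix; rows driven to zero correspond to discarded features. A small illustration (the example matrix is made up):

```python
import numpy as np

def l21_norm(W):
    """||W||_{2,1}: sum of the l2 norms of the rows of the projection matrix."""
    return np.linalg.norm(W, axis=1).sum()

W = np.array([[0.0, 0.0],      # an all-zero row corresponds to a discarded feature
              [1.0, 2.0],
              [0.1, 0.0]])
print(l21_norm(W))             # 0 + sqrt(5) + 0.1
```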

15.
In recent years, deep neural network models trained on large-scale labeled datasets have shown excellent performance on image tasks, but large amounts of labeled data are expensive and difficult to collect. To make better use of unlabeled data, a semi-supervised learning method called Wasserstein consistency training (WCT) is proposed. The Jensen-Shannon divergence is introduced to emulate co-training and to organize large amounts of unlabeled data, improving co-training efficiency; adversarial examples generated by fast gradient sign method (FGSM) attacks encourage disagreement between the views; and the Wasserstein distance serves as the measure of the network-difference constraint, preventing the deep neural networks from collapsing and making them output smoothly on the low-dimensional manifold. Experimental results show a classification error rate of 0.85% on MNIST and 11.96% on CIFAR-10 using only 4,000 labeled examples, demonstrating that the method performs well for semi-supervised image classification with few labels.
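A minimal PyTorch sketch of the FGSM step used to generate adversarial views for a consistency constraint. The toy classifier, epsilon, and the use of ground-truth labels (in practice pseudo-labels would be used for unlabeled data) are assumptions for illustration, not the paper's WCT setup.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, eps=0.03):
    """Fast gradient sign method: perturb x in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # toy classifier
x = torch.rand(4, 1, 28, 28)                                  # a small batch of images
y = torch.randint(0, 10, (4,))                                # labels (or pseudo-labels)
x_adv = fgsm_perturb(model, x, y)                             # adversarial views for the consistency term
```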

16.
In recent years, learning on manifolds has attracted much attention in the academic community. The idea that real-life data are distributed on a low-dimensional manifold embedded in the ambient space works quite well in practice, with applications such as ranking, dimensionality reduction, semi-supervised learning, and clustering. This paper focuses on ranking on manifolds. Traditional manifold ranking methods learn a ranking function that varies smoothly along the data manifold by using a Laplacian regularizer. However, Laplacian regularization suffers from the issue that the solution is biased towards constant functions. In this work, we propose using second-order Hessian energy as the regularizer for manifold ranking. Hessian energy overcomes this issue by penalizing only accelerated variation of the ranking function along the geodesics of the data manifold. We also develop a manifold ranking framework for general graphs/hypergraphs for which no original feature space (i.e. ambient space) is available. We evaluate our ranking method on the COREL image dataset and a rich-media dataset crawled from Last.fm. The experimental results indicate that our manifold ranking method is effective and outperforms the traditional graph-Laplacian-based ranking method.
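For contrast, the traditional Laplacian-regularized manifold ranking that this paper improves on has a well-known closed form, f* = (I - αS)^(-1) y, with S the symmetrically normalized affinity matrix. A sketch of that baseline (graph construction, α, and the synthetic features are illustrative); the Hessian-energy variant is not reproduced here.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def manifold_ranking(X, query_idx, alpha=0.99, k=10):
    """Laplacian-regularized manifold ranking: f* = (I - alpha * S)^(-1) y."""
    W = kneighbors_graph(X, k, mode='connectivity').toarray()
    W = np.maximum(W, W.T)
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))                      # D^{-1/2} W D^{-1/2}
    y = np.zeros(len(X))
    y[query_idx] = 1.0                                   # the query is the only labeled point
    f = np.linalg.solve(np.eye(len(X)) - alpha * S, y)
    return np.argsort(-f)                                # items ranked by relevance

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 32))                       # stand-in image features
ranking = manifold_ranking(X, query_idx=0)
```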

17.
Graph-based semi-supervised learning (GSSL) has attracted considerable attention in recent years. The performance of a GSSL method relies on the quality of the Laplacian weighted graph (LWR) built from the similarities among the input examples. A key to constructing an effective LWR is the proper selection of the neighborhood size K or ε used to build the kNN graph or ε-neighbor graph on the training samples, which constitutes the fundamental element of the LWR. Specifically, a K or ε that is too large causes the "shortcut" phenomenon, while one that is too small cannot guarantee a complete representation of the manifold structure underlying the data. To address this issue, this study proposes a method called adaptive Laplacian graph trimming (ALGT), which automatically cuts improper inter-cluster shortcut edges while strengthening the connections between intra-cluster samples, so as to adaptively fit a proper LWR from the data. The superiority of the proposed method is substantiated by experiments on synthetic and UCI data sets.
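A heuristic stand-in for the trimming idea: build a kNN distance graph and drop edges much longer than each node's median neighbor distance. The threshold rule (a fixed factor) is an assumption for illustration, not ALGT's adaptive criterion.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def trim_knn_graph(X, k=10, factor=2.0):
    """Drop kNN edges much longer than each node's median neighbor distance."""
    W = kneighbors_graph(X, k, mode='distance').toarray()
    W = np.maximum(W, W.T)
    for i in range(len(X)):
        nz = W[i, W[i] > 0]
        if nz.size == 0:
            continue
        long_edges = W[i] > factor * np.median(nz)       # assumed threshold rule
        W[i, long_edges] = 0.0
        W[long_edges, i] = 0.0
    return W

rng = np.random.default_rng(0)
X = np.vstack([rng.standard_normal((40, 2)),             # two well-separated clusters
               rng.standard_normal((40, 2)) + 6.0])
W_trimmed = trim_knn_graph(X)                            # inter-cluster shortcuts removed
```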

18.
Traditional classifiers, including support vector machines, use only labeled data in training. However, labeled instances are often difficult, costly, or time-consuming to obtain, while unlabeled instances are relatively easy to collect. The goal of semi-supervised learning is to improve classification accuracy by using unlabeled data together with a few labeled data when training classifiers. Recently, the Laplacian support vector machine has been proposed as an extension of the support vector machine to semi-supervised learning. It shares the interpretability drawbacks of the support vector machine, and it performs poorly when the training data contain many non-informative features, because the final classifier is expressed as a linear combination of informative as well as non-informative features. We introduce a variant of the Laplacian support vector machine that is capable of feature selection based on a functional analysis-of-variance decomposition. Through synthetic and benchmark data analysis, we illustrate that our method can be a useful tool in semi-supervised learning.

19.
Extreme learning machine (ELM) works for generalized single-hidden-layer feedforward networks (SLFNs), and its essence is that the hidden layer of SLFNs need not be tuned. But ELM only utilizes labeled data to carry out the supervised learning task. In order to exploit unlabeled data in the ELM model, we first extend the manifold regularization (MR) framework and then demonstrate the relation between the extended MR framework and ELM. Finally, a manifold regularized extreme learning machine is derived from the proposed framework, which maintains the properties of ELM and can be applicable to large-scale learning problems. Experimental results show that the proposed semi-supervised extreme learning machine is the most cost-efficient method. It tends to have better scalability and achieve satisfactory generalization performance at a relatively faster learning speed than traditional semi-supervised learning algorithms.  相似文献   
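A sketch of a manifold-regularized (semi-supervised) ELM in the spirit described: the hidden layer stays random, and the output weights solve a ridge problem with an added graph-Laplacian term computed over labeled and unlabeled samples. The graph construction, penalty weights, and data are assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def ss_elm(X, Y, labeled_idx, n_hidden=200, lam=1e-2, gamma=1e-2, k=10, seed=0):
    """Manifold-regularized ELM: random hidden layer; output weights solve a ridge
    problem with a graph-Laplacian penalty over labeled + unlabeled samples."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                               # hidden outputs for all samples
    A = kneighbors_graph(X, k, mode='connectivity').toarray()
    A = np.maximum(A, A.T)
    L = np.diag(A.sum(axis=1)) - A                       # graph Laplacian
    J = np.zeros((len(X), len(X)))
    J[labeled_idx, labeled_idx] = 1.0                    # fitting loss only on labeled rows
    Y_full = np.zeros((len(X), Y.shape[1]))
    Y_full[labeled_idx] = Y
    beta = np.linalg.solve(
        H.T @ J @ H + lam * np.eye(n_hidden) + gamma * H.T @ L @ H,
        H.T @ J @ Y_full)
    return W, b, beta

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))                       # 20 labeled + 180 unlabeled samples
labeled_idx = np.arange(20)
Y = np.eye(2)[rng.integers(0, 2, 20)]                    # one-hot labels for labeled samples
W, b, beta = ss_elm(X, Y, labeled_idx)
```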


