     

Laplacian Ladder Networks
Citation: HU Cong, WU Xiao-Jun, SHU Zhen-Qiu, CHEN Su-Gen. Laplacian ladder networks [J]. Journal of Software, 2020, 31(5): 1525-1535.
Authors: HU Cong  WU Xiao-Jun  SHU Zhen-Qiu  CHEN Su-Gen
Affiliation: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China; School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, China; School of Computer Engineering, Jiangsu University of Technology, Changzhou 213001, China; School of Mathematics and Computational Science, Anqing Normal University, Anqing 246133, China
Funding: National Natural Science Foundation of China (61373055, 61672265, 61603159, 61702012, U1836218); 111 Project of the Ministry of Education (B12018); Natural Science Foundation of Jiangsu Province (BK20160293); Support Program for Outstanding Young Talents in Universities of Anhui Province (gxyq2017026)

Abstract: The ladder network is not only an effective deep-learning-based feature extractor but can also be applied to semi-supervised learning. Deep learning approximates complicated functions while alleviating the tendency of multilayer neural networks to become trapped in poor local minima. Traditional methods such as autoencoders and restricted Boltzmann machines tend to ignore the low-dimensional manifold structure of high-dimensional data and often yield meaningless representations that are difficult to use in subsequent prediction or recognition tasks. From the perspective of manifold learning, this paper proposes a deep representation learning method based on the ladder network, the Laplacian ladder network (LLN). During training, LLN not only injects noise into each encoder layer and reconstructs it, but also imposes a graph Laplacian constraint on each reconstruction layer, embedding the manifold structure into multilayer feature learning to improve the robustness and discriminability of the extracted features. Under the condition of limited labeled data, LLN fuses the supervised loss and the unsupervised losses into a unified framework for semi-supervised learning. Experiments on the handwritten digit dataset MNIST and the object recognition dataset CIFAR-10 show that LLN achieves better classification performance than the ladder network and other semi-supervised methods, and is an effective semi-supervised learning algorithm.

Keywords: ladder network  manifold regularization  graph Laplacian  deep autoencoder  semi-supervised learning
Received: 2018-05-03
Revised: 2018-06-16
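The graph Laplacian constraint described in the abstract penalizes learned representations of similar (neighboring) samples that lie far apart, which is what embeds the manifold structure into feature learning. A minimal NumPy sketch of such a penalty, assuming a precomputed symmetric affinity matrix `W`; the function names and the tiny example affinities are illustrative, not taken from the paper:

```python
import numpy as np

def graph_laplacian(W):
    """Unnormalized graph Laplacian L = D - W for a symmetric affinity matrix W."""
    D = np.diag(W.sum(axis=1))  # degree matrix
    return D - W

def laplacian_penalty(H, W):
    """Manifold regularization term tr(H^T L H) for representations H (one row per sample).

    Equals 0.5 * sum_ij W[i, j] * ||H[i] - H[j]||^2, so it is large when
    strongly connected samples have distant representations.
    """
    L = graph_laplacian(W)
    return np.trace(H.T @ L @ H)

# Two samples connected with affinity 1, with 1-D representations 0 and 2:
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])
H = np.array([[0.0],
              [2.0]])
print(laplacian_penalty(H, W))  # 0.5 * (1*4 + 1*4) = 4.0
```

In the semi-supervised setting sketched by the abstract, a term of this form would be added, with a weighting coefficient, to the supervised classification loss and the per-layer denoising reconstruction losses of the ladder network.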
This article is indexed in Wanfang Data and other databases.