Similar Literature
Found 10 similar documents (search time: 265 ms)
1.
Existing deep-neural-network-based salient object detection algorithms ignore the structural information of objects, so the saliency maps fail to cover the entire object region and detection accuracy degrades. To address this problem, a structure-aware deep salient object detection algorithm is proposed. The algorithm is built on a multi-stream deep neural network consisting of four parts: a feature extraction network, an object skeleton detection sub-network, a salient object detection sub-network, and cross-task connection components. First, during both training and testing of the salient object detection sub-network, the skeleton detection sub-network learns the structural information of objects, and the cross-task connection components allow the salient object detection sub-network to automatically encode the information learned by the skeleton sub-network, so that it perceives the overall structure of objects and overcomes incomplete detection of object regions. Second, to further improve accuracy, a fully connected conditional random field (CRF) is used to refine the detection results. Experimental results on three public datasets show that the algorithm outperforms existing deep-learning-based methods in both detection accuracy and runtime efficiency, indicating that capturing object structural information in deep neural networks is meaningful and helps improve model accuracy.
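A minimal PyTorch-style sketch of how such a multi-stream design could be wired, assuming a shared backbone, simple convolutional sub-networks, and a cross-task connection realized as feature concatenation; all module names and channel widths are illustrative, not the authors' implementation:

```python
# Hedged sketch of a multi-stream, structure-aware saliency network.
# The backbone, channel widths, and the "cross-task connection" (here a
# simple feature concatenation) are assumptions for illustration only.
import torch
import torch.nn as nn

class StructureAwareSaliencyNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared feature extraction stream.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        # Skeleton detection sub-network (learns object structure).
        self.skeleton_head = nn.Conv2d(64, 1, 3, padding=1)
        # Saliency sub-network; the cross-task connection feeds it the
        # backbone features concatenated with the skeleton prediction.
        self.saliency_head = nn.Sequential(
            nn.Conv2d(64 + 1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, x):
        feats = self.backbone(x)
        skeleton = torch.sigmoid(self.skeleton_head(feats))
        saliency = torch.sigmoid(
            self.saliency_head(torch.cat([feats, skeleton], dim=1)))
        return saliency, skeleton

# Joint training would sum a saliency loss and a skeleton loss, e.g.
# loss = bce(saliency, sal_gt) + bce(skeleton, skel_gt); the dense CRF
# refinement mentioned in the abstract would be applied as a post-process.
```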

2.
This paper presents a framework for incremental neural learning (INL) that allows a base neural learning system to incrementally learn new knowledge from new data alone without forgetting the existing knowledge. Upon subsequent encounters with new data examples, INL utilizes prior knowledge to direct its incremental learning. A number of critical issues are addressed, including when to make the system learn new knowledge, how to learn new knowledge without forgetting existing knowledge, how to perform inference using both the existing and the newly learned knowledge, and how to detect and deal with aged learned systems. To validate the proposed INL framework, we use backpropagation (BP) as the base learner and a multi-layer neural network as the base intelligent system. INL has several advantages over existing incremental algorithms: it can be applied to a broad range of neural network systems beyond BP-trained neural networks; it retains the existing neural network structures and weights even during incremental learning; the neural network committees generated by INL do not interact with one another, and each sees the same inputs and error signals at the same time; this limited communication makes the INL architecture attractive for parallel implementation. We have applied INL to two vehicle fault diagnostics problems: end-of-line testing in auto assembly plants and onboard vehicle misfire detection. The experimental results demonstrate that the INL framework can successfully perform incremental learning from unbalanced and noisy data. To show the general capabilities of INL, we also applied it to three general machine learning benchmark data sets, where the INL systems showed good generalization in comparison with other well-known machine learning algorithms.
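A rough Python sketch of the committee flavor of this idea, assuming frozen previously trained members, a new member added only when the current committee performs poorly on newly arrived data, and probability averaging at inference time; the threshold rule, the scikit-learn MLP base learner, and the assumption that every data batch contains all classes are illustrative choices, not the paper's exact INL rules:

```python
# Illustrative committee-style incremental learning (not the authors'
# exact INL framework): old members stay untouched; a new member is
# trained only when the existing committee is inadequate on new data.
import numpy as np
from sklearn.neural_network import MLPClassifier  # BP-trained base learner

class SimpleCommittee:
    def __init__(self, error_threshold=0.2):
        self.members = []
        self.error_threshold = error_threshold

    def _committee_error(self, X, y):
        return 1.0 - (self.predict(X) == y).mean()

    def learn(self, X_new, y_new):
        # "When to learn": only if the existing committee is inadequate.
        if self.members and self._committee_error(X_new, y_new) < self.error_threshold:
            return
        member = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
        member.fit(X_new, y_new)          # existing members are not retrained
        self.members.append(member)

    def predict(self, X):
        # Members do not interact; their probability outputs are averaged.
        # Assumes every batch contained all classes, so class orderings match.
        probs = np.mean([m.predict_proba(X) for m in self.members], axis=0)
        return self.members[0].classes_[probs.argmax(axis=1)]
```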

3.
How to improve the accuracy of face attribute recognition in natural or unconstrained environments is an important problem for applying face attributes. In daily life, uncontrollable factors such as head pose and illumination strongly affect face attribute recognition, and improving recognition accuracy under these factors is the key issue in studying face attribute recognition. Convolutional neural networks (CNNs) have achieved remarkable results in image classification. This paper constructs a new network architecture using multi-level sub-networks and a ranking-based Dropout mechanism. The architecture is robust to variations such as head pose, achieves good results on the CelebA and LFWA datasets, and greatly reduces the network size.

4.
Objective: Traditional image-processing approaches to texture filtering have difficulty distinguishing strong-gradient textures from object structure, while deep-learning methods rely on training sets generated in an unprincipled way and on rather coarse model representations. This paper therefore designs a directional filtering-scale prediction model for texture smoothing and generates a new labeled texture-filtering dataset. Method: Various texture patches are filled, connected region by connected region, into existing structure images to generate a texture-filtering dataset suited to model training. The directional filtering-scale prediction model consists of a scale-aware sub-network and an image-smoothing sub-network. The filtering-scale map predicted by the former not only reflects whether a pixel and its neighbors belong to the same texture but also implicitly encodes whether the pixel is a structure pixel. The latter takes the stack of the filtering-scale map and the original image as input and quickly produces the texture-filtering result with only a few convolutional layers. Results: Compared with seven algorithms on our texture-filtering dataset, the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are 2.79 dB and 0.0133 higher than the second-best method, the mean squared error (MSE) is 6.8638 lower, and the runtime is 0.002 s shorter. Comparisons on other datasets likewise show that the proposed algorithm better preserves structure while smoothing texture. Comparing the same network model trained on different datasets confirms that our texture-filtering dataset strengthens the model's ability to distinguish strong-gradient textures from object structure. Conclusion: The constructed texture-filtering dataset enables the model to better separate strong-gradient textures from object structure and improves its generalization ability; the designed directional filtering-scale prediction model outperforms most existing texture-smoothing methods, particularly in suppressing strong-gradient textures and preserving weak-gradient structure.
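A small PyTorch-style sketch of the two-sub-network layout described above, where a scale-aware sub-network predicts a per-pixel filtering-scale map and a smoothing sub-network consumes the image stacked with that map; layer counts and channel widths are illustrative assumptions, not the published model:

```python
# Hedged sketch of the scale-prediction + smoothing pipeline.
import torch
import torch.nn as nn

class ScaleAwareSubnet(nn.Module):
    """Predicts a per-pixel filtering-scale map from the input image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # scale in (0, 1)
        )

    def forward(self, img):
        return self.net(img)

class SmoothingSubnet(nn.Module):
    """Takes the image stacked with the scale map; outputs the filtered image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, img, scale_map):
        return self.net(torch.cat([img, scale_map], dim=1))

class TextureSmoother(nn.Module):
    def __init__(self):
        super().__init__()
        self.scale_subnet = ScaleAwareSubnet()
        self.smooth_subnet = SmoothingSubnet()

    def forward(self, img):
        scale_map = self.scale_subnet(img)          # where/how much to smooth
        return self.smooth_subnet(img, scale_map), scale_map
```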

5.
Traditional nonlinear manifold learning methods have achieved great success in dimensionality reduction and feature extraction, but most of them operate in batch mode. When new samples are observed, batch methods must be recomputed from scratch, which is computationally expensive, especially when the number or dimension of the input samples is large. This paper presents incremental learning algorithms for Laplacian eigenmaps, which compute a low-dimensional representation of a data set by optimally preserving local neighborhood information in a certain sense. A sub-manifold analysis algorithm, together with an alternative formulation of the linear incremental method, is proposed to learn new samples incrementally. A locally linear reconstruction mechanism is introduced to update the embedding results of existing samples. The algorithms are easy to implement and the computation procedure is simple. Simulation results verify the efficiency and accuracy of the proposed algorithms.
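A minimal numpy sketch of the locally linear reconstruction idea: a new sample is embedded with the same neighbor weights that best reconstruct it in the input space. The neighborhood size, the regularization term, and the omission of the sub-manifold analysis step are simplifying assumptions, not the paper's exact algorithm:

```python
# Embed a new point by reusing LLE-style reconstruction weights on the
# neighbours' existing low-dimensional embeddings.
import numpy as np

def embed_new_sample(x_new, X_old, Y_old, k=5, reg=1e-3):
    """x_new: (d,); X_old: (n, d) existing samples; Y_old: (n, m) their embeddings."""
    dists = np.linalg.norm(X_old - x_new, axis=1)
    idx = np.argsort(dists)[:k]                 # k nearest existing samples
    Z = X_old[idx] - x_new                      # centred neighbourhood
    G = Z @ Z.T                                 # local Gram matrix
    G += reg * np.trace(G) * np.eye(k)          # regularize for numerical stability
    w = np.linalg.solve(G, np.ones(k))
    w /= w.sum()                                # reconstruction weights sum to 1
    return w @ Y_old[idx]                       # apply the weights to the embeddings

# Example: embed one new point given an existing 2-D embedding of 100 points.
rng = np.random.default_rng(0)
X_old = rng.normal(size=(100, 10))
Y_old = rng.normal(size=(100, 2))               # placeholder embedding
y_new = embed_new_sample(rng.normal(size=10), X_old, Y_old)
```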

6.
Zhou Peng. Application Research of Computers (《计算机应用研究》), 2023, 40(6): 1728-1733
The classification performance reported so far for multi-class classification of finger motor imagery EEG signals has yet to reach a usable level. Based on a detailed analysis of the multiple components of EEG signals across time scales, a self-supervised sub-network for extracting signal sub-segments is designed; the extracted sub-segment is fed into a second sub-network for signal classification, and the two sub-networks are combined into a self-supervised hybrid multi-task deep network. During training, the sub-segment extraction sub-network extracts different sub-segments from each EEG signal, and the downstream classification sub-network judges whether a sub-segment is optimal, so the sub-segment position is adjusted automatically; the total loss is a weighted sum of the two sub-networks' losses, and the overall learning algorithm yields the optimal sub-segment and the best classification performance. In the validation and test phases, the sub-segment extraction sub-network extracts sub-segments with the trained parameters and passes them to the classification sub-network. Performance is verified on "the largest SCP data of Motor-Imagery" and on Data sets 4 of BCI Competition IV. On the SCP dataset, the average test classification accuracy over all subjects exceeds 70% for the 3-finger task, is about 60% for the 4-finger task, and about 50% for the 5-finger task, a clear improvement over existing reports. This confirms that the network can effectively extract motor-imagery EEG sub-segments and has good classification performance and generalization.
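A hedged PyTorch-style sketch of the two-sub-network, weighted-loss idea: the first sub-network scores each time step (a soft sub-segment mask) and the second classifies the masked signal; the total loss is a weighted sum of the two terms. The soft-mask formulation, the sparsity term, and the loss weights are illustrative assumptions, not the paper's exact self-supervised formulation:

```python
# Sub-segment scoring + classification trained with a weighted joint loss.
import torch
import torch.nn as nn

class SubSegmentExtractor(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.score = nn.Conv1d(channels, 1, kernel_size=9, padding=4)

    def forward(self, x):                        # x: (batch, channels, time)
        return torch.sigmoid(self.score(x))      # soft mask over time steps

class SegmentClassifier(nn.Module):
    def __init__(self, channels=64, n_classes=3):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(channels, 32, 7, padding=3), nn.ReLU())
        self.head = nn.Linear(32, n_classes)

    def forward(self, x, mask):
        h = self.conv(x * mask)                  # classify only the kept sub-segment
        return self.head(h.mean(dim=2))          # global average pooling over time

extractor, classifier = SubSegmentExtractor(), SegmentClassifier()
x = torch.randn(8, 64, 1000)                     # 8 trials, 64 channels, 1000 samples
labels = torch.randint(0, 3, (8,))
mask = extractor(x)
logits = classifier(x, mask)
# Weighted multi-task objective: classification loss plus a term that pushes
# the extractor toward a compact sub-segment (the 0.1 weight is assumed).
loss = nn.functional.cross_entropy(logits, labels) + 0.1 * mask.mean()
loss.backward()
```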

7.
A Survey of Few-Shot Learning for Image Classification   (cited 2 times: 0 self-citations, 2 by others)
Image classification has a very wide range of application scenarios, and in many of them it is difficult to collect enough data to train a model; few-shot learning addresses image classification when training data are scarce. This paper gives a detailed survey of recent few-shot image classification algorithms and, according to the modeling approach, divides them into two categories: convolutional neural network models and graph neural network models. CNN-based algorithms comprise four learning paradigms: transfer learning, meta learning, dual learning, and Bayesian learning. GNN-based algorithms were originally designed for non-Euclidean data, but some researchers have applied them to few-shot image classification on Euclidean data, and related results are still relatively scarce. In addition, this paper summarizes the datasets appearing in the literature and compares the performance of existing algorithms through experimental results. Finally, the difficulties and future research trends of few-shot image classification are discussed.

8.
Zeng-Guang. Automatica, 2001, 37(12)
A recurrent neural network for dynamical hierarchical optimization of nonlinear discrete large-scale systems is presented. The proposed neural network consists of hierarchically structured sub-networks: one coordination sub-network at the upper level and several local optimization sub-networks at the lower level. In particular, the coordination sub-network and the local optimization sub-networks work simultaneously. This feature lets the proposed method outperform conventional iterative algorithms in computational efficiency, since those algorithms usually involve alternating waiting periods between the coordination and local optimization processes. Moreover, the state equations of the subsystems of the large-scale system are embedded into their corresponding local optimization sub-networks. This embedding technique not only overcomes the difficulty of handling the constraints imposed by the state equations but also significantly reduces the network size. We present a stability analysis proving that the neural network is asymptotically stable and that this stable state corresponds to the optimal solution of the original optimal control problem. Finally, we illustrate the performance of the proposed method with an example.

9.
Several recent works have studied feature evolvable learning. They usually assume that features do not vanish or appear arbitrarily; instead, old features vanish and new features emerge as the hardware device collecting the data is replaced. However, existing learning algorithms for feature evolution only utilize the first-order information of data streams and ignore the second-order information, which can reveal correlations between features and thus significantly improve classification performance. We propose a Confidence-Weighted learning for Feature Evolution (CWFE) algorithm to solve this problem. First, second-order confidence-weighted learning is introduced to update the prediction model. Next, to make full use of the learned model, a linear mapping is learned in the overlapping period to recover the old features. The existing model is then updated with the recovered old features and, at the same time, a new prediction model is learned with the new features. Furthermore, two ensemble methods are introduced to utilize the two models. Finally, experimental studies show that the proposed algorithms outperform existing feature evolvable learning algorithms.
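An illustrative numpy sketch of the feature-recovery step: during the overlapping period both feature spaces are observed, so a least-squares linear map from new features to old features can be fitted; afterwards the old-feature model is applied to recovered features and combined with a model trained on the new features. The plain linear models and the fixed weighted-average ensemble are assumptions, not the exact CWFE update rules:

```python
# Linear recovery of vanished old features plus a simple two-model ensemble.
import numpy as np

rng = np.random.default_rng(0)
n_overlap, d_old, d_new = 200, 20, 15
X_old = rng.normal(size=(n_overlap, d_old))          # old-sensor features (overlap period)
X_new = rng.normal(size=(n_overlap, d_new))          # new-sensor features (overlap period)
w_old = rng.normal(size=d_old)                       # model already trained on old features

# Fit the linear recovery map M: new features -> old features (least squares).
M, *_ = np.linalg.lstsq(X_new, X_old, rcond=None)

def ensemble_score(x_new, w_new, alpha=0.5):
    """Combine the old model (on recovered features) with the new-feature model."""
    recovered_old = x_new @ M                        # approximate vanished features
    return alpha * (recovered_old @ w_old) + (1 - alpha) * (x_new @ w_new)

w_new = rng.normal(size=d_new)                       # model trained on the new features
print(ensemble_score(rng.normal(size=d_new), w_new))
```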

10.
Incremental learning has been widely addressed in the machine learning literature to cope with learning tasks where the learning environment is ever changing or training samples become available over time. However, most research explores incremental learning with statistical algorithms or neural networks rather than evolutionary algorithms. The work in this paper employs genetic algorithms (GAs) as basic learning algorithms for incremental learning within one or more classifier agents in a multi-agent environment. Four new approaches with different initialization schemes are proposed. They keep the old solutions and use an "integration" operation to integrate them with new elements to accommodate new attributes, while biased mutation and crossover operations are adopted to further evolve a reinforced solution. The simulation results on benchmark classification data sets show that the proposed approaches can handle the arrival of new input attributes and integrate them with the original input space. It is also shown that the proposed approaches can be successfully used for incremental learning and improve classification rates compared to retraining the GA. Possible applications for continuous incremental training and feature selection are also discussed.
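A hedged Python sketch of one possible reading of the "integration" idea: existing chromosomes (one gene per known attribute) are kept and extended with randomly initialized genes for newly arrived attributes, and mutation is then biased toward the new genes so they evolve faster than the preserved ones. The encoding, rates, and operators are illustrative assumptions, not the paper's exact scheme:

```python
# Extend chromosomes for new attributes, then apply a mutation biased
# toward the newly added genes.
import numpy as np

rng = np.random.default_rng(1)

def integrate(population, n_new_genes):
    """Append genes for new input attributes to every existing chromosome."""
    new_genes = rng.uniform(-1, 1, size=(population.shape[0], n_new_genes))
    return np.hstack([population, new_genes])

def biased_mutation(population, n_old_genes, p_old=0.01, p_new=0.2, scale=0.1):
    """Mutate new genes with a much higher probability than preserved old genes."""
    probs = np.full(population.shape[1], p_old)
    probs[n_old_genes:] = p_new
    mask = rng.random(population.shape) < probs      # per-gene mutation mask
    return population + mask * rng.normal(0, scale, size=population.shape)

pop = rng.uniform(-1, 1, size=(20, 8))               # 20 chromosomes, 8 old attributes
pop = integrate(pop, n_new_genes=3)                  # 3 new attributes arrive
pop = biased_mutation(pop, n_old_genes=8)
print(pop.shape)                                     # (20, 11)
```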
