19 similar documents found; search time: 140 ms
1.
2.
Time-series data are non-discrete, temporally correlated, and high-dimensional in feature space. Most existing classification methods require complex data processing or feature engineering and do not account for the multi-scale temporal characteristics of time series or the temporal dependencies between sequence elements. By combining a convolutional neural network (CNN) with the bidirectional gated recurrent unit (BiGRU) of recurrent neural networks, a new end-to-end deep learning model, BiGRU-FCN, is proposed for classifying univariate time series. It requires no complex preprocessing and captures multiple kinds of feature information through different network operations: the CNN extracts spatial features over the temporal dimension, while the bidirectional recurrent network captures bidirectional temporal dependencies along the sequence. The model is evaluated on a large number of benchmark datasets; the results show that, compared with a variety of existing methods, the proposed model achieves higher accuracy and strong classification performance.
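The abstract describes the BiGRU-FCN architecture only at a high level. As a hedged illustration of the two-branch idea (a convolutional branch extracting features over the temporal dimension, and a bidirectional GRU whose final states are concatenated with the pooled conv features), here is a minimal NumPy sketch. The hidden size, kernels, and the weight sharing between the two GRU directions are arbitrary assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 4                                            # hidden size (arbitrary assumption)
KERNELS = [rng.standard_normal(3), rng.standard_normal(5)]
PARAMS = tuple(0.1 * rng.standard_normal(s)
               for s in [(H, 1), (H, H), (H, 1), (H, H), (H, 1), (H, H)])

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_final_state(x, Wz, Uz, Wr, Ur, Wh, Uh):
    """Run a minimal GRU over x of shape (T, 1); return the final hidden state."""
    h = np.zeros(Uz.shape[0])
    for t in range(x.shape[0]):
        z = sigmoid(Wz @ x[t] + Uz @ h)          # update gate
        r = sigmoid(Wr @ x[t] + Ur @ h)          # reset gate
        h_cand = np.tanh(Wh @ x[t] + Uh @ (r * h))
        h = (1 - z) * h + z * h_cand
    return h

def fcn_features(x):
    """Convolutional branch: 1-D convolutions + ReLU + global average pooling."""
    feats = []
    for k in KERNELS:
        m = len(k)
        conv = np.array([x[t:t + m] @ k for t in range(len(x) - m + 1)])
        feats.append(np.maximum(conv, 0).mean())
    return np.array(feats)

def bigru_fcn_features(x):
    """Concatenate conv features with forward and backward GRU final states."""
    seq = x[:, None]                             # univariate series -> (T, 1)
    h_fwd = gru_final_state(seq, *PARAMS)
    h_bwd = gru_final_state(seq[::-1], *PARAMS)  # shared weights: a simplification
    return np.concatenate([fcn_features(x), h_fwd, h_bwd])

x = rng.standard_normal(50)                      # a toy univariate time series
feats = bigru_fcn_features(x)                    # 2 conv + 4 fwd + 4 bwd = 10 features
```

In the actual model a softmax classifier would sit on top of this concatenated feature vector; only the feature extraction is sketched here.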
3.
To keep equipment running normally and accurately predict the remaining useful life of bearings, a life-prediction model combining a two-dimensional convolutional neural network (CNN) with an improved WaveNet is proposed. To overcome the vanishing-gradient problem that unoptimized recurrent networks tend to suffer during training, the model adopts the WaveNet temporal network structure. Since the original WaveNet structure is not suited to rolling-bearing vibration data, it is modified and combined with a 2-D CNN for rolling-bearing life prediction. The model uses the 2-D convolutional network to extract features from the one-dimensional vibration sequence; the features are then fed into WaveNet to predict the bearing's remaining life. Compared with deep recurrent networks, the improved model is more computationally efficient and more accurate, and it also predicts more accurately than the original CNN-WaveNet-O model. Relative to a deep long short-term memory network, the improved method reduces the root-mean-square error of the predictions by 11.04% and the scoring function by 11.34%.
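The abstract does not detail its WaveNet modifications, but the core WaveNet ingredient it builds on is the dilated causal convolution, which avoids recurrence (and thus the vanishing-gradient issue mentioned above) while still covering long temporal contexts. A minimal NumPy sketch, omitting gated activations, residual connections, and the paper's changes:

```python
import numpy as np

def dilated_causal_conv(x, w, dilation):
    """Causal 1-D convolution: y[t] mixes x[t], x[t-d], x[t-2d], ... only."""
    k, pad = len(w), (len(w) - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])      # left padding blocks future leakage
    return np.array([sum(w[i] * xp[pad + t - i * dilation] for i in range(k))
                     for t in range(len(x))])

def dilated_stack(x, weights):
    """Layers with dilations 1, 2, 4, ...: the receptive field grows exponentially."""
    y = x
    for layer, w in enumerate(weights):
        y = np.tanh(dilated_causal_conv(y, w, 2 ** layer))
    return y
```

With L layers of kernel size 2, the receptive field is 2^L samples, which is why WaveNet-style stacks can summarize long vibration histories cheaply.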
4.
Massive text analysis is an important means of understanding big data and discovering its value. Text classification, a classic problem in natural language processing, has attracted wide attention from researchers, and the excellent performance of artificial neural networks in text analysis has made them the main research direction. Against this background, this paper reviews the development of mainstream methods for text classification, including convolutional neural networks, recurrent neural networks, recursive neural networks, and pre-trained models, and compares the classification performance of different models on commonly used datasets. The comparison shows that using neural network architectures to learn text features automatically avoids laborious manual feature engineering and improves classification performance. On this basis, future research directions for text classification are discussed.
5.
To address the low learning efficiency of first-order optimization algorithms and the excessive time and space overhead of second-order algorithms for recurrent neural networks (RNNs), a new mini-batch recursive least squares (RLS) optimization algorithm is proposed. The algorithm back-propagates a pre-activation (linear-output) error instead of the conventional activated output error and, using the equivalent gradient of a weighted linear least-squares objective with respect to each hidden layer's linear output, derives mini-batch RLS solutions for the RNN parameters layer by layer. Compared with stochastic gradient descent (SGD), the algorithm adds only one covariance matrix at the hidden layer and one at the output layer, so its time and space complexity are only about three times those of SGD. In addition, solutions are given for adapting the forgetting factor and for mitigating overfitting. Simulation results show that, for both classification and prediction tasks on sequence data, the proposed algorithm converges faster than mainstream first-order optimization algorithms and is more robust to hyperparameter settings.
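The layer-wise derivation above is specific to the paper, but the core building block it extends, a mini-batch recursive least-squares update maintaining one covariance matrix per layer with a forgetting factor, is standard. A sketch for a single linear layer (all names and the noiseless demo are illustrative assumptions):

```python
import numpy as np

class MiniBatchRLS:
    """Mini-batch RLS for a linear map: fits W from batches X (n, d) -> Y (n, m)."""
    def __init__(self, d, m, lam=1.0, delta=1e4):
        self.W = np.zeros((d, m))
        self.P = delta * np.eye(d)   # covariance matrix (the extra memory RLS needs)
        self.lam = lam               # forgetting factor

    def update(self, X, Y):
        # Batch Kalman-gain form of the RLS recursion.
        PXt = self.P @ X.T
        G = PXt @ np.linalg.inv(self.lam * np.eye(len(X)) + X @ PXt)
        self.W += G @ (Y - X @ self.W)
        self.P = (self.P - G @ X @ self.P) / self.lam

rng = np.random.default_rng(0)
d, m = 3, 2
W_true = rng.standard_normal((d, m))
rls = MiniBatchRLS(d, m)
for _ in range(50):                  # stream noiseless mini-batches
    X = rng.standard_normal((8, d))
    rls.update(X, X @ W_true)
```

After a few dozen batches `rls.W` recovers the generating weights; the paper applies this kind of recursion to each RNN layer via the equivalent linear-output gradients.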
6.
Advances in Deep Memory Networks (cited by: 3; self-citations: 0; citations by others: 3)
In recent years, deep neural networks have developed rapidly and been applied in more and more fields. When handling prediction problems with sequential dependencies, deep models must memorize previously learned information. In ordinary neural network models, much key information is lost as data pass through many neuron nodes, so neural network models with memory capability are needed; we refer to them collectively as memory networks. This paper first introduces the basic memory-network models, including the recurrent neural network (RNN), long short-term memory (LSTM), the Neural Turing Machine (NTM), the Memory Network (MN), and the Transformer. RNNs and LSTMs memorize information by processing the previous time step's state in their hidden units; the NTM and MN memorize by using external memory; and the Transformer uses attention mechanisms for selective memory. These models are compared, and the problems and shortcomings of each memory mechanism are analyzed. Then, organized by base model, common memory-network models are systematically described, classified, and summarized, covering their architectures and algorithms. Applications of memory networks in different domains and scenarios are then introduced, and finally future research directions for memory networks are discussed.
7.
Long short-term memory (LSTM) is a recurrent neural network that can store sequence information over long periods and has been widely applied to language modeling, speech recognition, machine translation, and other tasks. This paper first reviews how prior work extended the LSTM memory block over syntactic parse trees to obtain tree-structured LSTM models that capture and store deep semantic-structure information of sentences. Then, to handle polarity shifts between words in a sentence, polarity-shift information is added to the tree-structured LSTM, yielding a polarity-shift tree-structured LSTM that better captures sentiment information for sentence classification. Experiments show that on the Stanford Sentiment Treebank, the proposed polarity-shift tree-structured LSTM outperforms LSTM, recursive neural networks, and other models in sentence classification.
8.
Power-grid fault alarm information is dense, and existing methods do not discretize the data, which makes the classification task harder and weakens the ability to classify samples. A recurrent-neural-network-based method for intelligent classification of power-grid fault alarm information is proposed. A hybrid ontology-integration model is built, and an ontology-based approach is used to integrate the alarm information; the integrated data are then denoised, imputed, and discretized. With an optimized recurrent network structure and parameters, the recurrent neural network model then classifies the fault alarm information. The results show that the proposed method achieves higher F1 and G-means values, far lower classification time than existing methods, and more stable classification, demonstrating good practical value.
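The abstract emphasizes imputing and discretizing the integrated alarm data before classification but does not specify the scheme. A minimal sketch of one common choice, median imputation followed by equal-width binning (both are assumptions, not the paper's method):

```python
import numpy as np

def discretize(values, n_bins=5):
    """Equal-width discretization; NaNs are imputed with the column median first."""
    v = np.asarray(values, dtype=float)
    v = np.where(np.isnan(v), np.nanmedian(v), v)   # simple imputation step
    edges = np.linspace(v.min(), v.max(), n_bins + 1)[1:-1]
    return np.digitize(v, edges)                     # integer labels 0..n_bins-1
```

The resulting integer labels can be one-hot encoded and fed to the recurrent classifier, which is the role discretization plays in the pipeline described above.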
9.
Recurrent neural networks (RNNs) can compose songs, but the results lack global coherence and structural completeness, whereas long short-term memory (LSTM) captures exactly such global characteristics. Taking Chinese pentatonic folk music as the subject, this work modifies the music input representation, uses an improved LSTM as the model, trains it in different ways, and finally harmonizes a given melody.
10.
11.
12.
Markovian architectural bias of recurrent neural networks (cited by: 5; self-citations: 0; citations by others: 5)
In this paper, we elaborate upon the claim that clustering in the recurrent layer of recurrent neural networks (RNNs) reflects meaningful information-processing states even prior to training. By concentrating on activation clusters in RNNs, while not throwing away the continuous state-space network dynamics, we extract predictive models that we call neural prediction machines (NPMs). When RNNs with sigmoid activation functions are initialized with small weights (a common technique in the RNN community), the clusters of recurrent activations emerging prior to training are indeed meaningful and correspond to Markov prediction contexts. In this case, the extracted NPMs correspond to a class of Markov models, called variable memory length Markov models (VLMMs). In order to appreciate how much information has really been induced during the training, the RNN performance should always be compared with that of VLMMs and NPMs extracted before training as the base models. Our arguments are supported by experiments on a chaotic symbolic sequence and a context-free language with a deep recursive structure.
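The extraction procedure described above can be sketched end to end in a toy form: run an *untrained* small-weight RNN over a symbol sequence, cluster its recurrent activations, and read off next-symbol frequencies per cluster. The network size, cluster count, and the simple k-means below are illustrative assumptions; the VLMM correspondence and the paper's benchmarks are beyond this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def rnn_states(seq, n_hidden=4, scale=0.1):
    """Hidden states of an untrained RNN with small random weights (sketchy NPM setup)."""
    W_in = scale * rng.standard_normal((n_hidden, 2))   # one-hot binary symbols
    W_h = scale * rng.standard_normal((n_hidden, n_hidden))
    h, states = np.zeros(n_hidden), []
    for s in seq:
        h = np.tanh(W_in @ np.eye(2)[s] + W_h @ h)
        states.append(h.copy())
    return np.array(states)

def kmeans(X, k, iters=20):
    """Tiny k-means over recurrent activations; returns final cluster labels."""
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[lab == j].mean(0) if np.any(lab == j) else C[j]
                      for j in range(k)])
    return np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)

def npm(seq, k=4):
    """Neural prediction machine: next-symbol distribution per activation cluster."""
    states = rnn_states(seq)
    lab = kmeans(states[:-1], k)            # cluster of the state after symbol t
    table = np.ones((k, 2))                 # Laplace-smoothed counts
    for c, nxt in zip(lab, seq[1:]):
        table[c, nxt] += 1
    return table / table.sum(1, keepdims=True)

seq = [0]
for _ in range(399):                        # a sticky two-state source for the demo
    p = 0.9 if seq[-1] == 0 else 0.2
    seq.append(0 if rng.random() < p else 1)
P = npm(seq)
```

Each row of `P` is the prediction context associated with one activation cluster, which is exactly the "meaningful state prior to training" claim in miniature.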
13.
Recurrent neural network technique for behavioral modeling of power amplifier with memory effects
A new technique for behavioral modeling of power amplifier (PA) with short‐ and long‐term memory effects is presented here using recurrent neural networks (RNNs). RNN can be trained directly with only the input–output data without having to know the internal details of the circuit. The trained models can reflect the behavior of nonlinear circuits. In our proposed technique, we extract slow‐changing signals from the inputs and outputs of the PA and use these signals as extra inputs of the RNN model to effectively represent long‐term memory effects. The methodology using the proposed RNN for modeling short‐term and long‐term memory effects is discussed. Examples of behavioral modeling of PAs with short‐ and long‐term memory using both the existing dynamic neural networks and the proposed RNN techniques are shown. © 2014 Wiley Periodicals, Inc. Int J RF and Microwave CAE 25:289–298, 2015.
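The paper's specific extraction of slow-changing signals is not given in the abstract; the generic idea it relies on, augmenting the model input with a slowly varying envelope of the signal, can be sketched as follows (the moving-average envelope and window size are assumptions for illustration only):

```python
import numpy as np

def slow_component(x, win=32):
    """Moving-average envelope of |x|: a slow-changing feature of the signal."""
    pad = np.concatenate([np.full(win - 1, np.abs(x[0])), np.abs(x)])
    kernel = np.ones(win) / win
    return np.convolve(pad, kernel, mode="valid")   # same length as x

def augment_inputs(x, win=32):
    """Stack the raw signal with its slow component as extra model input."""
    return np.stack([x, slow_component(x, win)], axis=1)
```

The augmented `(T, 2)` array plays the role of the RNN input in the abstract's scheme: the fast channel carries short-term dynamics, the slow channel exposes long-term memory effects to the model.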
14.
IEEE Transactions on Neural Networks, 2009, 20(8): 1267-1280
15.
Meiqin Liu, Neural Computing & Applications, 2009, 18(8): 861-874
In order to conveniently analyze the stability of various discrete-time recurrent neural networks (RNNs), including bidirectional associative memory, Hopfield, cellular neural networks, Cohen-Grossberg neural networks, recurrent multilayer perceptrons, etc., a novel neural network model, named the standard neural network model (SNNM), is advanced to describe this class of discrete-time RNNs. The SNNM is the interconnection of a linear dynamic system and a bounded static nonlinear operator. By combining a Lyapunov functional with the S-procedure, some useful criteria of global asymptotic stability for the discrete-time SNNMs are derived, whose conditions are formulated as linear matrix inequalities. Most delayed (or non-delayed) RNNs can be transformed into SNNMs so that their stability can be analyzed in a unified way. Application examples of the SNNMs to the stability analysis of discrete-time RNNs show that the SNNMs make the stability conditions of the RNNs easy to verify.
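The LMI criteria in the abstract require a semidefinite-programming solver, but the underlying idea is visible in the textbook special case: the linear dynamic part of an SNNM, x_{k+1} = A x_k, is globally asymptotically stable iff the discrete Lyapunov equation AᵀPA − P = −Q (Q > 0) has a positive-definite solution P. A NumPy sketch of that check via the Kronecker vectorization identity (this is the classical special case, not the paper's S-procedure construction):

```python
import numpy as np

def discrete_lyapunov(A, Q):
    """Solve A^T P A - P = -Q using vec(A^T P A) = kron(A^T, A^T) vec(P)."""
    n = A.shape[0]
    M = np.eye(n * n) - np.kron(A.T, A.T)
    return np.linalg.solve(M, Q.flatten()).reshape(n, n)

def is_gas(A):
    """Global asymptotic stability of x_{k+1} = A x_k via positive definiteness of P."""
    P = discrete_lyapunov(A, np.eye(A.shape[0]))
    return bool(np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0))
```

For a stable matrix `is_gas` returns True; for a matrix with an eigenvalue outside the unit circle the solved P fails positive definiteness and the check returns False.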
16.
In recent years, gene regulatory networks (GRNs) have been proposed to work as reliable and robust control mechanisms for robots. Because recurrent neural networks (RNNs) have the unique characteristic of presenting system dynamics over time, we thus adopt such kind of network structure and the principles of gene regulation to develop a biologically and computationally plausible GRN model for robot control. To simulate the regulatory effects and to make our model inferable from time-series data, we also implement an enhanced network-learning algorithm to derive network parameters efficiently. In addition, we present a procedure of programming-by-demonstration to collect behavior sequence data of the robot as expression profiles, and then employ our network-modeling framework to infer controllers. To verify the proposed approach, experiments have been conducted, and the results show that our regulatory model can be inferred for robot control successfully.
17.
18.
In this paper, we present nonmonotone variants of the Levenberg-Marquardt (LM) method for training recurrent neural networks (RNNs). These methods inherit the benefits of previously developed LM-with-momentum algorithms and are equipped with nonmonotone criteria, allowing temporary increases in the training error, and an adaptive scheme for tuning the size of the nonmonotone sliding window. The proposed algorithms are applied to training RNNs of various sizes and architectures on symbolic sequence-processing problems. Experiments show that the proposed nonmonotone learning algorithms train RNNs for sequence processing more effectively than the original monotone methods.
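The nonmonotone criterion itself is simple to state: accept a trial step whenever its error does not exceed the worst error in a sliding window of recent iterations, so the training error may rise temporarily. A toy sketch, with scalar damping standing in for the full LM normal equations and a fixed window instead of the paper's adaptive scheme (both simplifications):

```python
def nonmonotone_accept(err_new, history, window):
    """Accept if the trial error is no worse than the worst of the last `window` errors."""
    return err_new <= max(history[-window:])

def train(f, grad, w0, steps=100, mu=1.0, window=5):
    """Toy LM-style loop: damped step + nonmonotone acceptance test."""
    w, hist = w0, [f(w0)]
    for _ in range(steps):
        trial = w - grad(w) / (1.0 + mu)       # damped step (scalar LM-like damping)
        if nonmonotone_accept(f(trial), hist, window):
            w, mu = trial, max(mu / 2, 1e-8)   # accept: relax damping
        else:
            mu *= 4                            # reject: increase damping
        hist.append(f(w))
    return w, hist
```

On a simple quadratic, `train` drives the error down while the window lets occasional uphill trials through, which is the mechanism the abstract credits for the improved training of RNNs.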
19.
LIU Meiqin (School of Electrical Engineering, Zhejiang University, Hangzhou, China), Science in China Series F: Information Sciences (English edition), 2006, 49(2): 137-154
The research on the theory and application of artificial neural networks has achieved a great success over the past two decades. Recently, increasing attention has been paid to recurrent neural networks, which are rich in dynamics, highly parallelizable, and easily implementable with VLSI. Due to these attractive features, RNNs have widely been applied to system identification, control, optimization and associative memories[1]. Stability analysis, which is critical to any applications of R…