Similar Literature
19 similar documents found.
1.
This work focuses on the ability of evolutionary recurrent neural networks to model time-series and correlated data. On two benchmark problems, different forms of modeling data are used to compare the modeling and prediction performance of feedforward and recurrent neural networks; evolutionary algorithms are then applied to train recurrent networks of different structures, and their modeling abilities are compared. Simulation results show that recurrent neural networks model and predict time-correlated data well: unlike feedforward networks, they require no prior knowledge of the temporal characteristics of the process and can use the simplest form of modeling data. Compared with conventional gradient-descent algorithms, evolutionary algorithms apply uniformly across different recurrent network structures, their training is not troubled by local minima, and a moderately sized training run yields a neural network model with good performance.

2.
An approach to freeway on-ramp metering based on recurrent neural networks is proposed. The principles of the Elman recurrent neural network and of on-ramp metering are described; upstream and downstream time occupancy and vehicle speed are selected as the inputs of the ramp controller, an Elman recurrent neural network ramp-metering controller is designed, and an improved algorithm is used to train the network. Simulation experiments show that the controller has small learning error and good generalization, indicating promising prospects for application.
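As a minimal sketch of the Elman architecture the controller above is built on, the following toy network copies the previous hidden activations into a context layer at each step. The layer sizes, weight scales, and input values are illustrative, not the paper's controller design.

```python
import numpy as np

# Minimal Elman recurrent network sketch. The three inputs stand in for the
# abstract's upstream/downstream time occupancy and speed; all sizes are
# hypothetical.
class ElmanRNN:
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.1, (n_hidden, n_in))
        self.W_ctx = rng.normal(0, 0.1, (n_hidden, n_hidden))  # context (recurrent) weights
        self.W_out = rng.normal(0, 0.1, (n_out, n_hidden))
        self.context = np.zeros(n_hidden)  # copy of the previous hidden state

    def step(self, x):
        h = np.tanh(self.W_in @ x + self.W_ctx @ self.context)
        self.context = h  # Elman network: context layer stores hidden activations
        return self.W_out @ h

net = ElmanRNN(n_in=3, n_hidden=8, n_out=1)
# Feed a short sequence of (occupancy_up, occupancy_down, speed) samples.
outputs = [net.step(np.array([0.3, 0.5, 0.8])) for _ in range(5)]
```

Even with a constant input, the outputs evolve over the first few steps because the context layer accumulates state.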

3.
《软件》2019,(3):217-221
The wide application of neural networks has drawn increasing attention to their training, and higher accuracy requirements make training harder, so accelerating neural network training has become a research focus. Convolutional layers account for most of the training time, so accelerating them is the key to accelerating the whole network. This paper proposes the GFW acceleration scheduling algorithm, which selects among different convolution algorithms according to the size of the convolved images and the number of convolution kernels, so as to achieve the best overall training performance. Experiments on the accelerated training of a 9-layer convolutional network show that the GFW algorithm achieves a 2.901x speedup over the GEMM convolution algorithm, a 1.467x speedup over the FFT algorithm, and a 1.318x speedup over the Winograd algorithm.
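The per-layer selection idea can be sketched as a small dispatch function: pick a convolution algorithm from the layer's image size and kernel count. The thresholds and function name below are illustrative assumptions, not the GFW paper's actual policy.

```python
# Sketch of GFW-style per-layer scheduling: choose a convolution algorithm
# from the layer's spatial size and kernel count. Thresholds are illustrative.
def pick_conv_algorithm(image_size: int, num_kernels: int) -> str:
    if image_size >= 128:
        return "fft"        # large spatial sizes favor FFT-based convolution
    if image_size <= 32 and num_kernels >= 64:
        return "winograd"   # small tiles with many kernels: Winograd wins
    return "gemm"           # safe default: im2col + GEMM

# One choice per layer of a (hypothetical) network.
schedule = [pick_conv_algorithm(s, k)
            for (s, k) in [(224, 64), (28, 128), (56, 32)]]
# schedule == ["fft", "winograd", "gemm"]
```

The point of such a scheduler is exactly the abstract's claim: no single convolution algorithm wins at every layer shape, so dispatching per layer beats committing to GEMM, FFT, or Winograd globally.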

4.
Improved neural-network-based nonlinear multiple regression analysis   Cited: 3 (self-citations: 0, other citations: 3)
The Levenberg-Marquardt algorithm is introduced to accelerate the neural network training process, and, so that the network's regression results generalize well, a network-weight term is incorporated into the objective function of the training algorithm. The proposed algorithm is verified by simulation on an example; the results show that it achieves not only good data-fitting accuracy but also good generalization performance.
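A minimal sketch of the idea: one Levenberg-Marquardt step on an objective that, as the abstract describes, adds a weight-magnitude penalty to the squared error, E(w) = Σᵢ (yᵢ − f(xᵢ, w))² + α‖w‖². The model f, the data, and all parameter values are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Illustrative nonlinear model, f(x, w) = w0 * (1 - exp(-w1 * x)).
def f(x, w):
    return w[0] * (1.0 - np.exp(-w[1] * x))

def jacobian(x, w):
    e = np.exp(-w[1] * x)
    return np.column_stack([1.0 - e, w[0] * x * e])  # df/dw, one row per sample

def lm_step(x, y, w, lam=1e-2, alpha=1e-3):
    """One LM step on E(w) = sum(r^2) + alpha * ||w||^2, r = y - f(x, w)."""
    r = y - f(x, w)
    J = jacobian(x, w)
    # Gauss-Newton normal equations with LM damping (lam) plus the
    # weight-penalty curvature (alpha); the gradient picks up -alpha*w.
    A = J.T @ J + (lam + alpha) * np.eye(len(w))
    g = J.T @ r - alpha * w
    return w + np.linalg.solve(A, g)

x = np.linspace(0.1, 2.0, 20)
y = f(x, np.array([2.0, 1.5]))   # noiseless synthetic data, true w = (2, 1.5)
w = np.array([1.0, 1.0])         # initial guess
for _ in range(100):
    w = lm_step(x, y, w)
```

With a small α the fitted weights land very close to the true (2, 1.5); the penalty only slightly shrinks them, which is the price of the improved generalization the abstract reports.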

5.
To address the slow convergence of BP neural networks and their tendency to fall into local minima, a hybrid algorithm combining an improved artificial fish swarm algorithm with the BP algorithm is proposed for training artificial neural networks, and the corresponding optimization training model and training procedure are established. Training multilayer feedforward networks with an artificial fish swarm algorithm improved by a biological immune mechanism makes the networks less sensitive to initial weights and parameters, enlarges the weight search space, raises convergence speed and learning accuracy, and effectively balances global and local search. Simulation results show that the algorithm outperforms the other algorithms compared, with a small mean squared error, fast convergence, and high computational accuracy, making it a more effective neural network training algorithm.

6.
When sensors based on the catalytic principle operate in a mine environment, internal and external interference causes excessive error in the measured combustible gas concentration. A sensor invalid-data filter based on the Levenberg-Marquardt (L-M) training algorithm and a backpropagation (BP) neural network is designed. The network model is built from sensor response-curve data collected offline and trained by simulation in Matlab. The convergence speed and error performance of the L-M training algorithm, the quasi-Newton training algorithm, and the adaptive linear-regression (LR) momentum gradient-descent training algorithm are compared. The comparison shows that the BP network model built with the L-M training algorithm converges faster, has smaller error, and is more efficient, which favors filtering invalid nonlinear detection data from mine catalytic sensors.

7.
This paper applies neural networks to short-term passenger flow forecasting for urban rail transit. A rail transit passenger flow forecasting model based on an autoregressive neural network is designed, together with its model description and training algorithm. Matlab simulation experiments verify the model's performance, which is better than that of a predictor combining least-squares support vector machines with discrete one-dimensional Daub4 wavelet analysis.

9.
A forecasting model based on an improved BP neural network and its application   Cited: 28 (self-citations: 7, other citations: 21)
The structure and training algorithms of BP neural networks are studied, and, to overcome the defects of the traditional BP algorithm, an improved BP neural network trained with the L-M algorithm is proposed. On this basis, a nonlinear system forecasting model based on the improved BP network is built, and its effectiveness is verified by simulation and practical results.

10.
To improve the accuracy of output forecasting for hybrid wind-solar power stations, an output forecasting method for such stations is studied. The unit output of a hybrid wind-solar station is analyzed, and a generalized regression neural network model and a radial basis function neural network model are built and trained on historical data. An improved dynamic group cooperative optimization algorithm is proposed and used to forecast the station's output; simulation analysis demonstrates the effectiveness of the proposed models and shows that the method effectively reduces forecasting error and improves forecasting accuracy.

11.
For a recurrent neural network (RNN), its transient response is a critical issue, especially for real-time signal processing applications. The conventional RNN training algorithms, such as backpropagation through time (BPTT) and real-time recurrent learning (RTRL), have not adequately addressed this problem because they suffer from low convergence speed. While increasing the learning rate may help to improve the performance of the RNN, it can result in unstable training in terms of weight divergence. Therefore, an optimal tradeoff between RNN training speed and weight convergence is desired. In this paper, a robust adaptive gradient-descent (RAGD) training algorithm of RNN is developed based on a novel RNN hybrid training concept. It switches the training patterns between standard real-time online backpropagation (BP) and RTRL according to the derived convergence and stability conditions. The weight convergence and $L_2$-stability of the algorithm are derived via the conic sector theorem. The optimized adaptive learning maximizes the training speed of the RNN for each weight update without violating the stability and convergence criteria. Computer simulations are carried out to demonstrate the applicability of the theoretical results.
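The speed/stability tradeoff described above can be illustrated with a toy rule: grow the learning rate while updates keep reducing the loss, and back off when an update would increase it. This is only a stand-in for the paper's derived switching and stability conditions, not the RAGD algorithm itself; the problem and parameters are made up for illustration.

```python
import numpy as np

def adaptive_gd(grad, loss, w, lr=0.01, grow=1.1, shrink=0.5, steps=100):
    """Gradient descent with a greedily adaptive step size (toy illustration)."""
    for _ in range(steps):
        trial = w - lr * grad(w)
        if loss(trial) < loss(w):
            w, lr = trial, lr * grow      # accept the step and speed up
        else:
            lr *= shrink                  # reject the step and slow down
    return w

# Quadratic test problem with minimum at w = (1, -2).
target = np.array([1.0, -2.0])
loss = lambda w: float(np.sum((w - target) ** 2))
grad = lambda w: 2.0 * (w - target)
w = adaptive_gd(grad, loss, np.zeros(2))
```

The rate ramps up until a step would overshoot (diverging weights, in the paper's terms), then halves; the accept test is the crude analogue of a stability condition.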

12.
This paper proposes a hybrid optimization algorithm which combines the efforts of local search (individual learning) and cellular genetic algorithms (GA) for training recurrent neural nets (RNN). Each RNN weight is encoded as a floating point number, and a concatenation of numbers forms a chromosome. Reproduction takes place locally in a square grid, each grid point representing a chromosome. Lamarckian and Baldwinian (1896) mechanisms for combining cellular GA and learning are compared. Different hill-climbing algorithms are incorporated into the cellular GA. These include the real-time recurrent learning (RTRL) and its simplified versions, and the delta rule. RTRL has been successively simplified by freezing some of the weights to form simplified versions. The delta rule, the simplest form of learning, has been implemented by considering the RNN as feedforward networks. The hybrid algorithms are used to train the RNN to solve a long-term dependency problem. The results show that Baldwinian learning is inefficient in assisting the cellular GA. It is conjectured that the more difficult it is for genetic operations to produce the genotypic changes that match the phenotypic changes due to learning, the poorer is the convergence of Baldwinian learning. Most of the combinations using the Lamarckian mechanism show an improvement in reducing the number of generations for an optimum network; however, only a few can reduce the actual time taken. Embedding the delta rule in the cellular GA is the fastest method. Learning should not be too extensive.

13.
The Random Neural Network (RNN) has received, since its inception in 1989, considerable attention and has been successfully used in a number of applications. In this critical review paper we focus on the feed-forward RNN model and its ability to solve classification problems. In particular, we pay special attention to the RNN literature on learning algorithms that discover the RNN interconnection weights, suggest other potential algorithms that can be used to find the RNN interconnection weights, and compare the RNN model with other neural-network-based and non-neural-network-based classifier models. Finally, the extensive literature review and experimentation with the feed-forward RNN model provided us with the necessary guidance to introduce six critical review comments that identify some gaps in the RNN-related literature and suggest directions for future research.

14.
An RNN query algorithm based on constraint relations in two-dimensional space   Cited: 1 (self-citations: 0, other citations: 1)
The reverse nearest neighbor (RNN) query is a hot research topic in spatial databases, but traditional algorithms mainly query the entire dataset. This paper introduces the concept of constraint relations into RNN queries, gives a method for querying with an index structure under constraint relations, and, based on the intrinsic connection between NN and RNN query problems, presents a corresponding algorithm for the constrained RNN (CRNN) problem. Experiments show that the algorithm improves query efficiency over traditional algorithms.

15.
Automated, real-time, and reliable equipment activity recognition on construction sites can help to minimize idle time, improve operational efficiency, and reduce emissions. Previous efforts in activity recognition of construction equipment have explored different classification algorithms using data from accelerometers and gyroscopes. These studies utilized pattern recognition approaches such as statistical models (e.g., hidden Markov models), shallow neural networks (e.g., Artificial Neural Networks), and distance algorithms (e.g., K-nearest neighbor) to classify the time-series data collected from sensors mounted on the equipment. Such methods necessitate the segmentation of continuous operational data with fixed or dynamic windows to extract statistical features. This heuristic and manual feature extraction process is limited by human knowledge and can only extract human-specified shallow features. However, recent developments in deep neural networks, specifically the recurrent neural network (RNN), present new opportunities to classify sequential time-series data with recurrent lateral connections. An RNN can automatically learn high-level representative features through the network instead of having them manually designed, making it more suitable for complex activity recognition. However, the application of RNNs requires a large training dataset, which poses a practical challenge to obtain from real construction sites. Thus, this study presents a data-augmentation framework for generating synthetic time-series training data for an RNN-based deep learning network to accurately and reliably recognize equipment activities. The proposed methodology is validated by generating synthetic data from sample datasets that were collected from two earthmoving operations in the real world. The synthetic data along with the collected data were used to train a long short-term memory (LSTM)-based RNN. The trained model was evaluated by comparing its performance with traditionally used classification algorithms for construction equipment activity recognition. The deep learning framework presented in this study outperformed the traditionally used machine learning classification algorithms for activity recognition in terms of model accuracy and generalization.
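One common family of time-series augmentations for sensor data is jittering (additive noise) combined with random magnitude scaling; the sketch below shows that pattern. The framework in the paper is more elaborate, and all parameter values and the function name here are illustrative assumptions.

```python
import numpy as np

def augment(signal, n_copies=4, noise_std=0.02, scale_std=0.05, seed=0):
    """Generate synthetic variants of a 1-D sensor trace by scaling + jitter."""
    rng = np.random.default_rng(seed)
    signal = np.asarray(signal, dtype=float)
    copies = []
    for _ in range(n_copies):
        scale = rng.normal(1.0, scale_std)                 # global magnitude scaling
        noise = rng.normal(0.0, noise_std, signal.shape)   # per-sample jitter
        copies.append(scale * signal + noise)
    return np.stack(copies)

acc = np.sin(np.linspace(0.0, 6.28, 100))   # stand-in accelerometer trace
synthetic = augment(acc)                    # shape (4, 100)
```

Each synthetic copy preserves the overall activity signature while varying the details, which is what lets an LSTM generalize from a small number of real recordings.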

16.
Training of recurrent neural networks (RNNs) introduces considerable computational complexity due to the need for gradient evaluations. How to get fast convergence speed and low computational complexity remains a challenging and open topic. Besides, the transient response of the learning process of RNNs is a critical issue, especially for online applications. Conventional RNN training algorithms such as backpropagation through time and real-time recurrent learning have not adequately satisfied these requirements because they often suffer from slow convergence speed. If a large learning rate is chosen to improve performance, the training process may become unstable in terms of weight divergence. In this paper, a novel training algorithm of RNN, named robust recurrent simultaneous perturbation stochastic approximation (RRSPSA), is developed with a specially designed recurrent hybrid adaptive parameter and adaptive learning rates. RRSPSA is a powerful novel twin-engine simultaneous perturbation stochastic approximation (SPSA) type of RNN training algorithm. It utilizes three specially designed adaptive parameters to maximize training speed for a recurrent training signal while exhibiting certain weight convergence properties with only two objective function measurements, as in the original SPSA algorithm. The RRSPSA is proved with guaranteed weight convergence and system stability in the sense of a Lyapunov function. Computer simulations were carried out to demonstrate the applicability of the theoretical results.

17.
Reverse Nearest Neighbors Search in Ad Hoc Subspaces   Cited: 1 (self-citations: 0, other citations: 1)
Given an object q, modeled by a multidimensional point, a reverse nearest neighbors (RNN) query returns the set of objects in the database that have q as their nearest neighbor. In this paper, we study an interesting generalization of the RNN query, where not all dimensions are considered, but only an ad hoc subset thereof. The rationale is that 1) the dimensionality might be too high for the result of a regular RNN query to be useful, 2) missing values may implicitly define a meaningful subspace for RNN retrieval, and 3) analysts may be interested in the query results only for a set of (ad hoc) problem dimensions (i.e., object attributes). We consider a suitable storage scheme and develop appropriate algorithms for projected RNN queries, without relying on multidimensional indexes. Given the significant cost difference between random and sequential data accesses, our algorithms are based on applying sequential accesses only on the projected atomic values of the data at each dimension, to progressively derive a set of RNN candidates. Whether these candidates are actual RNN results is then validated via an optimized refinement step. In addition, we study variants of the projected RNN problem, including RkNN search, bichromatic RNN, and RNN retrieval for the case where sequential accesses are not possible. Our methods are experimentally evaluated with real and synthetic data.

18.
This paper presents several aspects regarding the application of the NARX model and the Recurrent Neural Network (RNN) model in system identification and control. We show that every RNN can be transformed to a first-order NARX model, and vice versa, under the condition that the neuron transfer function is similar to the NARX transfer function. If the neuron transfer function is piecewise linear, that is, f(x) := x if |x| < 1 and f(x) := sign(x) otherwise, we further show that every NARX model of order larger than one can be transformed into an RNN. According to these equivalence results, there are three advantages from which we can benefit: (i) if the output dimension of a NARX model is larger than the number of its hidden units, training an equivalent RNN will be faster, i.e. the equivalent RNN is trained instead of the NARX model. Once the training is finished, the RNN is transformed back to an equivalent NARX model. On the other hand, (ii) if the output dimension of an RNN model is less than the number of its hidden units, the training of the RNN can be sped up by using a similar method; (iii) RNN pruning can be accomplished in a much simpler way, i.e. the equivalent NARX model is pruned instead of the RNN. After pruning, the NARX model is transformed back to the equivalent RNN.
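The piecewise-linear transfer function assumed in the equivalence result above, f(x) = x for |x| < 1 and f(x) = sign(x) otherwise, is the familiar saturating linear unit and can be written in one line (the function name is ours, after MATLAB's satlins):

```python
import numpy as np

def satlins(x):
    """Saturating linear transfer function: identity on (-1, 1), clipped outside."""
    return np.clip(x, -1.0, 1.0)

vals = satlins(np.array([-2.0, -0.5, 0.0, 0.5, 2.0]))
# vals == [-1.0, -0.5, 0.0, 0.5, 1.0]
```

Inside the linear region the neuron behaves exactly like a NARX regression term, which is the property the NARX-to-RNN construction exploits.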

19.
Decision feedback recurrent neural equalization with fast convergence rate   Cited: 1 (self-citations: 0, other citations: 1)
Real-time recurrent learning (RTRL), commonly employed for training a fully connected recurrent neural network (RNN), has a drawback of slow convergence rate. In the light of this deficiency, a decision feedback recurrent neural equalizer (DFRNE) using the RTRL requires long training sequences to achieve good performance. In this paper, extended Kalman filter (EKF) algorithms based on the RTRL for the DFRNE are presented in state-space formulation of the system, in particular for complex-valued signal processing. The main features of global EKF and decoupled EKF algorithms are fast convergence and good tracking performance. Through nonlinear channel equalization, performance of the DFRNE with the EKF algorithms is evaluated and compared with that of the DFRNE with the RTRL.
