1.
The problem of weight initialization in multilayer perceptron networks is considered. A new, computationally simple weight initialization method based on the use of reference patterns is presented. A reference pattern is a vector that represents the data points falling in its vicinity in the data space. On the one hand, the proposed method aims to set the initial weight values such that the inputs to the network nodes lie within the active region (in other words, the nodes are not saturated). On the other hand, the goal is to distribute the discriminant functions formed by the hidden units evenly over the region of the input space where the training data are located. The proposed method is tested on the widely used two-spirals classification benchmark and on a channel equalization problem, and several alternatives for obtaining suitable reference patterns are investigated. The effect of the initialization is also studied for two commonly used cost functions in the training phase: the mean square error and the relative entropy cost functions. A comparison with conventional random initialization shows that a significant improvement in convergence can be achieved with the proposed method. In addition, the computational cost of the initialization was found to be negligible compared with the cost of training.
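The abstract does not give the initialization formulas, so the following is only a minimal NumPy sketch of the idea: reference patterns are drawn from the training data (k-means centroids would be another choice), each hidden unit's weight vector is oriented toward one reference pattern and scaled so that node inputs stay in the sigmoid's active region over the training data, and the bias places the unit's discriminant hyperplane through the reference pattern. The function name `reference_pattern_init` and all parameter choices are assumptions, not the paper's.

```python
import numpy as np

def reference_pattern_init(X, n_hidden, active_range=1.0, seed=0):
    """Hypothetical sketch of reference-pattern weight initialization.

    Reference patterns are randomly drawn training points; each hidden
    unit is oriented toward one pattern, scaled so its pre-activation
    stays within +/- active_range on the training data, and its
    hyperplane passes through the pattern (node not saturated).
    """
    rng = np.random.default_rng(seed)
    refs = X[rng.choice(len(X), size=n_hidden, replace=False)]
    W = refs / (np.linalg.norm(refs, axis=1, keepdims=True) + 1e-12)
    # |w . (x - ref)| over the training data, per hidden unit.
    proj = np.einsum('hnd,hd->hn', X[None, :, :] - refs[:, None, :], W)
    scale = active_range / (np.abs(proj).max(axis=1) + 1e-12)
    W *= scale[:, None]
    b = -np.sum(W * refs, axis=1)   # hyperplane through the reference pattern
    return W, b

# Usage on two-spirals-style 2-D data:
X = np.random.default_rng(1).standard_normal((200, 2))
W, b = reference_pattern_init(X, n_hidden=10)
hidden = np.tanh(X @ W.T + b)       # pre-activations stay un-saturated
```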
2.
Weight initialization in cascade-correlation learning is considered. Most previous studies use so-called candidate training to deal with the initialization problem in cascade-correlation learning: several candidate hidden units are first trained, and the one yielding the best value of the covariance criterion is installed in the network. When many candidate units have to be trained, the total computational cost of training can become very large. Here we consider a new approach to weight initialization in cascade-correlation learning, based on the concept of stepwise regression. Empirical simulations show that the new method can significantly speed up cascade-correlation learning compared with candidate training, while the overall performance remains similar or is even better.
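For orientation, here is a minimal NumPy sketch (not the paper's implementation) of the two ingredients mentioned above: the covariance criterion used to score candidates in standard candidate training, and a forward stepwise regression of a target (e.g. the residual error) onto the candidate's inputs as an alternative way to set the candidate's incoming weights. The function names `covariance_criterion` and `stepwise_init` and the exact selection/refit details are assumptions.

```python
import numpy as np

def covariance_criterion(v, E):
    """Cascade-correlation candidate score: |covariance| between a
    candidate's outputs v (n_patterns,) and the residual errors
    E (n_patterns, n_outputs), summed over output units."""
    return np.abs((v - v.mean()) @ (E - E.mean(axis=0))).sum()

def stepwise_init(H, target, max_terms=5):
    """Hedged sketch: set a candidate unit's incoming weights by forward
    stepwise regression of `target` onto the candidate's inputs
    H (n_patterns, n_inputs), instead of training a pool of randomly
    initialized candidate units."""
    n, p = H.shape
    selected, w = [], np.zeros(p)
    residual = target.copy()
    for _ in range(min(max_terms, p)):
        # Add the not-yet-selected input most correlated with the residual.
        scores = [abs(H[:, j] @ residual) if j not in selected else -1.0
                  for j in range(p)]
        selected.append(int(np.argmax(scores)))
        # Refit all selected inputs jointly by least squares.
        coef, *_ = np.linalg.lstsq(H[:, selected], target, rcond=None)
        w[:] = 0.0
        w[selected] = coef
        residual = target - H[:, selected] @ coef
    return w
```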
3.
Constructive Backpropagation for Recurrent Networks   (total citations: 1; self-citations: 0; citations by others: 1)
Choosing a network size is a difficult problem in neural network modelling. Many recent studies have examined constructive or destructive methods that add or delete connections, neurons or layers in order to solve this problem. In this work we consider the constructive approach, which is in many cases very computationally efficient. In particular, we address the construction of recurrent networks by means of constructive backpropagation. The benefits of the proposed scheme are, firstly, that fully recurrent networks with an arbitrary number of layers can be constructed efficiently and, secondly, that after the network has been constructed we can continue adapting both its weights and its structure, including the addition and deletion of neurons or layers, in a computationally efficient manner. The investigated method is therefore very flexible compared with many previous methods. In addition, according to our time series prediction experiments, the proposed method is competitive with the well-known recurrent cascade-correlation method in terms of both modelling performance and training time.
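As a rough illustration of the constructive idea only (add one unit at a time and train it by backpropagation on the remaining error), here is a plain-feedforward NumPy sketch; the paper's method additionally handles recurrent connections and layered structure, which are omitted here, and all names and hyperparameters are assumptions.

```python
import numpy as np

def train_unit(X, residual, epochs=300, lr=0.05, seed=0):
    """Train one new tanh unit plus its linear output weight by gradient
    descent on the current residual -- the core constructive step.
    Feedforward-only simplification of the recurrent case."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1]); b = 0.0; v = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ w + b)
        err = v * h - residual
        grad_h = err * v * (1 - h ** 2)
        v -= lr * (err @ h) / len(X)
        w -= lr * (X.T @ grad_h) / len(X)
        b -= lr * grad_h.mean()
    return w, b, v

# Constructive loop: keep adding units until the residual error is small.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))
residual = np.sin(X[:, 0]) + 0.5 * X[:, 1]
units = []
while np.mean(residual ** 2) > 0.05 and len(units) < 10:
    w, b, v = train_unit(X, residual, seed=len(units))
    residual = residual - v * np.tanh(X @ w + b)
    units.append((w, b, v))
```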
4.
Modified cascade-correlation learning for classification   (total citations: 2; self-citations: 0; citations by others: 2)
The main advantages of cascade-correlation learning are its ability to learn quickly and to determine the network size. However, recent studies have shown that on many problems the generalization performance of a cascade-correlation trained network may not be optimal, and that reaching a given performance level may require a larger network than with other training methods. Recent advances in statistical learning theory emphasize the importance of learning methods that can find optimal hyperplanes, which has led to advanced learning methods with substantial performance improvements. Building on these advances, we introduce modifications to standard cascade-correlation learning that take the optimal-hyperplane constraints into account. Experimental results demonstrate that modified cascade-correlation yields considerable gains over standard cascade-correlation learning, including better generalization, smaller network size and faster learning.
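The abstract does not specify how the optimal-hyperplane constraint enters the training; one plausible reading is that output weights are trained with an SVM-style objective (L2-regularized hinge loss) on the hidden-unit activations instead of squared error. The sketch below shows only that substitution; the function name `svm_output_layer` and all hyperparameters are assumptions.

```python
import numpy as np

def svm_output_layer(H, y, C=1.0, epochs=500, lr=0.01):
    """Hedged sketch: optimal-hyperplane (SVM-style) training of output
    weights on hidden activations H (n_patterns, n_hidden) with labels
    y in {-1, +1}: subgradient descent on 0.5*||w||^2 + C*mean(hinge)."""
    w = np.zeros(H.shape[1]); b = 0.0
    for _ in range(epochs):
        margins = y * (H @ w + b)
        mask = margins < 1                      # margin-violating patterns
        grad_w = w - C * (y[mask, None] * H[mask]).sum(axis=0) / len(H)
        grad_b = -C * y[mask].sum() / len(H)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```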
5.
Short-term electric load forecasting with a neural network based on fuzzy rules is presented. In this network, fuzzy membership functions are represented using combinations of two sigmoid functions. A new scheme for augmenting the rule base is proposed. The network uses the outdoor temperature forecast as one of its input quantities, and the influence of imprecision in this quantity is investigated. The model is shown to be capable of making reasonable forecasts even on exceptional weekdays. Forecasting simulations were made with three different electric load time series. In addition, the neuro-fuzzy method was tested at two electricity works, where it was used to produce forecasts with lead times of 1–24 hours; the results of these one-month real-world tests are presented. Comparative forecasts were also made with the conventional Holt-Winters exponential smoothing method. The main result of the study is that the neuro-fuzzy method requires the time series to be stationary with respect to the training data in order to give clearly better forecasts than the Holt-Winters method.
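One concrete detail from the abstract is that each fuzzy membership function is built from two sigmoids; the difference of a rising and a falling sigmoid gives a smooth, bump-shaped membership, as in this small sketch (the parameter names and exact parameterization are assumptions).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def two_sigmoid_membership(x, left, right, slope=5.0):
    """Membership built from two sigmoids: close to 1 between `left`
    and `right`, falling smoothly to 0 outside."""
    return sigmoid(slope * (x - left)) - sigmoid(slope * (x - right))

# Example: degree of membership in "around 20 degrees C" for a
# forecast outdoor temperature input.
temps = np.linspace(0, 40, 9)
print(two_sigmoid_membership(temps, left=15.0, right=25.0))
```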
6.
In this study we investigate a hybrid neural network architecture for modelling purposes. The proposed network is based on the multilayer perceptron (MLP) network, but in addition to the usual hidden layers the first hidden layer is chosen to be a centroid layer. Each unit in this layer contains a centroid located somewhere in the input space, and its output is the Euclidean distance between the centroid and the input. The centroid layer clearly resembles the hidden layer of radial basis function (RBF) networks, so the centroid-based multilayer perceptron (CMLP) network can be regarded as a hybrid of MLP and RBF networks. The presented benchmark experiments show that the proposed hybrid architecture is able to combine the good properties of MLP and RBF networks, resulting in fast and efficient learning and a compact network structure.
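A minimal forward-pass sketch of the CMLP idea follows: the first hidden layer outputs Euclidean distances to a set of centroids (RBF-like), and an ordinary sigmoidal MLP layer sits on top. The layer sizes, names and the choice of a tanh second layer are assumptions.

```python
import numpy as np

def cmlp_forward(x, centroids, W1, b1, W2, b2):
    """Centroid-based MLP forward pass: distance layer, then an
    ordinary tanh hidden layer, then a linear output."""
    d = np.linalg.norm(x[None, :] - centroids, axis=1)   # centroid layer
    h = np.tanh(W1 @ d + b1)                             # MLP hidden layer
    return W2 @ h + b2

rng = np.random.default_rng(0)
centroids = rng.standard_normal((5, 3))        # 5 centroids in 3-D input space
W1, b1 = rng.standard_normal((4, 5)), np.zeros(4)
W2, b2 = rng.standard_normal((1, 4)), np.zeros(1)
print(cmlp_forward(rng.standard_normal(3), centroids, W1, b1, W2, b2))
```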
7.
To avoid oversized feedforward networks, we propose that after cascade-correlation learning the network is fine-tuned with the backpropagation algorithm. Our experiments show that with cascade-correlation learning alone the network may require a large number of hidden units to reach the desired error level, whereas if the network is additionally fine-tuned with backpropagation the same error level can be reached with a much smaller number of hidden units. It is also shown that the combined cascade-correlation/backpropagation training is faster than backpropagation training alone.
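The fine-tuning phase amounts to ordinary backpropagation over all weights of the already-constructed network. The sketch below shows that phase only, on a single-hidden-layer surrogate with squared-error loss; it is not the cascade-correlation construction itself, and all names and hyperparameters are assumptions.

```python
import numpy as np

def finetune(X, y, W1, b1, W2, b2, epochs=300, lr=0.05):
    """Unfreeze everything after construction and run plain
    backpropagation (tanh hidden layer, squared-error loss)."""
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)                 # (n, n_hidden)
        out = H @ W2 + b2                        # (n,)
        err = out - y
        gH = np.outer(err, W2) * (1 - H ** 2)    # backprop through tanh
        W2 -= lr * H.T @ err / len(X); b2 -= lr * err.mean()
        W1 -= lr * X.T @ gH / len(X);  b1 -= lr * gH.mean(axis=0)
    return W1, b1, W2, b2

# Usage with an arbitrary "constructed" starting network:
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 2)); y = np.sin(X[:, 0])
W1 = rng.normal(scale=0.5, size=(2, 6)); b1 = np.zeros(6)
W2 = rng.normal(scale=0.5, size=6);      b2 = 0.0
W1, b1, W2, b2 = finetune(X, y, W1, b1, W2, b2)
```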
8.
Automated detection of different waveforms in physiological signals has been one of the most intensively studied applications of signal processing in clinical medicine. In recent years an increasing number of neural-network-based methods have been proposed. In this paper we present a radial basis function (RBF) network based method for automated detection of different interference waveforms in epileptic EEG. Such an artefact detector is especially useful as a preprocessing stage for various automated EEG analyzers, improving their general applicability. The results show that our neural-network-based classifier detects artefacts at a rate of over 75%, while the correct classification rate for normal segments is about 95%.
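For context, a generic RBF-network classifier of the kind described (Gaussian basis functions around fixed centres, output weights solved by least squares, thresholded output) can be sketched as follows; the centre selection, the feature extraction from EEG segments and the 0.5 threshold are assumptions, not the paper's settings.

```python
import numpy as np

def rbf_fit(X, y, centers, width):
    """Fit RBF-network output weights by least squares.
    X: (n, d) feature vectors, y: 0/1 labels, centers: (k, d)."""
    Phi = np.exp(-np.linalg.norm(X[:, None, :] - centers[None], axis=2) ** 2
                 / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(np.c_[Phi, np.ones(len(X))], y, rcond=None)
    return w

def rbf_predict(X, centers, width, w):
    Phi = np.exp(-np.linalg.norm(X[:, None, :] - centers[None], axis=2) ** 2
                 / (2 * width ** 2))
    return (np.c_[Phi, np.ones(len(X))] @ w > 0.5).astype(int)   # 1 = artefact

# Toy usage (random stand-in for EEG segment features):
rng = np.random.default_rng(0)
X = rng.standard_normal((80, 4)); y = (X[:, 0] > 0).astype(float)
centers = X[rng.choice(80, 6, replace=False)]
w = rbf_fit(X, y, centers, width=1.0)
pred = rbf_predict(X, centers, width=1.0, w=w)
```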