Similar Literature
20 similar documents found (search time: 15 ms)
1.
To address the premature convergence and initialization sensitivity that arise when biogeography-based optimization is used to train multi-layer perceptrons, a training method based on differential-evolution biogeography-based optimization is proposed. Biogeography-based optimization (BBO) is combined with the differential evolution (DE) algorithm to form an improved hybrid DE_BBO algorithm; the improved DE_BBO is then used to train a multi-layer perceptron (MLP) and applied to four classification tasks: iris, breast cancer, blood transfusion, and banknote authentication. Comparison with the experimental results of six mainstream heuristic algorithms (BBO, PSO, GA, ACO, ES, and PBIL) shows that DE_BBO_MLP outperforms existing methods in classification accuracy and convergence speed.
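A minimal Python sketch of the kind of DE/BBO hybrid update described above; the sphere fitness, rates, and constants are illustrative assumptions, not the paper's exact formulation (there, fitness would be the MLP's classification error):

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(x):                          # placeholder objective; the paper
        return float(np.sum(x ** 2))         # would use the MLP's error rate

    POP, DIM, F, CR = 20, 10, 0.5, 0.9       # assumed parameter values
    pop = rng.uniform(-1.0, 1.0, (POP, DIM))

    for gen in range(200):
        cost = np.array([fitness(x) for x in pop])
        pop = pop[np.argsort(cost)]                    # best habitats first
        mu = (POP - np.arange(POP)) / POP              # emigration rates
        lam = 1.0 - mu                                 # immigration rates
        trial = pop.copy()
        for i in range(POP):
            for d in range(DIM):
                if rng.random() < lam[i]:              # this variable immigrates
                    if rng.random() < CR:              # DE-style donor variable
                        r1, r2, r3 = rng.choice(POP, 3, replace=False)
                        trial[i, d] = pop[r1, d] + F * (pop[r2, d] - pop[r3, d])
                    else:                              # classic BBO migration
                        j = rng.choice(POP, p=mu / mu.sum())
                        trial[i, d] = pop[j, d]
            if fitness(trial[i]) < fitness(pop[i]):    # greedy replacement
                pop[i] = trial[i]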

2.

The water cycle algorithm (WCA) is a recent population-based meta-heuristic technique, originally inspired by the idealized hydrological cycle observed in nature. The conventional WCA demonstrates superior performance compared to other well-established techniques in solving both constrained and unconstrained problems. As with other meta-heuristics, however, premature convergence to local optima may still occur on specific optimization tasks. Mirroring the chaos present in real water cycle behavior, this article incorporates chaotic patterns into the stochastic processes of WCA to improve the conventional algorithm and mitigate its premature convergence problem. First, different chaotic signal functions along with various chaotic-enhanced WCA strategies (39 meta-heuristics in total) are implemented, and the best signal is selected as the most appropriate chaotic technique for modifying WCA. Second, the chaotic algorithm is applied to various benchmark problems from the specialized literature and to the training of neural networks. The comparative statistical results of the new technique vividly demonstrate that the premature convergence problem is relieved significantly. Chaotic WCA with a sinusoidal map and chaotic-enhanced operators not only exploits high-quality solutions efficiently but also outperforms the WCA optimizer and the other investigated algorithms.

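For reference, one common form of the sinusoidal map is x_{k+1} = a·x_k²·sin(π·x_k) with a ≈ 2.3; a chaotic-enhanced operator would draw from this sequence wherever the conventional WCA draws a uniform random number. The sketch below assumes that map form and placement; the paper's exact operators may differ:

    import math

    def sinusoidal_map(x0=0.7, a=2.3):
        """Yield a chaotic sequence in (0, 1) from the sinusoidal map."""
        x = x0
        while True:
            x = a * x * x * math.sin(math.pi * x)
            yield x

    chaos = sinusoidal_map()
    # e.g. replace a uniform draw in a stream's move toward its river:
    # stream = stream + next(chaos) * C * (river - stream)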

3.
This paper proposes a framework for constructing and training radial basis function (RBF) neural networks. The proposed growing radial basis function (GRBF) network begins with a small number of prototypes, which determine the locations of the radial basis functions. During training, the GRBF network grows by splitting one of the prototypes at each growing cycle. Two splitting criteria are proposed to determine which prototype to split in each growing cycle. The proposed hybrid learning scheme provides a framework for incorporating existing algorithms into the training of GRBF networks. These include unsupervised algorithms for clustering and learning vector quantization, as well as learning algorithms for training single-layer linear neural networks. A supervised learning scheme based on the minimization of the localized class-conditional variance is also proposed and tested. GRBF neural networks are evaluated and tested on a variety of data sets with very satisfactory results.
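A minimal sketch of one growing cycle under assumed details: each prototype is scored by the localized class-conditional variance of the samples it attracts, and the worst-scoring prototype is split into two perturbed copies. The scoring rule and perturbation scale are illustrative, not the paper's exact criteria:

    import numpy as np

    rng = np.random.default_rng(0)

    def split_worst_prototype(prototypes, X, y, eps=1e-2):
        """One assumed growing cycle: split the prototype with the largest
        localized class-conditional variance into two perturbed copies."""
        d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
        assign = d.argmin(axis=1)              # nearest prototype per sample
        scores = np.zeros(len(prototypes))
        for k in range(len(prototypes)):
            for c in np.unique(y):
                pts = X[(assign == k) & (y == c)]
                if len(pts) > 1:
                    scores[k] += pts.var(axis=0).sum()
        worst = int(scores.argmax())
        jitter = eps * rng.standard_normal(X.shape[1])
        pair = np.stack([prototypes[worst] + jitter, prototypes[worst] - jitter])
        return np.vstack([np.delete(prototypes, worst, axis=0), pair])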

4.
Presents a systematic approach for constructing reformulated radial basis function (RBF) neural networks, developed to facilitate their training by supervised learning algorithms based on gradient descent. This approach reduces the construction of radial basis function models to the selection of admissible generator functions. The selection of generator functions relies on the concept of the blind spot, which is introduced in the paper. The paper also introduces a new family of reformulated radial basis function neural networks, referred to as cosine radial basis functions. Cosine radial basis functions are constructed from linear generator functions of a special form, and their use as similarity measures in radial basis function models is justified by their geometric interpretation. Experiments on a variety of datasets indicate that cosine radial basis functions considerably outperform conventional radial basis function neural networks with Gaussian radial basis functions. Cosine radial basis functions are also strong competitors to existing reformulated radial basis function models trained by gradient descent and to feedforward neural networks with sigmoid hidden units.
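For concreteness, one published form of the cosine radial basis function is h_j(x) = a_j / sqrt(||x − v_j||² + a_j²); the tiny sketch below assumes that form (names are illustrative):

    import numpy as np

    def cosine_rbf(x, centers, a):
        """Cosine RBF responses, assuming h_j(x) = a_j / sqrt(||x-v_j||^2 + a_j^2);
        `centers` is (K, D) and `a` is a per-unit width vector of length K."""
        sq = np.sum((x - centers) ** 2, axis=1)   # squared distances to centers
        return a / np.sqrt(sq + a ** 2)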

5.
The multi-layer perceptron (MLP) is one approach to classification problems; it can realize nonlinear, high-dimensional classification and has good scalability. However, in traditional MLP training, the quality of the classification results is closely tied to parameter selection, and the parameter selection of traditional algorithms has many flaws. Replacing the traditional MLP trainer with a swarm intelligence algorithm is one solution. The grey wolf optimizer (GWO) is one such algorithm, offering high levels of both exploration and exploitation. However, when GWO trains an MLP, the imbalance between exploitation and exploration persists, leading to unsatisfactory MLP classification accuracy. To enhance exploration, a Cauchy mutation operator is introduced into the grey wolf optimizer, and to balance exploitation, a cosine convergence factor is added, yielding an improved Cauchy-mutation grey wolf optimizer (IGWO). Finally, the improved algorithm is used as an MLP trainer in classification experiments on three problems of different complexity, to examine the trainer's performance under different MLP structures. The results show that, compared with the other algorithms tested, IGWO-trained MLPs perform better in classification accuracy, resistance to entrapment in local optima, global convergence speed, and stability.
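The two modifications named above might look like the following sketch; the cosine schedule a(t) = 1 + cos(πt/T) (decaying from 2 to 0) and the Cauchy mutation scale are assumptions standing in for the paper's exact formulas:

    import numpy as np

    rng = np.random.default_rng(0)

    def cosine_convergence_factor(t, T):
        """Assumed cosine schedule: decays from 2 to 0 over T iterations,
        slowly early (favoring exploration), quickly late (exploitation)."""
        return 1.0 + np.cos(np.pi * t / T)

    def cauchy_mutation(position, scale=0.1):
        """Heavy-tailed Cauchy jumps to boost exploration of the wolves."""
        return position + scale * rng.standard_cauchy(position.shape)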

6.
The grey wolf optimizer (GWO) is a relatively new swarm intelligence optimization algorithm with fast convergence and strong search ability. This paper applies the grey wolf optimizer to solving complex job shop scheduling problems and compares it with the cuckoo search algorithm, verifying the feasibility and effectiveness of the standard GWO algorithm for classic job shop scheduling problems. On this basis, given how difficult complex job shop scheduling problems are to solve, the standard GWO algorithm is improved through three operations: evolutionary population dynamics, opposition-based-learning population initialization, and mutation of the best individual. Test results show that the improved hybrid grey wolf optimizer can effectively escape local optima and find better solutions, with more robust results.
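As an example of one of the three improvements, opposition-based-learning initialization can be sketched as below; the names and the keep-the-fitter-half rule are assumptions, and for job shop scheduling the encoding would be permutation-based rather than a real vector:

    import numpy as np

    rng = np.random.default_rng(0)

    def opposition_init(pop_size, dim, lb, ub, fitness):
        """Pair each random individual x with its opposite lb + ub - x and
        keep the fitter half of the union as the initial population."""
        pop = rng.uniform(lb, ub, (pop_size, dim))
        opposite = lb + ub - pop
        union = np.vstack([pop, opposite])
        cost = np.array([fitness(x) for x in union])
        return union[np.argsort(cost)[:pop_size]]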

7.
This paper proposes a novel high-order associative memory system (AMS) based on the discrete Taylor series (DTS). The mathematical foundation for the new AMS scheme is derived, three training algorithms are proposed, and the convergence of learning is proved. The DTS-AMS thus developed is capable of error-free approximation of multivariable polynomial functions of arbitrary order. Compared with cerebellar model articulation controllers and radial basis function neural networks, it provides higher learning precision and requires less memory. Furthermore, it requires less training computation and achieves a faster convergence rate than the multilayer perceptron. Numerical simulations show that the proposed DTS-AMS is effective in higher-order function approximation and has potential in practical applications.

8.
This paper investigates new learning algorithms (LF I and LF II) based on a Lyapunov function for the training of feedforward neural networks. Such algorithms have an interesting parallel with the popular backpropagation (BP) algorithm, with the fixed learning rate replaced by an adaptive learning rate computed using a convergence theorem based on Lyapunov stability theory. LF II, a modified version of LF I, is introduced with the aim of avoiding local minima; this modification also improves convergence speed in some cases. Conditions for achieving the global minimum with this kind of algorithm are studied in detail. The performance of the proposed algorithms is compared with the BP algorithm and extended Kalman filtering (EKF) on three benchmark function approximation problems: XOR, 3-bit parity, and the 8-3 encoder. The comparisons are made in terms of the number of learning iterations and the computational time required for convergence. The proposed algorithms (LF I and II) converge much faster than the other two algorithms in attaining the same accuracy. Finally, a comparison is made on a complex two-dimensional (2-D) Gabor function, and the effect of the adaptive learning rate on faster convergence is verified. In a nutshell, the investigations in this paper help us better understand the learning procedure of feedforward neural networks in terms of adaptive learning rate, convergence speed, and local minima.
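The LF I/LF II derivations are specific to feedforward networks; what can be sketched safely is the core idea of picking the step so that a Lyapunov function of the error provably decreases. The following is only that simple instance on a single linear unit (a normalized-LMS-style step), not the paper's algorithm:

    import numpy as np

    def lyapunov_step(w, x, d, mu=0.5, eps=1e-8):
        """One update of a linear unit y = w.x with an adaptive learning rate.
        With eta = mu / (eps + ||x||^2) and 0 < mu < 2, the squared error
        V = e^2 (a Lyapunov candidate) strictly decreases on this sample."""
        e = d - w @ x
        eta = mu / (eps + x @ x)      # adaptive, not fixed as in plain BP
        return w + eta * e * x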

9.
This work is a seminal attempt to address the drawbacks of the recently proposed monarch butterfly optimization (MBO) algorithm. This algorithm suffers from premature convergence, which makes it less suitable for solving real-world problems. The position updating of MBO is modified to involve previous solutions in addition to the best solution obtained thus far. To prove the efficiency of the improved MBO (IMBO), a set of 23 well-known test functions is employed. The statistical results show that IMBO benefits from high local optima avoidance and fast convergence, which help it outperform both the basic MBO and a recent variant called MBO with greedy strategy and self-adaptive crossover operator (GCMBO). The results of the proposed algorithm are compared with nine other approaches in the literature for verification. The comparative analysis shows that IMBO provides very competitive results and tends to outperform current algorithms. To demonstrate the applicability of IMBO to challenging practical problems, it is also employed to train neural networks. The IMBO-based trainer is tested on 15 popular classification datasets from the University of California at Irvine (UCI) Machine Learning Repository. The results are compared with a variety of techniques in the literature, including the original MBO and GCMBO. IMBO is observed to improve the learning of neural networks significantly, proving the merits of this algorithm for solving challenging problems.
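The modified position update might be sketched as follows; the exact mixing of the previous solution with the best-so-far is an assumption made purely for illustration:

    import numpy as np

    rng = np.random.default_rng(0)

    def imbo_update(x, x_prev, best):
        """Assumed update: move toward the best-so-far while also reusing the
        butterfly's previous position, rather than following the best alone."""
        r1, r2 = rng.random(), rng.random()
        return x + r1 * (best - x) + r2 * (x_prev - x)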

10.
This paper introduces ANASA (adaptive neural algorithm of stochastic activation), a new, efficient reinforcement learning algorithm for training neural units and networks with continuous output. The proposed method employs concepts from self-organizing neural network theory and reinforcement estimator learning algorithms to extract and exploit information from previous input pattern presentations. In addition, it uses an adaptive learning rate function and a self-adjusting stochastic activation to accelerate the learning process. A form of optimal performance of the ANASA algorithm is proved (under a set of assumptions) via strong convergence theorems and concepts. Experimentally, the new algorithm yields results superior to existing associative reinforcement learning methods in terms of accuracy and convergence rate. The rapid convergence of ANASA is demonstrated in a simple learning task, where it is used as a single neural unit, and in mathematical function modeling problems, where it is used to train various multilayered neural networks.

11.
This paper proposes a new stochastic optimization algorithm that hybridizes a relatively recent stochastic optimization algorithm, biogeography-based optimization (BBO), with the differential evolution (DE) algorithm. The combination incorporates the DE algorithm into the optimization procedure of BBO in an attempt to add diversity and overcome stagnation at local optima. We also propose an additional selection procedure for BBO that preserves fitter habitats for subsequent generations. The proposed variation of BBO, named DBBO, is tested on several benchmark function optimization problems. The results show that DBBO can significantly outperform the basic BBO algorithm and mostly emerges as the best-performing algorithm among the competing BBO and DE variants.

12.
Some recent research reports that a dendritic neuron model (DNM) can achieve better performance than traditional artificial neural networks (ANNs) on classification, prediction, and other problems when its parameters are well-tuned by a learning algorithm. However, the back-propagation (BP) algorithm, the most commonly used learning algorithm, intrinsically suffers from slow convergence and easily drops into local minima. Therefore, more and more research adopts non-BP learning algorithms to train ANNs. In this paper, a dynamic scale-free network-based differential evolution (DSNDE) is developed with the demands of convergence speed and the ability to jump out of local minima in mind. The performance of a DSNDE-trained DNM is tested on 14 benchmark datasets and a photovoltaic power forecasting problem. Nine meta-heuristic algorithms are included in the comparison, among them the champion of the 2017 IEEE Congress on Evolutionary Computation (CEC2017) benchmark competition, the effective butterfly optimizer with covariance matrix adapted retreat phase (EBOwithCMAR). The experimental results reveal that DSNDE achieves better performance than its peers.
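A sketch of the scale-free-neighbourhood idea only (the paper's dynamic rewiring and exact mutation form are not reproduced): individuals sit on a Barabási–Albert graph and each DE donor is built from graph neighbours rather than from the whole population. The networkx graph and fallback rule are assumptions:

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)
    POP, DIM, F = 30, 10, 0.5
    G = nx.barabasi_albert_graph(POP, 2, seed=0)   # scale-free topology
    pop = rng.uniform(-1.0, 1.0, (POP, DIM))

    def neighbour_donor(i):
        """DE/rand-style donor restricted to i's scale-free neighbourhood."""
        nbrs = list(G.neighbors(i))
        if len(nbrs) < 3:                          # leaves fall back to the
            nbrs = [j for j in range(POP) if j != i]   # whole population
        r1, r2, r3 = rng.choice(nbrs, 3, replace=False)
        return pop[r1] + F * (pop[r2] - pop[r3])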

13.
The biogeography-based optimisation (BBO) algorithm is a novel evolutionary algorithm inspired by biogeography. As with other evolutionary algorithms, entrapment in local optima and slow convergence are two problems it is likely to encounter in solving challenging real problems. Due to the novelty of this algorithm, however, there is little in the literature on alleviating these two problems. Chaotic maps are among the best methods for improving the performance of evolutionary algorithms in terms of both local optima avoidance and convergence speed. In this study, we utilise ten chaotic maps to enhance the performance of the BBO algorithm. The chaotic maps are employed to define the selection, emigration, and mutation probabilities. The proposed chaotic BBO algorithms are benchmarked on ten test functions. The results demonstrate that the chaotic maps (especially the Gauss/mouse map) are able to significantly boost the performance of BBO. In addition, the results show that the combination of chaotic selection and emigration operators yields the highest performance.
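For reference, the Gauss/mouse map highlighted in the results is commonly stated as x_{k+1} = frac(1/x_k), with 0 mapped to 0 by convention; a chaotic BBO would substitute this sequence for the uniform draws behind the selection, emigration, and mutation probabilities. The placement below is an assumption:

    def gauss_mouse(x):
        """One common statement of the Gauss/mouse map: frac(1/x) in [0, 1)."""
        return 0.0 if x == 0.0 else (1.0 / x) % 1.0

    # e.g. drive BBO's mutation decision chaotically instead of uniformly:
    # x = gauss_mouse(x); mutate = x < mutation_probability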

14.
This paper presents an axiomatic approach for constructing radial basis function (RBF) neural networks. This approach results in a broad variety of admissible RBF models, including those employing Gaussian RBFs. The form of the RBFs is determined by a generator function. New RBF models can be developed according to the proposed approach by selecting generator functions other than exponential ones, which lead to Gaussian RBFs. This paper also proposes a supervised learning algorithm based on gradient descent for training reformulated RBF neural networks constructed using the proposed approach. A sensitivity analysis of the proposed algorithm relates the properties of RBFs with the convergence of gradient descent learning. Experiments involving a variety of reformulated RBF networks generated by linear and exponential generator functions indicate that gradient descent learning is simple, easily implementable, and produces RBF networks that perform considerably better than conventional RBF models trained by existing algorithms.

15.
By analyzing the shortcomings of the biogeography-based optimization (BBO) algorithm, a dual biogeography-based optimization algorithm based on hybrid convex migration and best-guided Cauchy mutation (DuBBO) is proposed. In the migration operator, a dynamic hybrid convex migration operator is adopted so that the algorithm converges quickly toward the optimal solution. In the mutation mechanism, a best-guided mutation strategy is adopted, with Cauchy-distributed random numbers added to help the algorithm escape local optima. Finally, a dual learning strategy is integrated into the algorithm, accelerating convergence and strengthening the search ability. Experimental results on 23 benchmark functions demonstrate the effectiveness and necessity of the three proposed improvement strategies. Finally, DuBBO is compared with BBO and six other excellent improved algorithms. The results show that DuBBO has the best overall performance, faster convergence, and higher convergence accuracy.

16.
Training recurrent neural networks (RNNs) introduces considerable computational complexity due to the need for gradient evaluations. Achieving fast convergence and low computational complexity remains a challenging, open problem. Moreover, the transient response of the learning process of RNNs is a critical issue, especially for online applications. Conventional RNN training algorithms such as backpropagation through time and real-time recurrent learning have not adequately satisfied these requirements, because they often suffer from slow convergence; if a large learning rate is chosen to improve performance, the training process may become unstable in terms of weight divergence. In this paper, a novel RNN training algorithm, named robust recurrent simultaneous perturbation stochastic approximation (RRSPSA), is developed with a specially designed recurrent hybrid adaptive parameter and adaptive learning rates. RRSPSA is a powerful twin-engine simultaneous perturbation stochastic approximation (SPSA) type of RNN training algorithm. It utilizes three specially designed adaptive parameters to maximize training speed for a recurrent training signal while exhibiting certain weight convergence properties with only two objective function measurements, as in the original SPSA algorithm. RRSPSA is proved to guarantee weight convergence and system stability in the sense of a Lyapunov function. Computer simulations were carried out to demonstrate the applicability of the theoretical results.
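RRSPSA's recurrent hybrid adaptive parameter and adaptive learning rates are specific to the paper; what can be sketched safely is the two-measurement SPSA gradient estimate it builds on:

    import numpy as np

    rng = np.random.default_rng(0)

    def spsa_step(w, loss, a=0.01, c=0.01):
        """One SPSA update: estimate the gradient from just two loss
        measurements using a random simultaneous perturbation."""
        delta = rng.choice([-1.0, 1.0], size=w.shape)   # Rademacher directions
        g_hat = (loss(w + c * delta) - loss(w - c * delta)) / (2.0 * c * delta)
        return w - a * g_hat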

17.
Stochastic learning automata and genetic algorithms (GAs) have previously been shown to have valuable global optimization properties. Learning automata have, however, been criticized for a relatively slow rate of convergence. In this paper, these two techniques are combined to increase the rate of convergence of the learning automata and to improve the chances of escaping local optima. The technique separates the genotype and phenotype properties of the GA and has the advantage that the degree of convergence can be quickly ascertained. It also provides the GA with a stopping rule. When the technique is applied to real-valued function optimization problems, bounds on the range of values within which the global optimum is expected can be determined throughout the search process. The technique is demonstrated through a number of bit-based and real-valued function optimization examples.

18.
Due to its simplicity and ease of use, the standard grey wolf optimizer (GWO) is attracting much attention. However, its imperfect search structure and possible risk of being trapped in local optima have limited its application. To improve the performance of the algorithm, an optimized GWO is proposed based on a mutation operator and an eliminating-reconstructing mechanism (MR-GWO). Analysis of GWO shows that it conducts its search with only three leading wolves at the core and balances exploration and exploitation by adjusting only the parameter a, which means the wolves lose diversity to some extent. Therefore, a mutation operator is introduced for the better-performing search wolves, and an eliminating-reconstructing mechanism is applied to the poorly performing ones; these not only effectively expand the stochastic search but also accelerate convergence, and the two operations complement each other well. To verify its validity, MR-GWO is applied to a global optimization experiment on 13 standard continuous functions and to a radial basis function (RBF) network approximation experiment. Comparison with other algorithms proves that MR-GWO has a strong advantage.

19.
Cluster analysis is one of the important tasks in data mining, but the traditional biogeography-based optimization (BBO) algorithm, which suffers from premature convergence and slow convergence speed, can hardly meet the demands of complex clustering problems of NP (non-deterministic polynomial) nature. A clustering algorithm based on mixed biogeography-based optimization (MBBO) is therefore proposed. The algorithm constructs a new migration operator based on gradient-descent local greedy search and uses the clustering objective value as individual fitness to search for the latent cluster structure in the dataset. Experiments on four standard datasets (Iris, Wine, Glass, and Diabetes) show that MBBO has better optimization ability and convergence than traditional optimization algorithms and can discover higher-quality cluster structure patterns.
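The clustering objective serving as individual fitness is assumed below to be the within-cluster sum of squared errors over a candidate set of centroids; the paper's exact objective may differ:

    import numpy as np

    def sse_fitness(centroids, X):
        """Within-cluster sum of squared errors for one candidate solution
        (a set of centroids); lower is better."""
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        return float((d.min(axis=1) ** 2).sum())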

20.

The paper observes a similarity between the stochastic optimal control of discrete dynamical systems and the learning of multilayer neural networks. It focuses on contemporary deep networks with nonconvex, nonsmooth loss and activation functions. The machine learning problems are treated as nonconvex, nonsmooth stochastic optimization problems. As a model of nonsmooth, nonconvex dependences, so-called generalized-differentiable functions are used. The backpropagation method for calculating stochastic generalized gradients of the learning quality functional for such systems is substantiated based on the Hamilton–Pontryagin formalism. Stochastic generalized gradient learning algorithms are extended to training nonconvex, nonsmooth neural networks. The performance of a stochastic generalized gradient algorithm is illustrated on a linear multiclass classification problem.

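As a small concrete instance of a stochastic generalized-gradient step, the sketch below takes a subgradient of the nonsmooth multiclass hinge loss for a linear classifier; this illustrates the kind of update such a framework justifies, not the paper's full backpropagation construction:

    import numpy as np

    def sgg_step(W, x, y, lr=0.1):
        """One stochastic generalized-gradient step for a linear multiclass
        classifier W (classes x features) under the nonsmooth hinge loss
        max_j (s_j - s_y + [j != y])_+ ; a subgradient replaces the gradient."""
        scores = W @ x
        margins = scores - scores[y] + 1.0
        margins[y] = 0.0
        j = int(np.argmax(margins))            # most-violating class
        if margins[j] > 0.0:                   # nonzero subgradient here
            W = W.copy()
            W[j] -= lr * x
            W[y] += lr * x
        return W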
