Similar Documents
20 similar documents found (search time: 15 ms)
1.
The monarch butterfly optimization (MBO) algorithm is a recently proposed swarm intelligence optimization algorithm. However, it still suffers from slow convergence and a tendency to become trapped in local optima. To overcome these shortcomings, an improved monarch butterfly optimization algorithm (IMBO) is proposed. The algorithm dynamically and randomly splits the population into two subpopulations, and the butterflies in each subpopulation use a different search method, preserving the diversity of the population's search. Simulation experiments on 10 benchmark functions, with comparisons against the MBO algorithm and the standard PSO algorithm, show that IMBO markedly improves global search ability and achieves better convergence speed and stability in function optimization.
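The dynamic random split described in this abstract can be sketched as follows; this is a minimal illustration of the idea, and the 50/50 split ratio and re-splitting every generation are assumptions, not details taken from the paper.

```python
import random

def split_population(population, rng=random.Random(0)):
    """Randomly partition a population into two subpopulations.

    Sketch of the IMBO strategy described above: each generation the
    swarm is re-split at random so that the two subgroups (which apply
    different search operators) keep exchanging members and preserve
    search diversity. The even split is our assumption.
    """
    shuffled = population[:]
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]
```

In a full optimizer this split would be redone at the start of every generation, with each subgroup updated by its own search rule before the subgroups are merged back.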

2.

Water cycle algorithm (WCA) is a new population-based meta-heuristic technique, originally inspired by the idealized hydrological cycle observed in nature. The conventional WCA demonstrates superior performance compared to other well-established techniques in solving both constrained and unconstrained problems. As with other meta-heuristics, however, premature convergence to local optima may still occur on some specific optimization tasks. Motivated by the chaotic behaviour of real water cycles, this article incorporates chaotic patterns into the stochastic processes of WCA to improve the conventional algorithm's performance and mitigate its premature convergence problem. First, different chaotic signal functions along with various chaotic-enhanced WCA strategies (39 meta-heuristics in total) are implemented, and the best signal is selected as the most appropriate chaotic technique for modifying WCA. Second, the chaotic algorithm is employed to tackle various benchmark problems published in the specialized literature as well as the training of neural networks. The comparative statistical results of the new technique vividly demonstrate that the premature convergence problem is relieved significantly. Chaotic WCA with a sinusoidal map and chaotic-enhanced operators not only exploits high-quality solutions efficiently but also outperforms the conventional WCA optimizer and the other investigated algorithms.
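The chaotic enhancement amounts to replacing uniform random draws inside the metaheuristic with iterates of a chaotic map. A minimal sketch of the sinusoidal map the abstract highlights is shown below; the parameter values (a = 2.3, x0 = 0.7) are common choices from the chaos literature and are assumptions, not necessarily those used in the article.

```python
import math

def sinusoidal_map(x0=0.7, a=2.3, n=5):
    """Generate a chaotic sequence with the sinusoidal map
    x_{k+1} = a * x_k**2 * sin(pi * x_k).

    With a = 2.3 the iterates stay in (0, 1), so they can stand in
    for the uniform random numbers that drive a stochastic operator
    in an algorithm such as WCA.
    """
    seq, x = [], x0
    for _ in range(n):
        x = a * x * x * math.sin(math.pi * x)
        seq.append(x)
    return seq
```

The deterministic-but-ergodic iterates help a search escape local optima because successive values are correlated in a way plain pseudo-random draws are not.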


3.

Training artificial neural networks is considered one of the most challenging machine learning problems, mainly because of the large number of solutions and the changes in the search space across datasets. Conventional training techniques mostly suffer from local optima stagnation and degraded convergence, which make them impractical for datasets with many features. The literature shows that stochastic population-based optimization techniques suit this problem better and are a reliable alternative because of their high local optima avoidance and flexibility. For the first time, this work proposes a new learning mechanism for radial basis function networks based on the biogeography-based optimizer, one of the most well-regarded optimizers in the literature. To prove the efficacy of the proposed methodology, it is employed to solve 12 well-known datasets and compared to 11 current training algorithms, including gradient-based and stochastic approaches. The paper also varies the number of neurons and investigates the performance of the algorithms on radial basis function networks with different numbers of parameters. A statistical test is conducted to assess the significance of the results. The results show that the biogeography-based optimizer trainer substantially outperforms the current training algorithms on all datasets in terms of classification accuracy, speed of convergence, and avoidance of entrapment in local optima. In addition, the comparison of trainers on radial basis function networks of different neuron counts reveals that the biogeography-based optimizer trainer can effectively train radial basis function networks with different numbers of structural parameters.


4.
Learning and convergence analysis of neural-type structured networks   Cited by: 6 (self-citations: 0, others: 6)
A class of feedforward neural networks, structured networks, has recently been introduced as a method for solving matrix algebra problems in an inherently parallel formulation. A convergence analysis for the training of structured networks is presented. Since the learning techniques used in structured networks are also employed in the training of neural networks, the issue of convergence is discussed not only from a numerical algebra perspective but also as a means of deriving insight into connectionist learning. Bounds on the learning rate are developed under which exponential convergence of the weights to their correct values is proved for a class of matrix algebra problems that includes linear equation solving, matrix inversion, and Lyapunov equation solving. For a special class of problems, the orthogonalized back-propagation algorithm, an optimal recursive update law for minimizing a least-squares cost functional, is introduced. It guarantees exact convergence in one epoch. Several learning issues are investigated.
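The linear-equation-solving case described above can be sketched as gradient descent on a least-squares cost, with the learning rate kept below the stability bound; the concrete rate choice and stopping rule here are ours, not the paper's.

```python
import numpy as np

def solve_linear_gd(A, b, steps=500):
    """Solve A x = b by gradient descent on ||A x - b||^2 / 2.

    Minimal sketch of the connectionist view above: the "weights"
    (here x) converge exponentially when the learning rate stays
    below 2 / lambda_max(A^T A), mirroring the kind of learning-rate
    bound derived in the paper.
    """
    x = np.zeros(A.shape[1])
    eta = 1.0 / np.linalg.norm(A, 2) ** 2  # safely below 2 / lambda_max
    for _ in range(steps):
        x -= eta * A.T @ (A @ x - b)
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = solve_linear_gd(A, b)  # converges toward the solution [2, 3]
```

The exponential convergence rate is governed by the smallest eigenvalue of A^T A, which is why the paper's learning-rate bounds matter in practice.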

5.
A hybrid neural-evolutionary algorithm for the maximum independent set problem   Cited by: 5 (self-citations: 0, others: 5)
李有梅, 徐宗本, 孙建永. Chinese Journal of Computers, 2003, 26(11): 1538-1545
A hybrid neural-evolutionary algorithm is proposed for the maximum independent set problem (MIS). Based on a space-partition and exclusion strategy, the algorithm effectively combines the fast convergence of neural networks with the robust global search of genetic algorithms. Compared with the standard genetic algorithm and neural network algorithms, it exhibits markedly better global optimization behaviour and computational efficiency.

6.
The biogeography-based optimisation (BBO) algorithm is a novel evolutionary algorithm inspired by biogeography. Like other evolutionary algorithms, it can suffer from entrapment in local optima and slow convergence when solving challenging real-world problems. Owing to the novelty of this algorithm, however, there is little in the literature on alleviating these two problems. Chaotic maps are among the best methods for improving the performance of evolutionary algorithms in terms of both local optima avoidance and convergence speed. In this study, we utilise ten chaotic maps to enhance the performance of the BBO algorithm. The chaotic maps are employed to define the selection, emigration, and mutation probabilities. The proposed chaotic BBO algorithms are benchmarked on ten test functions. The results demonstrate that the chaotic maps (especially the Gauss/mouse map) significantly boost the performance of BBO, and that the combination of chaotic selection and emigration operators yields the highest performance.
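The Gauss/mouse map singled out in the abstract is simple to iterate. Below is a minimal sketch; feeding the raw map value directly in place of a selection, emigration, or mutation probability is our simplification of the scheme described.

```python
def gauss_mouse_map(x0=0.7, n=5):
    """Chaotic Gauss/mouse map: x_{k+1} = (1 / x_k) mod 1, with 0
    mapping to 0.

    The iterates lie in [0, 1), so each value can replace a random
    probability inside an evolutionary operator such as BBO's
    selection, emigration, or mutation step.
    """
    seq, x = [], x0
    for _ in range(n):
        x = 0.0 if x == 0 else (1.0 / x) % 1.0
        seq.append(x)
    return seq
```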

7.
This paper presents a recurrent neural network for solving nonconvex nonlinear optimization problems subject to nonlinear inequality constraints. First, the p-power transformation is exploited for local convexification of the Lagrangian function of the nonconvex nonlinear optimization problem. Next, the proposed neural network is constructed based on the Karush–Kuhn–Tucker (KKT) optimality conditions and the projection function. An important property of this neural network is that its equilibrium point corresponds to the optimal solution of the original problem. By utilizing an appropriate Lyapunov function, it is shown that the proposed neural network is stable in the sense of Lyapunov and convergent to the global optimal solution of the original problem. The sensitivity of the convergence is also analysed by changing the scaling factors. Compared with other existing neural networks for this problem, the proposed neural network offers advantages such as high accuracy of the obtained solutions, fast convergence, and low complexity. Finally, simulation results show the benefits of the proposed model, which matches or outperforms existing models.

8.
Research on gradient algorithms for reinforcement learning with neural networks   Cited by: 11 (self-citations: 1)
徐昕, 贺汉根. Chinese Journal of Computers, 2003, 26(2): 227-233
For Markov decision problems with continuous state spaces and discrete action spaces, a new gradient-descent reinforcement learning algorithm is proposed that uses a multilayer feedforward neural network for value-function approximation. The algorithm adopts an approximately greedy, continuously differentiable Boltzmann-distribution action-selection policy, and approximates the optimal value function of the Markov decision process by minimizing the sum of squared Bellman residuals under a non-stationary action policy. The convergence of the algorithm and the performance of the resulting near-optimal policy are analysed theoretically, and simulation studies on the Mountain-Car learning-control problem further verify the algorithm's learning efficiency and generalization performance.

9.
To address the tendency of monarch butterfly optimization (MBO) to become trapped in local optima and its slow convergence, an improved MBO with cross migration and sharing adjustment (CSMBO) is proposed. First, a dimension-wise vertical crossover operation replaces the migration operator of the standard MBO, forming a cross-migration operator that effectively improves the algorithm's search ability. Second, the original adjusting operator is replaced with a sharing adjustment operator that shares information among individuals, accelerating convergence. Finally, a greedy selection strategy replaces the elitism strategy of the standard MBO, removing one sorting operation and thereby improving computational efficiency. To verify the optimization ability of CSMBO, it is tested on 30- and 50-dimensional functions and compared with three other optimization algorithms; the experimental results show that CSMBO has good optimization performance.

10.
In this paper, a new recurrent neural network is proposed for solving convex quadratic programming (QP) problems. Compared with existing neural networks, the proposed one features global convergence under weak conditions, low structural complexity, and no need to calculate a matrix inverse. It serves as a competitive alternative in the neural network family for solving linear or quadratic programming problems. In addition, it is found that, by some variable substitution, the proposed network turns out to be an existing model for solving minimax problems; in this sense, it can also be viewed as a special case of the minimax neural network. Based on this scheme, a k-winners-take-all (k-WTA) network with O(n) complexity is designed, characterized by a simple structure, global convergence, and the capability to deal with some ill cases. Numerical simulations are provided to validate the theoretical results. More importantly, the network design method proposed in this paper has great potential to inspire other competitive inventions along the same line.
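The k-WTA operation the abstract mentions outputs 1 for the k largest inputs and 0 for the rest. The direct selection below illustrates what the network computes; it stands in for the recurrent network's convergent dynamics and is not the proposed model itself.

```python
import heapq

def k_wta(values, k):
    """k-winners-take-all: mark the k largest inputs with 1, else 0.

    Sketch of the input-output behaviour of a k-WTA network; ties at
    the threshold are broken by position, which is our simplification.
    """
    threshold = heapq.nlargest(k, values)[-1]
    winners, count = [], 0
    for v in values:
        if v >= threshold and count < k:
            winners.append(1)
            count += 1
        else:
            winners.append(0)
    return winners
```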

11.
Artificial neural networks (ANNs) have been applied to extract the Brillouin frequency shift (BFS) measured by Brillouin optical time-domain analyzers (BOTDA), but they tend to become trapped in local optima and converge slowly. To overcome these drawbacks, this paper proposes a WOA-optimized neural network (WOA-NN) for fast extraction of the BFS of Brillouin fiber sensors; a nonlinear convergence factor a is then designed to construct a nonlinear-WOA-optimized neural network (NWOA-NN) for BFS extraction. The two proposed networks are compared with the classical ANN, a particle-swarm-optimized neural network (PSO-NN), and a genetic-algorithm-optimized neural network (GA-NN). Experimental results show that the proposed WOA-NN model outperforms the other three networks when extracting temperature information from BOTDA measurements, with average temperature RMSEs about 42.66%, 52.51%, and 45.93% lower than those of the ANN, PSO-NN, and GA-NN, respectively; the average RMSE of the NWOA-NN model is a further 19.08% better than that of the WOA-NN. The average training times of the ANN, PSO-NN, GA-NN, WOA-NN, and NWOA-NN are 929.71 s, 889.49 s, 699.36 s, 580.06 s, and 549.12 s, respectively, so the two proposed networks also train relatively quickly.

12.
This paper proposes a novel swarm intelligence technique, an adaptation of Abbass's marriage in honey-bee optimization (MBO), which aims to achieve better overall performance than the original MBO while also lowering the computation time for finding the optimal solution. The original MBO has been proven to be one of the best swarm intelligence algorithms for solving optimization problems, but many parameters must be set properly for it to perform at its best, so long computation times caused by the many trial-and-error iterations needed to find the right combination of parameters are unavoidable. The framework of the proposed algorithm is similar to the original MBO, which is based on the marriage behavior of honey bees. To improve the efficiency of the MBO algorithm, several aspects of the original have been adapted: (1) the proposed algorithm automatically searches for the proper number of queens; (2) it divides the problem space into several colonies, each with its own queen, and, to keep the number of colonies to a minimum, encourages the queens to compete with each other for larger colonies and urges newly born broods that are fitter than the colony's queen to overthrow her; and (3) the fuzzy c-means algorithm is employed to assign the drones to the proper colonies. The proposed algorithm has been evaluated against the original MBO algorithm, and the experimental results on six benchmark problems demonstrate its potential to offer an efficient and effective solution.

13.
Nature-inspired optimization algorithms, notably evolutionary algorithms (EAs), have been widely used to solve various scientific and engineering problems because of their simplicity and flexibility. Here we report a novel optimization algorithm, group search optimizer (GSO), which is inspired by animal behavior, especially animal searching behavior. The framework is mainly based on the producer-scrounger model, which assumes that group members search either for "finding" (producer) or for "joining" (scrounger) opportunities. Based on this framework, concepts from animal searching behavior, e.g., animal scanning mechanisms, are employed metaphorically to design optimum searching strategies for solving continuous optimization problems. When tested against benchmark functions in low and high dimensions, the GSO algorithm shows performance competitive with other EAs in terms of accuracy and convergence speed, especially on high-dimensional multimodal problems. The GSO algorithm is also applied to train artificial neural networks, and the promising results on three real-world benchmark problems show the applicability of GSO for problem solving.

14.
This paper proposes a novel optimization algorithm inspired by the motion of ions in nature: the proposed algorithm mimics the attraction and repulsion of anions and cations to perform optimization. The algorithm is designed to have few tuning parameters, low computational complexity, fast convergence, and high local optima avoidance. Its performance is benchmarked on 10 standard test functions and compared to four well-known algorithms in the literature. The results demonstrate that the proposed algorithm shows very competitive results and has merits in solving challenging optimization problems.

15.
R. S. N. P. Neurocomputing, 2009, 72(16-18): 3771
In a fully complex-valued feed-forward network, the convergence of the Complex-valued Back Propagation (CBP) learning algorithm depends on the choice of the activation function, the learning sample distribution, the minimization criterion, the initial weights, and the learning rate. The minimization criteria used in the existing versions of the CBP learning algorithm in the literature do not approximate the phase of the complex-valued output well in function approximation problems. The phase of a complex-valued output is critical in telecommunications, and in reconstruction and source-localization problems in medical imaging applications. In this paper, the issues related to the convergence of complex-valued neural networks are clearly enumerated using a systematic sensitivity study on existing complex-valued neural networks. In addition, we compare the performance of different types of split complex-valued neural networks. From the observations in the sensitivity analysis, we propose a new CBP learning algorithm with a logarithmic performance index for a complex-valued neural network with an exponential activation function. The proposed CBP learning algorithm directly minimizes both the magnitude and phase errors and also provides better convergence characteristics. Performance of the proposed scheme is evaluated using two synthetic complex-valued function approximation problems, the complex XOR problem, and a non-minimum-phase equalization problem. A comparative analysis of the convergence of the existing fully complex and split complex networks is also presented.

16.
Constrained optimization problems arise in numerous scientific and engineering applications, and many papers on the online solution of constrained optimization problems using projection neural networks have been published in the literature. The purpose of this paper is to provide a comprehensive review of the research on projection neural networks for solving various constrained optimization problems, as well as their applications. Since convergence and stability are important for projection neural networks, theoretical results on projection neural networks are reviewed in detail. In addition, various applications of projection neural networks, e.g., motion generation for redundant robot manipulators, coordination control of multiple robots with limited communications, generation of winner-take-all strategies, model predictive control, and WSN localization, are discussed and compared. Concluding remarks and future directions for projection neural networks and their applications are provided.

17.
Systems based on artificial neural networks have high computational rates owing to the use of a massive number of simple processing elements and the high degree of connectivity between these elements. Neural networks with feedback connections provide a computing model capable of solving a large class of optimization problems. This paper presents a novel approach for solving dynamic programming problems using artificial neural networks. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points. Simulated examples are presented and compared with other neural networks. The results demonstrate that the proposed method gives a significant improvement.

18.
Numerical integration is widely used in science and engineering, and a new method for computing numerical integrals is proposed here. The main idea is to train the weights of a neural network so that a Fourier series approximates the unknown integrand, and then to use that approximation to estimate the integral. Convergence theorems for the neural-network algorithm and theorems for computing the numerical integrals are stated and proved. Numerical examples verify the effectiveness of the proposed algorithm. The results show that the proposed numerical integration method achieves high accuracy and has considerable value in engineering practice.
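The idea above (approximate the integrand by a Fourier series, then integrate the series term by term) can be sketched as follows. The discrete coefficient estimates stand in for the neural-network weight training described in the paper, and the interval [0, 2π], term count, and sample count are our assumptions.

```python
import math

def fourier_integral(f, c, n_terms=8, n_samples=256):
    """Approximate the integral of f over [0, c] (0 <= c <= 2*pi) by
    fitting a truncated Fourier series to f on [0, 2*pi] and
    integrating the series analytically term by term.

    The coefficients a_k, b_k are estimated from uniform samples;
    each cosine/sine term then has a closed-form antiderivative.
    """
    xs = [2 * math.pi * j / n_samples for j in range(n_samples)]
    fs = [f(x) for x in xs]
    a0 = 2.0 / n_samples * sum(fs)
    total = a0 * c / 2.0  # integral of the constant term a0/2
    for k in range(1, n_terms + 1):
        ak = 2.0 / n_samples * sum(v * math.cos(k * x) for v, x in zip(fs, xs))
        bk = 2.0 / n_samples * sum(v * math.sin(k * x) for v, x in zip(fs, xs))
        total += ak * math.sin(k * c) / k + bk * (1 - math.cos(k * c)) / k
    return total
```

For band-limited integrands the discrete coefficient estimates are essentially exact, which is why such series-based schemes can reach high accuracy with few terms.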

19.

The conventional Butterfly Optimization Algorithm (BOA) does not appropriately balance exploration and exploitation when solving present-day challenging optimization problems. To address this, a novel hybrid BOA (MPBOA, for short) is suggested, in which BOA is combined with the mutualism and parasitism phases of the Symbiotic Organisms Search (SOS) algorithm to enhance both the global and local search behaviour of BOA. The mutualism phase is applied to the global phase of BOA, and the parasitism phase is added to the local phase of BOA, to ensure a better trade-off between the global and local search of the proposed algorithm. A suite of twenty-five benchmark functions is employed to compare its performance with several other state-of-the-art algorithms from the literature. To check its performance statistically, the Friedman rank test and the t-test are carried out, and the consistency of the proposed algorithm is examined with boxplot diagrams. Four real-world problems are also solved to check the algorithm's efficiency on industrial problems. Finally, the proposed MPBOA is used to obtain the optimal thresholds in multilevel thresholding for image segmentation. The results show that the overall performance of the newly introduced MPBOA is satisfactory in terms of both search behaviour and convergence time to the global optimum.


20.
A special class of recurrent neural networks (RNN) has recently been proposed by Zhang et al. for solving online time-varying matrix problems. Unlike conventional gradient-based neural networks (GNN), these RNN (termed Zhang neural networks, ZNN) are designed based on matrix-valued error functions instead of scalar-valued norm-based energy functions. In this paper, we generalize and further investigate the ZNN model for time-varying matrix square root finding. For the purpose of possible hardware (e.g., digital circuit) realization, a discrete-time ZNN model is constructed and developed, which incorporates Newton iteration as a special case. In addition, to obtain an appropriate step-size value in each iteration, a line-search algorithm is employed for the proposed discrete-time ZNN model. Computer-simulation results substantiate the effectiveness of the proposed ZNN model aided with a line-search algorithm, as well as its connection to Newton iteration for matrix square root finding.
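The Newton iteration that the discrete-time ZNN model contains as a special case is the classical matrix square-root iteration, sketched below for a constant (time-invariant) symmetric positive definite matrix; the iteration count and test matrix are illustrative assumptions.

```python
import numpy as np

def newton_matrix_sqrt(A, iters=20):
    """Newton iteration X_{k+1} = (X_k + X_k^{-1} A) / 2 with X_0 = A,
    converging to the principal square root of a symmetric positive
    definite matrix A.

    This is the constant-matrix special case the abstract refers to,
    not the time-varying ZNN model itself.
    """
    X = A.copy()
    for _ in range(iters):
        X = 0.5 * (X + np.linalg.solve(X, A))
    return X

A = np.array([[4.0, 0.0], [0.0, 9.0]])
X = newton_matrix_sqrt(A)  # approaches [[2, 0], [0, 3]]
```

The ZNN generalization replaces this fixed-point update with dynamics that track A(t) as it changes over time, with the line search choosing the step size at each iteration.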


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)    京ICP备09084417号-23
