Similar Documents
20 similar documents found (search time: 31 ms)
1.
An Opposition-Based Learning Particle Swarm Optimization Algorithm with Adaptive Cauchy Mutation   Cited: 1 (self-citations: 0, citations by others: 1)
To address the premature convergence of traditional particle swarm optimization (PSO), an opposition-based learning PSO algorithm with adaptive mutation is proposed. Building on general opposition-based learning, the algorithm introduces an adaptive Cauchy mutation (ACM) strategy. The general opposition-based learning strategy generates opposite solutions, which enlarges the search space and strengthens global exploration. To keep particles from stagnating in local optima, the ACM strategy perturbs the current best particle and adaptively selects the mutation point, improving local exploitation while letting the algorithm converge to the global optimum more smoothly and rapidly. A nonlinear adaptive inertia weight is also adopted to further balance global search and local exploitation. The algorithm is compared with several opposition-based PSO variants on 14 test functions, and the experimental results show substantial improvements in both solution accuracy and convergence speed.
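To make the two operators above concrete, the following minimal sketch (an interpretation of the abstract, not the authors' code) generates generalized opposite solutions via x* = k(a + b) - x and perturbs the current best particle with a Cauchy step; the function names and the unit scale are illustrative assumptions.

```python
import numpy as np

def general_opposition(X, lower, upper, k=None):
    """Generalized opposition-based learning: x* = k * (a + b) - x, where
    [a, b] are the per-dimension bounds and k is drawn uniformly from [0, 1]."""
    if k is None:
        k = np.random.rand()
    return np.clip(k * (lower + upper) - X, lower, upper)

def cauchy_mutate_best(gbest, scale=1.0):
    """Perturb the global best with a Cauchy-distributed step; the heavy tail
    occasionally produces large jumps that help the swarm escape local optima."""
    return gbest + scale * np.random.standard_cauchy(size=gbest.shape)
```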

2.
A probabilistic opposition-based particle swarm optimization algorithm with velocity clamping and inertia weights (OvcPSO) is designed for function optimization, with the aim of accelerating convergence and improving solution accuracy on standard benchmark functions. Probabilistic opposition-based learning for particles is incorporated into PSO to enhance the convergence rate, while velocity clamping and inertia weights control the position, speed and direction of particles to avoid premature convergence. A comprehensive set of 58 complex benchmark functions spanning a wide range of dimensions is used for experimental verification, and the results show that OvcPSO handles complex optimization problems effectively and efficiently. A series of experiments investigates the influence of population size and dimensionality on the performance of different PSO variants; OvcPSO outperforms FDR-PSO, CLPSO, FIPS, CPSO-H and GOPSO on various benchmark functions. Finally, OvcPSO is compared with opposition-based differential evolution (ODE) and outperforms it with smaller swarm populations and on higher-dimensional functions.
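The velocity clamping and inertia-weight mechanism referred to above is a standard PSO ingredient; the sketch below shows one common form (the parameter values and function name are illustrative assumptions, not the OvcPSO authors' exact settings).

```python
import numpy as np

def velocity_update(v, x, pbest, gbest, w, c1=2.0, c2=2.0, v_max=1.0):
    """Inertia-weight PSO velocity update followed by velocity clamping.
    w, c1, c2 and v_max are placeholder values for illustration only."""
    r1 = np.random.rand(*x.shape)
    r2 = np.random.rand(*x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return np.clip(v_new, -v_max, v_max)  # clamp each velocity component
```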

3.
This paper presents a novel algorithm based on generalized opposition-based learning (GOBL) to improve the performance of differential evolution (DE) in solving high-dimensional optimization problems efficiently. The proposed approach, named GODE, follows the scheme of opposition-based DE (ODE), applying GOBL to both opposition-based population initialization and generation jumping. Experiments are conducted to verify the performance of GODE on 19 high-dimensional problems with D = 50, 100, 200, 500, 1,000. The results confirm that GODE outperforms classical DE, real-coded CHC (cross-generational elitist selection, heterogeneous recombination, and cataclysmic mutation) and G-CMA-ES (restart covariance matrix adaptation evolution strategy) on the majority of test problems.
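For reference, generation jumping with GOBL typically works as sketched below (a generic reconstruction; the jump rate of 0.3 and the helper name are assumptions rather than details taken from the abstract): with some probability, the generalized opposite of the whole population is computed inside its current dynamic bounds, and the fittest individuals are kept from the union of the two populations.

```python
import numpy as np

def gobl_generation_jumping(pop, fitness, objective, jump_rate=0.3):
    """With probability jump_rate, build the generalized opposite of the current
    population inside its dynamic per-dimension bounds, then keep the fittest
    individuals from the union of the two populations (minimization assumed)."""
    if np.random.rand() > jump_rate:
        return pop, fitness
    a, b = pop.min(axis=0), pop.max(axis=0)     # dynamic interval boundaries
    k = np.random.rand()
    opp = k * (a + b) - pop                     # generalized opposite population
    opp_fitness = np.array([objective(ind) for ind in opp])
    union = np.vstack([pop, opp])
    union_fitness = np.concatenate([fitness, opp_fitness])
    keep = np.argsort(union_fitness)[: len(pop)]
    return union[keep], union_fitness[keep]
```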

4.
李俊  汪冲  李波  方国康 《计算机应用》2016,36(3):681-686
To address the tendency of particle swarm optimization (PSO) toward premature convergence and its low accuracy in the late stage of evolution, a multi-strategy particle swarm optimization (MSPSO) algorithm is proposed. First, a probability threshold of 0.3 is set: during each iteration, if a randomly generated probability falls below the threshold, opposition-based learning is applied to the best individual of the current population to generate its opposite solution, improving convergence speed and accuracy; otherwise, Gaussian mutation is applied to particle positions to enhance population diversity. Second, a Cauchy mutation strategy with a linearly decreasing scale parameter is proposed, which generates better solutions that guide particles toward the optimal region of the search space. Finally, simulations on 8 standard benchmark functions show that MSPSO converges to mean values of 1.68E+01, 2.36E-283, 8.88E-16, 2.78E-05 and 8.88E-16 on the Rosenbrock, Schwefel's P2.22, Rotated Ackley, Quadric Noise and Ackley functions respectively, and reaches the optimum of 0 on the Sphere, Griewank and Rastrigin functions, outperforming Gaussian disturbance PSO (GDPSO) and opposition-based PSO with Cauchy mutation (GOPSO). The results indicate that the proposed algorithm achieves high convergence accuracy and keeps particles from becoming trapped in local optima.

5.
余伟伟  谢承旺 《计算机科学》2018,45(Z6):120-123
To address the tendency of traditional particle swarm optimization to fall into local optima and converge slowly on some complex optimization problems, a hybrid particle swarm optimization with multiple strategies (HPSO) is proposed. The algorithm uses an opposition-based learning strategy to generate an opposite population, enlarging the search range and strengthening global exploration. To keep the population from stagnating in local optima, Cauchy mutation is applied to some of the worse individuals to produce solutions far from local extrema, while differential evolution mutation is applied to the better individuals to strengthen local exploitation. The three strategies are combined to better balance the global exploration and local exploitation abilities of PSO. HPSO is compared with three other well-known PSO algorithms on 10 standard benchmark functions, and the results show that it has a clear advantage in solution accuracy and convergence speed.

6.
Particle swarm optimization (PSO) is a population-based algorithm for solving global optimization problems. Owing to its efficiency and simplicity, PSO has attracted many researchers' attention and spawned many variants. Orthogonal learning particle swarm optimization (OLPSO) is a variant of PSO that relies on a new learning strategy called the orthogonal learning strategy. OLPSO differs from standard PSO in how it uses experience: in standard PSO each particle combines its historical best experience and the global best experience through linear summation, whereas in OLPSO particles fly in better directions by constructing an efficient exemplar through orthogonal experimental design. However, the global-version orthogonal learning PSO (OLPSO-G) still has drawbacks on some complex multimodal optimization functions. In this paper, we propose a quadratic-interpolation-based OLPSO-G (QIOLPSO-G), in which a quadratic-interpolation-based construction strategy is applied to the personal historical best experience. Opposition-based learning and Gaussian mutation are also introduced to increase the diversity of the population and discourage premature convergence. Experiments are conducted on 16 benchmark problems to validate the effectiveness of QIOLPSO-G, with comparisons against four typical PSO algorithms. The results show that the introduction of the three strategies does enhance the effectiveness of the algorithm.
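The quadratic interpolation at the heart of the exemplar construction fits a parabola through three candidate points and returns its vertex per dimension; the sketch below shows the generic three-point formula (an illustration under a minimization assumption, not the authors' exact construction strategy).

```python
import numpy as np

def quadratic_interpolation(x1, f1, x2, f2, x3, f3, eps=1e-12):
    """Vertex of the parabola fitted through (x1, f1), (x2, f2), (x3, f3),
    applied per dimension; x1, x2, x3 are position vectors, f1, f2, f3 scalars."""
    num = (x2**2 - x3**2) * f1 + (x3**2 - x1**2) * f2 + (x1**2 - x2**2) * f3
    den = (x2 - x3) * f1 + (x3 - x1) * f2 + (x1 - x2) * f3
    den = np.where(np.abs(den) < eps, eps, den)  # guard against a degenerate fit
    return 0.5 * num / den
```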

7.
Evolutionary algorithms (EAs), which have been widely used to solve various scientific and engineering optimization problems, are essentially stochastic search algorithms operating in the overall solution space. However, such a random search mechanism can lead to long computing times and premature convergence. In this study, we propose a space search optimization algorithm (SSOA) with accelerated convergence strategies to alleviate the drawbacks of a purely random search mechanism. The overall framework of SSOA involves three main search mechanisms: local space search, global space search, and opposition-based search. The local space search, which aims to form new solutions approaching the local optimum, is based on the concept of the augmented simplex method and exhibits strong search ability within a local region. The global space search is carried out by Cauchy searching, based on the Cauchy mutation; this operation helps the method avoid being trapped in local optima and thereby alleviates premature convergence. An opposition-based search is exploited to accelerate the convergence of the space search and can effectively reduce the substantial computational overhead encountered in EAs. Together, these mechanisms give SSOA an effective search process. To evaluate the method, SSOA is compared with differential evolution (DE), a well-known space-concept-based evolutionary algorithm. When tested on benchmark functions, SSOA shows competitive accuracy and convergence speed against several competitive differential evolution schemes, especially on high-dimensional continuous optimization problems.

8.
钱晓宇  方伟 《控制与决策》2021,36(4):779-789
To improve the performance of particle swarm optimization on complex optimization problems, especially high-dimensional ones, an opposition-based learning competitive particle swarm optimizer with Solis & Wets local search (SW-OBLCSO) is proposed. SW-OBLCSO combines two learning mechanisms, competitive learning and opposition-based learning, and designs an individual-based local search operator. SW-OBLCSO is compared with several optimization algorithms at different dimensionalities on 10 commonly used benchmark functions and 12 complex shifted and rotated test functions. The experimental results show that the proposed algorithm has outstanding convergence speed and global search ability. Tests on fuzzy cognitive map learning problems further show that SW-OBLCSO also performs excellently on practical problems.

9.
To address the low solution accuracy of the standard grey wolf optimizer (GWO) and its tendency to fall into local optima when solving complex engineering optimization problems, a new grey wolf optimizer is proposed for unconstrained continuous function optimization. The algorithm first uses an opposition-based learning strategy to generate the initial population, laying a foundation for global search. Inspired by particle swarm optimization, a nonlinear decreasing update formula for the convergence factor is proposed, which is adjusted dynamically to balance global and local search. To avoid stagnation in local optima, a mutation operation is applied to the current best grey wolf. Simulation experiments on 10 test functions show that, compared with the standard GWO, the improved algorithm achieves better solution accuracy and faster convergence.
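The abstract does not give the nonlinear decreasing formula for the convergence factor, so the snippet below is only one plausible schedule, stated as an assumption: it keeps the factor larger in early iterations (favoring exploration) and lets it drop faster later (favoring exploitation) than the standard linear GWO schedule.

```python
def convergence_factor(t, t_max, a_start=2.0, a_end=0.0, power=2.0):
    """One plausible nonlinear decreasing schedule for the GWO convergence factor
    (an assumption -- the abstract does not specify the formula): the factor decays
    from a_start to a_end along a power curve rather than the linear schedule
    a = 2 - 2 * t / t_max."""
    return a_end + (a_start - a_end) * (1.0 - (t / t_max) ** power)
```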

10.
The Aquila optimizer (AO) and Harris hawks optimization (HHO) are optimization algorithms proposed in recent years. AO has strong global search ability but low convergence accuracy and is prone to falling into local optima, while HHO has strong local exploitation ability but weak global exploration and slow convergence. To address these limitations, this paper hybridizes the two algorithms and introduces a dynamic opposition-based learning strategy, yielding a hybrid Aquila and Harris hawks optimizer with dynamic opposition-based learning. First, dynamic opposition-based learning is introduced in the initialization stage to improve the hybrid algorithm's initialization quality and convergence speed. In addition, the hybrid retains AO's exploration mechanism and HHO's exploitation mechanism to improve search capability. Simulation experiments on 23 benchmark functions and 2 engineering design problems, together with comparisons against several classical opposition-based learning strategies, show that the hybrid with dynamic opposition-based learning converges better and can effectively solve the engineering design problems.

11.
Differential evolution (DE) is a well-known optimization approach for nonlinear and complex optimization problems. However, many real-world optimization problems are constrained problems involving equality and inequality constraints. DE with constraint handling techniques, named constrained differential evolution (CDE), can be used to solve constrained optimization problems. In this paper, we propose a new CDE framework that uses generalized opposition-based learning (GOBL), named GOBL-CDE. In GOBL-CDE, the transformed population is first generated with generalized opposition-based learning during population initialization. Second, the transformed population and the initial population are merged, and only the best half of the individuals are selected to compose the new initial population, which then proceeds to mutation, crossover, and selection. Lastly, based on a jumping probability, the transformed population is recomputed after each new generation, and the fittest individuals are selected from the union of the current population and the transformed population to form the new population. The GOBL-CDE framework can be applied to most CDE variants; as examples, it is applied here to two popular representative CDE variants, rank-iMDDE and εDEag. Experimental results on 24 benchmark functions from CEC'2006 and 18 benchmark functions from CEC'2010 show that the proposed framework is an effective approach to enhance the performance of CDE algorithms.

12.
Compared with other optimization algorithms, particle swarm optimization is simple to implement and finds solutions quickly, but studies show that standard PSO is prone to getting stuck in local minima during optimization. This paper proposes a quantum-behaved particle swarm optimization algorithm based on a Cauchy strategy. Simulation results on standard test functions show that the new algorithm not only improves global search ability but also speeds up the search, and that it can be applied to function optimization problems in practical engineering.

13.
Solving high-dimensional global optimization problems is time-consuming because of their high complexity. To reduce the computational time for high-dimensional problems, this paper presents a parallel differential evolution (DE) based on Graphics Processing Units (GPUs). The proposed approach, called GOjDE, employs self-adapting control parameters and generalized opposition-based learning (GOBL). The self-adapting parameter strategy avoids manually tuning the control parameters, and GOBL helps improve the quality of candidate solutions. Simulation experiments are conducted on a set of recently proposed high-dimensional benchmark problems with dimensions of 100, 200, 500 and 1,000. The results demonstrate that GOjDE is better than, or at least comparable to, six other algorithms, and that employing GPUs can effectively reduce computation time. The maximum speedup obtained is up to 75.

14.
The particle swarm optimization (PSO) algorithm is widely used in identifying Takagi-Sugeno (T-S) fuzzy system models. However, PSO suffers from premature convergence and is easily trapped into local optima, which affects the accuracy of T-S model identification. An immune coevolution particle swarm optimization with multi-strategy (ICPSO-MS) is proposed for modeling T-S fuzzy systems. The proposed ICPSO-MS consists of one elite subswarm and several normal subswarms. Each normal subswarm adopts a different strategy for adjusting the acceleration coefficients. A Cauchy learning operator is used to accelerate the convergence of the normal subswarm. During the iteration step, the best individual in each normal subswarm is added to the elite subswarm. Using adaptive hyper-mutation, the immune clonal selection operator is used to optimize the elite subswarm while the individuals in the elite subswarm migrate to the normal subswarms. This shared migration mechanism allows full exchange of information and coevolution. The performance of the proposed algorithm is evaluated on a suite of numerical optimization functions. The results show good performance of ICPSO-MS in solving numerical problems when compared with other recent variants of PSO. The performance of ICPSO-MS is further evaluated when identifying the T-S model, with simulation results on several typical nonlinear systems showing that the proposed method generates a good T-S fuzzy model with high accuracy and strong generalizability.

15.
何庆  林杰  徐航 《控制与决策》2021,36(7):1558-1568
Because its position update formula has strong local exploitation but weak global exploration, the grasshopper optimization algorithm (GOA) is prone to local optima and premature convergence. To address this, a grasshopper optimization algorithm hybridizing Cauchy mutation and a uniform distribution (HCUGOA) is proposed. Inspired by the Cauchy operator and particle swarm optimization, a piecewise position update scheme is proposed to increase population diversity and strengthen global exploration. The Cauchy mutation operator is fused with an opposition-based learning strategy to mutate and update the best position (the target), improving the algorithm's ability to escape local optima. To better balance global exploration and local exploitation, a uniform distribution function is introduced into the nonlinear control parameter c to build a new random adjustment strategy. The search capability of the improved algorithm is evaluated on 12 benchmark functions and the CEC2014 functions, together with the Wilcoxon rank-sum test; the experimental results show that HCUGOA is greatly improved in both convergence accuracy and convergence speed.

16.
刘宝  董明刚  敬超 《计算机应用》2018,38(8):2157-2163
To address the slow convergence and poor uniformity of multi-objective differential evolution, an improved ranking-based mutation multi-objective differential evolution algorithm (MODE-IRM) is proposed. The algorithm takes the best of the three parent individuals participating in mutation as the base vector, which speeds up the ranking-based mutation operator. In addition, an opposition-based parameter control method dynamically adjusts parameter values at different optimization stages, further improving convergence speed. Finally, an improved crowding distance formula is introduced for the sorting operation, improving the uniformity of the solutions. Simulation experiments are carried out on the standard multi-objective problems ZDT1-ZDT4, ZDT6 and DTLZ6-DTLZ7: MODE-IRM outperforms MODE-RMO, as well as MOEA/D-DE, RM-MEDA and IM-MOEA on the PlatEMO platform, in overall performance; in terms of generational distance (GD), inverted generational distance (IGD) and spacing (SP), the mean and variance of MODE-IRM are clearly smaller than those of MODE-RMO on all problems. The experimental results show that MODE-IRM is clearly superior to the compared algorithms in convergence and uniformity.

17.
韩红桂  徐子昂  王晶晶 《控制与决策》2023,38(11):3039-3047
Multi-task particle swarm optimization (MTPSO) converges quickly through knowledge-transfer learning and is widely used to solve multi-task multi-objective optimization problems. However, MTPSO has difficulty adapting its optimization process to the evolutionary state of the population, tends to fall into local optima, and shows poor convergence. To address this, a Q-learning-based multi-task multi-objective particle swarm optimization algorithm (QM2PSO) is proposed, exploiting the self-evolution and prediction capabilities of reinforcement learning. First, a dynamic parameter update method is designed that uses Q-learning to update the inertia weight and acceleration coefficients of PSO online, improving the particles' ability to converge to the Pareto front. Second, a Cauchy-distribution-based mutation search strategy is proposed that alternates global and local search for the multi-task optima, keeping the algorithm from stagnating in local optima. Finally, a knowledge transfer method based on a positive-transfer criterion is designed, with the knowledge transfer rate updated by Q-learning to mitigate negative transfer. Comparative experiments against existing classical algorithms show that QM2PSO has superior convergence.
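The online parameter update can be pictured as a small tabular Q-learning controller that picks an (inertia weight, acceleration) pair each iteration; the sketch below is a generic illustration of that idea, with states, actions, reward and hyperparameters that are assumptions rather than the QM2PSO design.

```python
import numpy as np

class QLearningParamController:
    """Tabular Q-learning controller for PSO parameters (illustrative sketch).
    Each action is a hypothetical (inertia weight, acceleration) pair."""

    def __init__(self, n_states=4,
                 actions=((0.9, 1.5), (0.7, 1.8), (0.5, 2.0), (0.4, 2.2)),
                 alpha=0.1, gamma=0.9, epsilon=0.1):
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = np.zeros((n_states, len(actions)))

    def select(self, state):
        """Epsilon-greedy action selection over the Q-table."""
        if np.random.rand() < self.epsilon:
            return np.random.randint(len(self.actions))
        return int(np.argmax(self.q[state]))

    def update(self, state, action, reward, next_state):
        """Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
        td_target = reward + self.gamma * self.q[next_state].max()
        self.q[state, action] += self.alpha * (td_target - self.q[state, action])
```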

18.
To address the tendency of the standard group search optimization algorithm to fall into local optima and converge slowly on some complex optimization problems, a group search optimization algorithm with opposition-based learning and differential evolution (OBDGSO) is proposed. The algorithm uses a general dynamic opposition-based learning mechanism to generate an opposite population, enlarging the global exploration range; differential evolution mutation is applied to the better individuals in the population to perform local exploitation near good solutions, improving solution accuracy and convergence speed. The two strategies cooperate within GSO to better balance global search and local exploitation. OBDGSO is compared with four other swarm intelligence algorithms on 12 benchmark functions, and the results show that it has a clear advantage in solution accuracy and convergence speed.

19.
郭雨鑫  刘升  张磊  黄倩 《计算机应用研究》2021,38(12):3651-3656
To address the tendency of the basic slime mould algorithm (SMA) to fall into local optima, its low convergence accuracy and its slow convergence, an improved slime mould algorithm (ISMA) with elite opposition-based learning and quadratic interpolation is proposed. The elite opposition-based learning strategy helps improve the diversity and quality of the slime mould population, enhancing global search performance and convergence accuracy. Quadratic interpolation is used to generate new slime mould individuals, and the global best solution is updated through fitness evaluation, which strengthens local exploitation, shortens convergence time, and helps the algorithm escape local extrema. Comparisons among different algorithms on several unimodal, multimodal and high-dimensional test functions show that ISMA, combining the two strategies, achieves higher search accuracy, faster search speed and better robustness.

20.
A 2-Opt Generalized Opposition-Based Differential Evolution Algorithm with Gene Preservation   Cited: 1 (self-citations: 0, citations by others: 1)
To further improve the performance of differential evolution, a 2-Opt generalized opposition-based differential evolution algorithm with gene preservation is proposed and applied to function optimization problems. The new algorithm has the following features: (1) new individuals taking part in evolution are composed by preserving the genes of the selected individuals, which maintains population diversity well; (2) a generalized opposition-based learning (GOBL) mechanism is used for initialization, improving initialization efficiency; (3) a 2-Opt procedure is used to accelerate the convergence of differential evolution and improve search efficiency. Experiments on test functions and comparisons with other differential evolution algorithms confirm the efficiency, generality and robustness of the new algorithm.
