Similar Documents
20 similar documents found; search time: 31 ms
1.
Qian Xiaoyu, Fang Wei. Control and Decision, 2021, 36(4): 779-789
To improve the performance of particle swarm optimization on complex optimization problems, especially high-dimensional ones, a competitive particle swarm optimizer with opposition-based learning and Solis & Wets local search (SW-OBLCSO) is proposed. SW-OBLCSO combines two learning mechanisms, competitive learning and opposition-based learning, and adds a local search operator applied at the individual level. SW-OBLCSO is compared with several optimization algorithms on 10 common benchmark functions and 12 complex shifted and rotated test functions at different dimensionalities. Experimental results show that the proposed algorithm achieves outstanding convergence speed and global search ability. Tests on the fuzzy cognitive map (FCM) learning problem further show that SW-OBLCSO also performs well on practical problems.

2.
An opposition-based learning particle swarm optimization algorithm with adaptive Cauchy mutation
To address the premature convergence of traditional particle swarm optimization, an opposition-based learning PSO with adaptive Cauchy mutation is proposed. Building on generalized opposition-based learning, the algorithm introduces an adaptive Cauchy mutation (ACM) strategy. Generalized opposition-based learning generates opposite solutions, enlarging the search space and strengthening global exploration. To keep particles from stagnating in local optima, the ACM strategy perturbs the current best particle and selects mutation points adaptively, which improves local exploitation while letting the algorithm converge to the global optimum more smoothly and quickly. A nonlinear adaptive inertia weight further balances global exploration and local exploitation. Comparisons with several opposition-based PSO variants on 14 test functions show that the proposed algorithm substantially improves both solution accuracy and convergence speed.
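The Cauchy perturbation of the best particle described in this entry can be illustrated with a short sketch. This is a generic illustration of Cauchy mutation, not the paper's exact ACM formulation; the `scale` parameter and the clamping to bounds are assumptions made for the example.

```python
import math
import random

def cauchy_mutation(gbest, bounds, scale=1.0):
    """Perturb the current global-best position with Cauchy-distributed noise.

    The Cauchy distribution's heavy tails occasionally produce large jumps,
    which helps a stagnated swarm escape local optima.
    """
    lo, hi = bounds
    mutated = []
    for x in gbest:
        # Sample standard Cauchy noise via the inverse-CDF method.
        noise = scale * math.tan(math.pi * (random.random() - 0.5))
        mutated.append(min(max(x + noise, lo), hi))  # keep within bounds
    return mutated

random.seed(1)
best = [0.5, -0.2, 0.1]
print(cauchy_mutation(best, (-5.0, 5.0)))
```

In a full PSO loop, the perturbed point would replace the global best only if its fitness improves, so the perturbation can never make the recorded best worse.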

3.
The gravitational search algorithm (GSA) is a recently proposed and competitive swarm intelligence technique; however, the standard GSA converges slowly and easily stagnates during evolution. To address these problems, an improved gravitational search algorithm is proposed. The algorithm initializes the population with a chaotic opposition-based learning strategy, obtaining an initial population that covers the whole solution space and thereby improving convergence speed and solution accuracy. It also exploits the strong exploration ability of the artificial bee colony search strategy to guide the population and help the algorithm escape local optima quickly. Simulation experiments on 13 nonlinear benchmark functions verify the effectiveness and superiority of the improved algorithm.

4.
The sparrow search algorithm (SSA) suffers from weakened search ability in later iterations, loss of population diversity, and a tendency to fall into local extrema, which cause slow convergence and insufficient exploration. To address this, a sparrow search algorithm combining sine-cosine search and Cauchy mutation (SCSSA) is proposed. The population is initialized with a refraction-based opposition learning mechanism to increase diversity; a sine-cosine strategy with a nonlinearly decreasing search factor and weight factor is introduced into the producer position update to coordinate global and local search; and Cauchy mutation is applied in the scrounger position update to perturb the current best solution, improving the ability to reach the global optimum. SCSSA is evaluated on 10 classical test functions in terms of convergence speed, convergence accuracy, and mean absolute error, and is further validated on engineering design optimization problems. The results show that the improved sparrow search algorithm clearly improves convergence speed and accuracy and exhibits good robustness.

5.
To overcome the weak local search, low accuracy, and premature convergence of traditional immune-network dynamic optimization algorithms, an immune memetic algorithm for dynamic optimization problems is proposed. Within the basic memetic-algorithm framework, an artificial immune network algorithm serves as the global search method and tabu search as the local search operator; Cauchy mutation is also introduced to strengthen global search and effectively prevent premature convergence. Experiments on a classical dynamic optimization benchmark suite under identical conditions show that the proposed immune memetic algorithm achieves better search accuracy and convergence speed than comparable algorithms.

6.
The global optimization problem is not easy to solve and remains an open challenge, since an analytical optimal solution is difficult to obtain even for relatively simple application problems. Conventional deterministic numerical algorithms tend to stop the search at the local minimum nearest to the starting point, especially when the objective function is nonlinear, non-convex, non-differentiable, or multimodal. Nowadays, the use of evolutionary algorithms (EAs) to solve optimization problems is common practice due to their competitive performance on complex search spaces. EAs are well known for their ability to deal with nonlinear and complex optimization problems. Their primary advantage over other numerical methods is that they require only objective function values; properties such as differentiability and continuity are not necessary. In this context, differential evolution (DE), a paradigm of evolutionary computation, has been widely used for solving numerical global optimization problems in continuous search spaces. DE is a powerful population-based stochastic direct search method that simulates natural evolution combined with a mechanism for generating multiple search directions based on the distribution of solutions in the current population. Among DE's advantages are its simple structure, ease of use, speed, and robustness, which allow its application to many continuous nonlinear optimization problems. However, DE's performance depends heavily on its control parameters, such as the crossover rate, mutation factor, and population size, and it often suffers from being trapped in local optima. Conventionally, users have to determine the parameters for the problem at hand empirically. Recently, several adaptive variants of DE have been proposed.
In this paper, a modified differential evolution (MDE) approach using generation-varying control parameters (mutation factor and crossover rate) is proposed and evaluated. The proposed MDE presents an efficient strategy to improve search performance by preventing premature convergence to local minima. The efficiency and feasibility of the proposed MDE approach are demonstrated on a force optimization problem in robotics, where the force capabilities of a planar 3-RRR parallel manipulator are evaluated considering actuation limits and different assembly modes. Furthermore, comparison results of the MDE approach against classical DE on this force optimization problem are presented and discussed.
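The DE mechanics summarized above (mutation factor F, crossover rate CR, greedy selection) can be sketched as the classic DE/rand/1/bin scheme. Redrawing F and CR each generation below is only a generic stand-in for generation-varying control parameters; the paper's exact MDE update rules, parameter ranges, and the robotics objective are not reproduced here.

```python
import random

def de_optimize(f, bounds, dim, pop_size=20, gens=100):
    """DE/rand/1/bin with control parameters redrawn each generation."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        F = random.uniform(0.4, 0.9)   # mutation factor, varies per generation
        CR = random.uniform(0.5, 1.0)  # crossover rate, varies per generation
        for i in range(pop_size):
            # Three distinct donors, none equal to the target index i.
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            jrand = random.randrange(dim)  # guarantees at least one mutated gene
            trial = []
            for j in range(dim):
                if random.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial.append(min(max(v, lo), hi))  # clamp to the box
                else:
                    trial.append(pop[i][j])
            tf = f(trial)
            if tf <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, tf
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

random.seed(0)
sol, val = de_optimize(lambda x: sum(v * v for v in x), (-5.0, 5.0), dim=5)
print(val)
```

On the 5-dimensional sphere function this budget (2,000 evaluations) is ample; the same skeleton is what adaptive DE variants modify by changing how F and CR are chosen.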

7.
To address the standard group search optimizer's tendency to fall into local optima and converge slowly on complex optimization problems, a group search optimizer with opposition-based learning and differential evolution (OBDGSO) is proposed. The algorithm uses a generalized dynamic opposition-based learning mechanism to generate an opposite population, enlarging the global exploration range, and applies the differential-evolution mutation operator to the better individuals in the population to perform local exploitation near good solutions, improving solution accuracy and convergence speed. The two strategies cooperate within GSO to better balance global exploration and local exploitation. OBDGSO is tested against four other swarm intelligence algorithms on 12 benchmark functions; the results show that OBDGSO has significant advantages in solution accuracy and convergence speed.

8.
The group search optimizer (GSO) is a class of swarm-based stochastic search algorithms built on the producer-scrounger (PS) model. Although the algorithm performs well on many problems, it still suffers from premature convergence and entrapment in local optima. To address this, a group search optimizer based on generalized opposition-based learning (GOGSO) is proposed. The algorithm uses an opposition-based learning strategy to generate an opposite population and then applies elitist selection over the current and opposite populations. Comparative experiments show that the method works well.

9.
Particle swarm optimization (PSO) has been shown to yield good performance for solving various optimization problems. However, it tends to suffer from premature convergence when solving complex problems. This paper presents an enhanced PSO algorithm called GOPSO, which employs generalized opposition-based learning (GOBL) and Cauchy mutation to overcome this problem. GOBL provides faster convergence, and the Cauchy mutation, with its long tail, helps trapped particles escape from local optima. The proposed approach uses a scheme similar to opposition-based differential evolution (ODE), with opposition-based population initialization and generation jumping using GOBL. Experiments are conducted on a comprehensive set of benchmark functions, including rotated multimodal problems and shifted large-scale problems. The results show that GOPSO obtains promising performance on a majority of the test problems.
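The generation-jumping step with generalized opposition-based learning (GOBL) mentioned in this entry can be sketched as follows. The reflection x* = k(a + b) − x over the population's current dynamic bounds [a, b] is the standard GOBL form; the population size, the elitist merge, and the test function here are illustrative assumptions rather than GOPSO's exact settings.

```python
import random

def gobl_jump(pop, fitness, k=None):
    """One GOBL generation-jumping step.

    Reflects every solution through the population's dynamic per-dimension
    interval [a_j, b_j] using x*_j = k * (a_j + b_j) - x_j, then keeps the
    fitter half of the combined current-plus-opposite population.
    """
    dim = len(pop[0])
    a = [min(x[j] for x in pop) for j in range(dim)]  # dynamic lower bounds
    b = [max(x[j] for x in pop) for j in range(dim)]  # dynamic upper bounds
    k = random.random() if k is None else k           # random scaling in (0, 1)
    opposite = [[k * (a[j] + b[j]) - x[j] for j in range(dim)] for x in pop]
    merged = pop + opposite
    merged.sort(key=fitness)          # minimization: smaller fitness first
    return merged[:len(pop)]          # elitist selection over both populations

random.seed(2)
sphere = lambda x: sum(v * v for v in x)
swarm = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(6)]
new_swarm = gobl_jump(swarm, sphere)
```

Because the fitter half of the merged set is retained, the best fitness in the population can never get worse after a jump, which is what makes the step safe to apply probabilistically each generation.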

10.
To remedy the cuckoo search algorithm's low accuracy, slow convergence, loss of late-stage search vigor, and the inter-dimension interference that arises on high-dimensional optimization problems, a dynamically adaptive cuckoo search with dimension-by-dimension opposition-based learning is proposed. First, dimension-by-dimension opposition-based learning is applied to solutions after selection and update, reducing inter-dimension interference and enlarging population diversity. Next, the result is evaluated with elitist retention, improving the algorithm's search ability. Finally, information from the current solution drives a dynamically adaptive scaling-factor control that guides rapid convergence and restores search vigor. Experimental results show that, compared with standard cuckoo search, the algorithm improves accuracy, convergence speed, and late-stage search vigor, and it is also competitive with other improved algorithms.
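The dimension-by-dimension opposition with elite retention described above can be sketched as a per-dimension greedy reflection. This is a minimal illustration assuming simple fixed bounds and the reflection x*_j = lo_j + hi_j − x_j; the paper's dynamically adaptive scaling factor is not modeled.

```python
def dimwise_opposition(x, bounds, fitness):
    """Dimension-by-dimension opposition with elitist acceptance.

    Each dimension is reflected independently, and the change is kept only
    if it improves fitness. Reflecting one dimension at a time avoids the
    inter-dimension interference of reflecting the whole vector at once,
    where a gain in one dimension can be masked by losses in others.
    """
    current = list(x)
    best_f = fitness(current)
    for j, (lo, hi) in enumerate(bounds):
        trial = list(current)
        trial[j] = lo + hi - current[j]  # opposite point in dimension j only
        tf = fitness(trial)
        if tf < best_f:                  # elite retention: keep improvements
            current, best_f = trial, tf
    return current, best_f

sphere = lambda v: sum(t * t for t in v)
pt = [3.0, 4.0, 1.0]
print(dimwise_opposition(pt, [(-1.0, 5.0)] * 3, sphere))
```

With these asymmetric bounds the reflections of the first two coordinates improve the sphere fitness and are kept, while the third makes it worse and is rejected, illustrating the greedy per-dimension acceptance.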

11.
To address the premature convergence, low solution accuracy, and entrapment in local optima of the teaching-learning-based optimization algorithm, a new teaching-learning-based optimization algorithm incorporating an improved beetle antennae search is proposed. The algorithm initializes the population with a Tent-map opposition-based learning strategy to improve the quality of the initial solutions. In the "teaching" phase, the beetle antennae search algorithm is applied to the teacher individual, strengthening the teacher's teaching ability and sharpening the accuracy of the best solution. In the "learning" phase, student individuals undergo hybrid mutation to escape local optima and balance the algorithm's...

12.
Dynamic multi-objective optimization problems (DMOPs) require evolutionary algorithms to track a continually changing Pareto-optimal front and to respond promptly and effectively when an environmental change is detected. To this end, a dynamic multi-objective optimization algorithm based on decision variable relationships (DVR) is proposed. First, a detection mechanism that measures each decision variable's contribution to convergence and diversity divides the variables into convergence-related variables (CV) and diversity-related variables (DV), and different optimization strategies are applied to each type. Second, a local-search diversity maintenance mechanism distributes individuals more evenly along the Pareto front. Finally, the individuals produced by the two parts are combined, and non-dominated sorting forms the population for the new environment. To verify DVR's performance, it is compared with three dynamic multi-objective optimization algorithms on 15 benchmark problems; the experimental results show that DVR achieves better convergence and diversity than the other three algorithms.

13.
To address the group search optimizer's (GSO) tendency to fall into local optima, slow convergence, and low accuracy, a differential ranking-based group search optimizer (DRGSO) is proposed. Two main improvements are made: 1) the population is ranked by fitness and the number of producers is suitably increased, so that the population obtains better heuristic information, accelerating convergence and effectively keeping the algorithm out of local optima; 2) four different differential mutation strategies are introduced into the producer search process, improving convergence accuracy and enhancing population diversity. Experimental results on 11 standard international benchmark functions show that, compared with GA, GSO, and PSO, DRGSO has stronger global search and local exploitation abilities, with clearly improved overall convergence performance.

14.
Nature-inspired optimization algorithms, notably evolutionary algorithms (EAs), have been widely used to solve various scientific and engineering problems because of their simplicity and flexibility. Here we report a novel optimization algorithm, group search optimizer (GSO), which is inspired by animal behavior, especially animal searching behavior. The framework is mainly based on the producer-scrounger model, which assumes that group members search either for "finding" (producer) or for "joining" (scrounger) opportunities. Based on this framework, concepts from animal searching behavior, e.g., animal scanning mechanisms, are employed metaphorically to design optimum searching strategies for solving continuous optimization problems. When tested against benchmark functions in low and high dimensions, the GSO algorithm performs competitively with other EAs in terms of accuracy and convergence speed, especially on high-dimensional multimodal problems. The GSO algorithm is also applied to train artificial neural networks. The promising results on three real-world benchmark problems show the applicability of GSO for problem solving.

15.
Yong Xin, Gao Yuelin, He Yahua, Wang Huimin. Journal of Computer Applications, 2022, 42(12): 3847-3855
To address the traditional firefly algorithm's (FA) tendency to fall into local optima and its slow convergence, Lévy flight, an elite-participation crossover operator, and an elite opposition-based learning mechanism are incorporated into the firefly algorithm, yielding a multi-strategy improved firefly algorithm, LEEFA. First, Lévy flight is introduced into the traditional firefly algorithm to improve global search ability. Second, an elite-participation crossover operator is proposed to improve convergence speed and accuracy and to enhance the diversity and quality of solutions during iteration. Finally, an elite opposition-based learning mechanism is combined with the search for the optimal solution, improving FA's ability to escape local optima and its convergence performance, and enabling rapid exploration of the solution search space. Simulation experiments on benchmark functions show that, compared with particle swarm optimization (PSO), traditional FA, the Lévy-flight firefly algorithm (LFFA), a firefly algorithm based on Lévy flight and mutation operators (LMFA), and the adaptive log-spiral Lévy-flight firefly algorithm (ADIFA), the proposed algorithm performs better in both convergence speed and accuracy.

16.
Xu Qiuyan, Ma Liang, Liu Yong. Journal of Computer Applications, 2020, 40(8): 2305-2312
To address the premature convergence of the basic yin-yang-pair optimization (YYPO) algorithm, chaotic search is introduced, exploiting the ergodicity of chaos to explore more regions and improve global exploration ability. In addition, drawing on the hexagram-inversion transformation from the I Ching, an opposition-based learning strategy is introduced to search intensively around the opposite of the current solution, improving local exploitation ability. To make full use of multi-core processors and other computing resources, a parallel implementation of the algorithm is also developed. Numerical experiments on standard test functions evaluate the solution performance of the improved YYPO algorithm based on chaotic search and hexagram-inversion transformation (CSIOYYPO). The experimental results show that, compared with YYPO-type algorithms such as basic YYPO and adaptive YYPO, as well as other types of intelligent optimization algorithms, CSIOYYPO achieves higher computational accuracy and faster optimization.

17.
Cuckoo search (CS) is one of the well-known evolutionary techniques in global optimization. Despite its efficiency and wide use, CS suffers from premature convergence and a poor balance between exploration and exploitation. To address these issues, a new CS extension, namely snap-drift cuckoo search (SDCS), is proposed in this study. The proposed algorithm first employs a learning strategy and then considers improved search operators. The learning strategy provides an online trade-off between local and global search via two modes, snap and drift. In snap mode, SDCS increases global search to prevent the algorithm from being trapped in local minima; in drift mode, it reinforces local search to enhance the convergence rate. SDCS then improves search capability by employing new crossover and mutation search operators. The accuracy and performance of the proposed approach are evaluated on well-known benchmark functions. Statistical comparisons of experimental results show that SDCS is superior to CS, modified CS (MCS), and state-of-the-art optimization algorithms in terms of convergence speed and robustness.

18.
To overcome the slow convergence, low accuracy, and entrapment in local optima of the basic flower pollination algorithm (FPA), a flower pollination algorithm based on dynamic global search and Cauchy mutation (DCFPA) is proposed. A chaotic map strengthens the randomness and uniformity of the initial pollen distribution. During global pollination, the global mean-best pollen position and a dynamically decreasing weight factor jointly update individual pollen positions, pulling the search in the right direction and preventing premature convergence. Finally, Cauchy mutation increases population diversity and helps the algorithm escape local optima. Simulation experiments on six test functions show that DCFPA has better global optimization ability than FPA, with improved convergence speed and solution accuracy; comparisons with related improved algorithms also show that DCFPA has better overall optimization performance.

19.
A multi-population evolutionary programming algorithm
After analyzing the causes of premature convergence in evolutionary programming, an improved multi-population evolutionary programming algorithm is proposed. In this algorithm, evolution proceeds in parallel in several distinct subpopulations; by using different mutation strategies, the population explores the solution space as broadly as possible while also searching locally as finely as possible. Information is exchanged among subpopulations through subpopulation recombination. Numerical simulations on typical examples show that the algorithm has better global convergence, faster convergence speed, and stronger robustness.

20.
The Aquila optimizer (AO) and Harris hawks optimization (HHO) are optimization algorithms proposed in recent years. AO has strong global exploration ability but low convergence accuracy and easily falls into local optima, while HHO has strong local exploitation ability but weak global exploration and slow convergence. To overcome these limitations of the original algorithms, the two are hybridized and a dynamic opposition-based learning strategy is introduced, yielding a hybrid Aquila-Harris hawks optimizer with dynamic opposition-based learning. First, the dynamic opposition-based learning strategy is applied in the initialization phase to improve the hybrid algorithm's initial population and convergence speed. In addition, the hybrid retains AO's exploration mechanism and HHO's exploitation mechanism, improving overall search ability. Simulation experiments on 23 benchmark test functions and 2 engineering design problems evaluate the hybrid algorithm's optimization performance and compare several classical opposition-based learning strategies; the results show that the hybrid algorithm with dynamic opposition-based learning converges better and can effectively solve engineering design problems.
