Similar Documents
1.
The artificial bee colony (ABC) algorithm, one of the swarm intelligence algorithms, was proposed for continuous optimization and is inspired by the intelligent behaviors of real honey bee colonies. For optimization problems with a binary-structured solution space, the basic ABC algorithm must be modified, because its basic version was designed for continuous optimization. In this study, an adapted version of ABC, ABCbin for short, is proposed for binary optimization. In the proposed model, although the artificial agents of the algorithm work in a continuous solution space, the food-source positions they obtain are converted to binary values before the problem-specific objective function is evaluated. The accuracy and performance of the proposed approach are examined on 15 well-known benchmark instances of the uncapacitated facility location problem, and the results obtained by ABCbin are compared with those of continuous particle swarm optimization (CPSO), binary particle swarm optimization (BPSO), improved binary particle swarm optimization (IBPSO), the binary artificial bee colony algorithm (binABC), and the discrete artificial bee colony algorithm (DisABC). The performance of ABCbin is also analyzed under changes of the control parameter values. The experimental results and comparisons show that the proposed ABCbin is a simple and effective alternative binary optimization tool in terms of solution quality and robustness.
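A minimal sketch of the conversion step described above, assuming a simple fractional-part thresholding rule; the function names, the 0.5 threshold, and the toy objective are illustrative and may differ from ABCbin's actual conversion.

```python
import numpy as np

def to_binary(position, threshold=0.5):
    """Map a continuous food-source position to a binary vector.
    The fractional-part thresholding used here is illustrative only."""
    frac = np.abs(position) - np.floor(np.abs(position))  # squash into [0, 1)
    return (frac >= threshold).astype(int)

def evaluate(position, binary_objective):
    """Evaluate a continuous position on a binary objective
    (e.g., an uncapacitated facility location cost)."""
    return binary_objective(to_binary(position))

# Usage with a toy binary objective (count of selected items).
rng = np.random.default_rng(0)
pos = rng.normal(size=10)                       # continuous food source
print(to_binary(pos), evaluate(pos, binary_objective=lambda b: b.sum()))
```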

2.

In machine learning, searching for the optimal feature subset of a dataset is a challenging and prominent task. Metaheuristic algorithms are used to find the relevant, important features that enhance classification accuracy and save computation time, and most such algorithms have shown excellent performance on feature selection problems. A recently developed metaheuristic, the gaining-sharing knowledge-based optimization algorithm (GSK), is considered here for finding the optimal feature subset. GSK was proposed over a continuous search space; therefore, a total of eight S-shaped and V-shaped transfer functions are employed to map the problem into a binary search space. Additionally, a population reduction scheme is combined with the transfer functions to enhance the performance of the proposed approaches: because the population size is updated in every iteration, the worst solutions are deleted and the search space is explored more efficiently. The proposed approaches are tested on twenty-one benchmark datasets from the UCI repository. The obtained results are compared with state-of-the-art metaheuristic algorithms, including the binary differential evolution algorithm, binary particle swarm optimization, the binary bat algorithm, the binary grey wolf optimizer, the binary ant lion optimizer, the binary dragonfly algorithm, and the binary salp swarm algorithm. Among the eight transfer functions, the V4 transfer function with population reduction on the binary GSK algorithm outperforms the other optimizers in terms of accuracy, fitness values, and the minimal number of features. To investigate the results statistically, two non-parametric statistical tests are conducted, which confirm the superiority of the proposed approach.

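As an illustration of the transfer-function idea, the sketch below shows one common S-shaped and one common V-shaped transfer function for binarizing a continuous GSK step, plus a simple linear population-reduction rule; whether the V-shaped form matches the paper's V4 function, and all constants used, are assumptions.

```python
import numpy as np

def s_shaped(x):
    """S-shaped transfer: sigmoid of the continuous value."""
    return 1.0 / (1.0 + np.exp(-x))

def v_shaped(x):
    """A common V-shaped transfer: |x / sqrt(1 + x^2)|.
    Whether this matches the paper's V4 is an assumption."""
    return np.abs(x / np.sqrt(1.0 + x ** 2))

def binarize(x_continuous, transfer, rng):
    """Turn a continuous update into bits by comparing the transfer
    probability with a uniform random draw."""
    prob = transfer(x_continuous)
    return (rng.random(prob.shape) < prob).astype(int)

def reduced_pop_size(n_init, n_min, t, t_max):
    """Linear population-size reduction over iterations (illustrative)."""
    return int(round(n_init - (n_init - n_min) * t / t_max))

# Example: binarize a random continuous vector and shrink the population.
rng = np.random.default_rng(1)
x = rng.normal(size=8)
print(binarize(x, v_shaped, rng), reduced_pop_size(100, 20, t=50, t_max=200))
```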

3.
Zhan Zhi-Hui, Shi Lin, Tan Kay Chen, Zhang Jun. Artificial Intelligence Review, 2022, 55(1): 59-110

Complex continuous optimization problems are now widespread due to the fast development of the economy and society. Moreover, technologies such as the Internet of Things, cloud computing, and big data give rise to optimization problems with additional challenges, including Many-dimensions, Many-changes, Many-optima, Many-constraints, and Many-costs. We term these the 5-M challenges, which appear in large-scale optimization problems, dynamic optimization problems, multi-modal optimization problems, multi-objective optimization problems, many-objective optimization problems, constrained optimization problems, and expensive optimization problems in practical applications. Evolutionary computation (EC) algorithms are promising global optimization tools that have not only been widely applied to traditional optimization problems but have also spawned booming research on the above-mentioned complex continuous optimization problems in recent years. To show how promising and efficient EC algorithms are in dealing with the 5-M challenges, this paper presents a comprehensive survey with a novel taxonomy organized by the function of the approaches: reducing problem difficulty, increasing algorithm diversity, accelerating convergence speed, reducing running time, and extending the application field. Moreover, some future research directions for using EC algorithms to solve complex continuous optimization problems are proposed and discussed. We believe that such a survey can draw attention, raise discussion, and inspire new ideas for EC research into complex continuous optimization problems and real-world applications.


4.
Evolutionary algorithms start with an initial population, which is generated randomly when no preliminary knowledge about the solution is available. Recently, it has been claimed that, when solving continuous-domain optimization problems, considering randomness and opposition simultaneously is more effective than pure randomness. In this paper it is mathematically proven that this scheme, called opposition-based learning, also works well in binary spaces. The proposed binary opposition-based scheme can be embedded in many binary population-based algorithms. As an application, we use it to accelerate the convergence rate of the binary gravitational search algorithm (BGSA). The experimental results and mathematical proofs confirm each other.
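A hedged sketch of opposition-based initialization in binary space, where the opposite of a bit string is its bitwise complement; how BGSA actually interleaves opposition with its updates is not specified here, and the fitness function is a toy.

```python
import numpy as np

def binary_opposite(population):
    """The opposite of a binary solution is its bitwise complement."""
    return 1 - population

def opposition_based_init(n_agents, n_bits, fitness, rng):
    """Generate a random binary population plus its opposite population,
    then keep the fitter half (a generic OBL initialization sketch)."""
    pop = rng.integers(0, 2, size=(n_agents, n_bits))
    both = np.vstack([pop, binary_opposite(pop)])
    scores = np.array([fitness(ind) for ind in both])
    best = np.argsort(scores)[:n_agents]        # assume minimization
    return both[best]

# Usage with a toy fitness (number of ones, to be minimized).
rng = np.random.default_rng(2)
print(opposition_based_init(6, 10, fitness=lambda b: b.sum(), rng=rng))
```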

5.
A hybrid differential evolution algorithm for solving the 0-1 knapsack problem is proposed, and the concrete procedure by which the algorithm solves the knapsack problem is described in detail. The algorithm mainly combines two strategies: a heuristic greedy algorithm and a binary-encoded differential evolution algorithm. Computations on simulation instances from other publications and comparisons of the results show that the algorithm is effective for solving the 0-1 knapsack problem, which should also help differential evolution tackle other discrete problems.
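A small illustration of the greedy component described above, assuming a value-to-weight-ratio repair rule for infeasible 0-1 knapsack bit strings produced by the binary DE; the exact greedy rule used in the paper may differ.

```python
import numpy as np

def greedy_repair(bits, values, weights, capacity):
    """Greedy repair/improvement for a 0-1 knapsack bit string:
    drop the worst value/weight items until feasible, then greedily
    add items that still fit (one common heuristic, not necessarily
    the paper's exact rule)."""
    bits = bits.copy()
    order = np.argsort(values / weights)          # worst ratio first
    for i in order:                               # repair phase
        if weights[bits == 1].sum() <= capacity:
            break
        if bits[i] == 1:
            bits[i] = 0
    for i in order[::-1]:                         # improvement phase
        if bits[i] == 0 and weights[bits == 1].sum() + weights[i] <= capacity:
            bits[i] = 1
    return bits

values = np.array([10.0, 7.0, 4.0, 9.0])
weights = np.array([5.0, 4.0, 3.0, 6.0])
print(greedy_repair(np.array([1, 1, 1, 1]), values, weights, capacity=10.0))
```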

6.
The Gravitation Field Algorithm (GFA) is a novel optimization algorithm derived from the Solar Nebular Disk Model (SNDM) in astronomy and inspired by the formation process of planets. Although it has achieved good performance on many unconstrained optimization problems, demonstrating promising application potential for many real-world problems, GFA still has much room for improvement, especially in terms of accuracy and efficiency. In this research, an improved GFA called the Explosion Gravitation Field Algorithm (EGFA) is proposed for unconstrained optimization problems, introducing two strategies: Dust Sampling (DS) and the Explosion Operation. The task of DS is to locate the space that contains the optimal solution(s) by initializing the dust population randomly in the search space, while the Explosion Operation improves solution accuracy and decreases the probability of the algorithm falling into local optima by generating a new population around the center dust to replace the original population. A comparison of experimental results on six classical unconstrained benchmark problems with different dimensions demonstrates that the proposed EGFA outperforms the original GFA and several classical metaheuristic optimization algorithms, such as the genetic algorithm (GA) and particle swarm optimization (PSO), in terms of accuracy and efficiency in lower dimensions. Additionally, results on three real datasets indicate that EGFA performs better than the original GFA and k-means for solving clustering problems.
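A rough sketch of the Explosion Operation as described: resample a new population in a neighbourhood of the center (best) dust and clip it to the search bounds. The uniform sampling, the radius parameter, and the clipping are assumptions, not EGFA's exact operator.

```python
import numpy as np

def explosion(center, pop_size, radius, bounds, rng):
    """Resample a population around the center (best) dust: each new
    member is a uniformly perturbed copy of the center, clipped to the
    search bounds. Purely illustrative of the idea."""
    low, high = bounds
    new_pop = center + rng.uniform(-radius, radius, size=(pop_size, center.size))
    return np.clip(new_pop, low, high)

rng = np.random.default_rng(3)
best = np.array([0.2, -1.1, 3.0])
print(explosion(best, pop_size=5, radius=0.5, bounds=(-5.0, 5.0), rng=rng))
```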

7.
The study is devoted to a concept and algorithmic realization of nonlinear mappings aimed at increasing the effectiveness of a problem-solving method. Given the original input space X and a certain problem-solving method M, a nonlinear mapping φ is designed so that the method operating in the transformed space, M(φ(X)), becomes more efficient. The nonlinear mappings transform X through contractions and expansions of selected regions of the original space. In particular, we show how a piecewise-linear mapping is optimized by particle swarm optimization (PSO) with a suitable fitness function quantifying the objective of the problem. Several families of problems are investigated and illustrated through experimental results.
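A minimal sketch of a monotone piecewise-linear mapping of a normalized coordinate, of the kind described above: regions where the output knots are packed together are contracted, regions where they are spread apart are expanded. In the paper the knot positions would be the parameters tuned by PSO; here they are fixed, illustrative values.

```python
import numpy as np

def piecewise_linear_map(x, knots_in, knots_out):
    """Monotone piecewise-linear mapping of a normalized coordinate:
    output knots close together contract a region, output knots far
    apart expand it."""
    return np.interp(x, knots_in, knots_out)

knots_in = np.linspace(0.0, 1.0, 5)                 # uniform input grid
knots_out = np.array([0.0, 0.1, 0.3, 0.7, 1.0])     # candidate mapping (assumed)
x = np.linspace(0.0, 1.0, 6)
print(piecewise_linear_map(x, knots_in, knots_out))
```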

8.
This paper presents a novel evolutionary algorithm, the Dynamic Partition Search Algorithm (DPSA), for global optimization problems with continuous variables. DPSA is a population-based stochastic search algorithm consisting mainly of an initialization process and an evolution process. In the initialization process, DPSA randomly generates an initial population of members in a given search space and selects a leader. In the evolution process, DPSA uses two groups to balance exploration and exploitation: one group explores new regions via a dynamic partition strategy, and the other relies on Cauchy distributions to exploit the region around the best member. Numerical experiments are conducted on 24 classical benchmark functions with 100, 1000, or even 10000 dimensions, and the performance of the proposed DPSA is compared with a state-of-the-art cooperatively coevolving particle swarm optimizer (CCPSO2) and two existing differential evolution (DE) algorithms. The experimental results show that DPSA performs better than the comparison algorithms, especially on high-dimensional optimization problems. The numerical results also demonstrate that DPSA has good scalability and is an effective evolutionary algorithm for solving large-scale global optimization problems.
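A short sketch of the exploitation group's move as described: candidates are sampled around the best member using heavy-tailed Cauchy perturbations. The scale parameter and bound handling are assumptions, not DPSA's exact settings.

```python
import numpy as np

def cauchy_exploit(best, group_size, scale, bounds, rng):
    """Sample candidates around the best member with Cauchy-distributed
    steps, whose heavy tails allow occasional long jumps; results are
    clipped to the search bounds."""
    low, high = bounds
    steps = scale * rng.standard_cauchy(size=(group_size, best.size))
    return np.clip(best + steps, low, high)

rng = np.random.default_rng(4)
best = np.zeros(4)
print(cauchy_exploit(best, group_size=3, scale=0.1, bounds=(-10.0, 10.0), rng=rng))
```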

9.
The paper proposes a methodology for constructing cooperative metaheuristic methods for solving combinatorial optimization problems using model-based algorithms. Its distinctive feature is that the original problem is solved by a search (optimization) in the space of models, performed on the basis of models formed by basic algorithms. Cooperative metaheuristics based on ant colony optimization and MH-method algorithms are developed, and the efficiency of the proposed methodology is evaluated by means of a computational experiment.

10.
Most real-world production systems deal with several different responses, and the problem is to optimize these responses concurrently. This study presents a new two-phase hybrid genetic metaheuristic for optimizing nonlinear continuous multi-response problems. Premature convergence and getting stuck in local optima, which make the algorithm time-consuming, are common problems in genetic algorithms (GAs). We therefore hybridize the GA with a clustering approach and a particle swarm optimization (PSO) algorithm to balance running time against premature termination. The proposed algorithm also tries to find ideal points (IPs) for the response functions; IPs serve as improvement measures that determine when PSO should start. The PSO-based local search exploits Pareto archive solutions to enhance the performance of the algorithm by expanding the search space. Since there is no standard benchmark in this field, we use two case studies from distinguished papers in multi-response optimization and compare the results with several algorithms mentioned in the literature. The results show that the proposed algorithm outperforms all of them.
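A brief sketch of the ideal-point idea mentioned above, assuming minimization of all responses: the ideal point collects the best value of each response over a Pareto archive, and distances to it can serve as the improvement measure that triggers the PSO phase.

```python
import numpy as np

def ideal_point(archive):
    """Ideal point of a Pareto archive: the best (here, minimum) value of
    each response taken separately. The minimization convention is an
    assumption."""
    return archive.min(axis=0)

def distance_to_ideal(solutions, ip):
    """Euclidean distance of each solution's responses to the ideal point,
    e.g. to decide when the PSO-based local search should start."""
    return np.linalg.norm(solutions - ip, axis=1)

archive = np.array([[1.0, 4.0], [2.0, 2.5], [3.0, 1.0]])   # toy response values
ip = ideal_point(archive)
print(ip, distance_to_ideal(archive, ip))
```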


