Similar Documents
20 similar documents found (search time: 31 ms)
1.
Nature is the principal source of inspiration for new optimization methods such as genetic algorithms (GA) and simulated annealing (SA). Traditional evolutionary algorithms are heuristic population-based search procedures that combine random variation and selection. The main contribution of this study is a novel optimization method based on one of the theories of the evolution of the universe: the Big Bang and Big Crunch theory. In the Big Bang phase, energy dissipation produces disorder and randomness is the dominant feature; in the Big Crunch phase, randomly distributed particles are drawn back into order. Inspired by this theory, the study constructs an optimization algorithm, called the Big Bang–Big Crunch (BB–BC) method, which generates random points in the Big Bang phase and shrinks them to a single representative point via a center-of-mass or minimal-cost approach in the Big Crunch phase. The new BB–BC method is shown to outperform both an improved and enhanced genetic search algorithm developed by the same authors and the classical genetic algorithm (GA) on many benchmark test functions.
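The two phases are described concretely enough to sketch. Below is a minimal Python sketch of a BB-BC minimization loop under common assumptions: the Big Crunch contracts the population to an inverse-cost-weighted center of mass, and the Big Bang rescatters candidates around it with a radius that shrinks each iteration. Parameter names and the 1/k shrinking schedule are illustrative choices, not taken from the paper.

```python
import numpy as np

def big_bang_big_crunch(f, bounds, n_pop=50, n_iter=200, alpha=1.0, seed=0):
    """Minimize f over a box. Sketch of the BB-BC loop: the Big Bang
    scatters candidates, the Big Crunch contracts them to a
    fitness-weighted center of mass."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    pop = rng.uniform(lo, hi, size=(n_pop, lo.size))      # initial Big Bang
    best_x, best_f = None, np.inf
    for k in range(1, n_iter + 1):
        fit = np.array([f(x) for x in pop])
        i = fit.argmin()
        if fit[i] < best_f:
            best_x, best_f = pop[i].copy(), fit[i]
        # Big Crunch: center of mass weighted by inverse cost
        w = 1.0 / (fit - fit.min() + 1e-12)
        center = (w[:, None] * pop).sum(0) / w.sum()
        # Big Bang: rescatter around the center with a shrinking radius
        radius = alpha * (hi - lo) / k
        pop = np.clip(center + radius * rng.standard_normal((n_pop, lo.size)),
                      lo, hi)
    return best_x, best_f

# Usage: minimize the 5-dimensional sphere function.
x, fx = big_bang_big_crunch(lambda v: float((v**2).sum()),
                            (np.full(5, -5.0), np.full(5, 5.0)))
```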

2.
This paper proposes an optimal power control strategy for inverter-based Distributed Generation (DG) units in autonomous microgrids. It consists of power, voltage, and current controllers with Proportional-Integral (PI) regulators, with the droop concept used for power control. Static PI parameters may not remain optimal under the inevitable changes in microgrid configuration and loads. In the proposed method, after a load change occurs in a standalone microgrid, the PI parameters are dynamically adjusted to reach the operating point that best satisfies the objective functions. The optimization problem is formulated as a multi-objective program that minimizes overshoot/undershoot, settling time, rise time, and the Integral Time Absolute Error (ITAE) of the output voltage; these objectives are combined using fuzzy memberships. The Hybrid Big Bang-Big Crunch (HBB-BC) algorithm is used to solve the optimization problem. The method is simulated on a case study, and the results show that the proposed PI tuning yields a better voltage response than previous methods. The case study is also solved with Particle Swarm Optimization (PSO) and the Big Bang-Big Crunch (BB-BC) algorithm, and HBB-BC is found to give a better solution than both.
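For reference, the ITAE criterion named above has the standard form below; the fuzzy combination is written in a common max-min form, which is one plausible reading of "combined using fuzzy memberships" rather than the paper's exact construction:

```latex
\mathrm{ITAE} = \int_{0}^{T} t\,\lvert e(t)\rvert\, dt, \qquad
\mu_i = \frac{J_i^{\max} - J_i}{J_i^{\max} - J_i^{\min}}, \qquad
\max_{\text{PI gains}} \; \min_i \mu_i ,
```

where e(t) is the output-voltage error over the horizon T and each membership μ_i maps an objective J_i (overshoot/undershoot, settling time, rise time, ITAE) onto [0, 1] using its observed extremes.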

3.
Locating knot points and estimating the number of knots are among the most difficult problems in B-Spline curve approximation, and researchers have applied more than one optimization algorithm to them in the literature. In this paper, the Big Bang-Big Crunch method (BB-BC), an evolutionary optimization algorithm, is introduced and applied to the approximation of B-Spline curve knots, using a reverse-engineering approach. Both the knot locations and the number of knots were selected randomly in the BB-BC-based curve approximation. Experiments were carried out on seven different test functions, and the performance of the BB-BC algorithm on these functions was compared with earlier studies. Although the BB-BC algorithm used a larger number of knots than those studies, it approximated the B-Spline curves with smaller error.
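A hedged sketch of how a candidate knot vector might be scored inside such a search: pad the interior knots proposed by the optimizer with boundary knots, fit a least-squares B-spline, and return the residual error as the cost for BB-BC to minimize. The use of SciPy's make_lsq_spline and the sum-of-squares cost are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

def knot_cost(x, y, interior_knots, k=3):
    """Cost of a candidate knot vector for least-squares B-spline
    approximation (lower is better). interior_knots must lie strictly
    inside (x[0], x[-1])."""
    t = np.concatenate([
        np.repeat(x[0], k + 1),          # boundary knots, multiplicity k+1
        np.sort(np.asarray(interior_knots, float)),
        np.repeat(x[-1], k + 1),
    ])
    spline = make_lsq_spline(x, y, t, k=k)
    return float(np.sum((spline(x) - y) ** 2))   # sum of squared residuals

# Usage: score 4 interior knots on noisy samples of a test curve.
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.01 * np.random.default_rng(0).standard_normal(200)
print(knot_cost(x, y, [0.2, 0.4, 0.6, 0.8]))
```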

4.
Voltage and frequency regulation is one of the most vital issues in autonomous microgrids for delivering acceptable electric power quality to customers. In this paper, a real-time control structure comprising power, voltage, and current control loops is proposed for microgrid inverters to restore system voltage and frequency after start-up and load changes. The Proportional-Integral (PI) gains of the voltage controller are optimized in real time after a perturbation in the microgrid, giving a fast, smooth response and a more stable system. The current controller produces Space Vector Pulse Width Modulation command signals that are fed into the three-leg inverter. The multi-objective optimization problem has voltage overshoot/undershoot, rise time, settling time, and Integral Time Absolute Error (ITAE) as objective functions. A modified Multi-Objective Hybrid Big Bang-Big Crunch (MOHBB-BC) algorithm, an efficient evolutionary algorithm, is employed to solve it. The MOHBB-BC method yields a set of Pareto optimal solutions, and a fuzzy decision maker selects the most preferred Pareto solution as the final answer. Results from testing the control strategy on a case study are discussed and compared with previous works; they show that the proposed method obtains dynamic PI regulator gains that produce a more appropriate response.
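The final selection step can be illustrated with the normalized-membership rule that is common in this literature; the paper's exact decision rule may differ, so the scoring formula below is an assumption.

```python
import numpy as np

def fuzzy_pick(pareto_costs):
    """Pick one solution from a Pareto set of minimization costs.
    pareto_costs: array of shape (n_solutions, n_objectives). Each
    objective is mapped to [0, 1] (1 = best seen), memberships are
    summed per solution and normalized, and the top row wins."""
    c = np.asarray(pareto_costs, float)
    lo, hi = c.min(0), c.max(0)
    mu = (hi - c) / np.where(hi > lo, hi - lo, 1.0)  # per-objective membership
    score = mu.sum(1) / mu.sum()                     # normalized per solution
    return int(score.argmax())

# Usage: three Pareto solutions scored on (overshoot, settling time, ITAE).
best = fuzzy_pick([[0.08, 0.30, 1.2],
                   [0.05, 0.45, 1.0],
                   [0.12, 0.25, 1.5]])
```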

5.
This paper presents a review of recently developed physics-based search and optimization algorithms that have been inspired by natural phenomena. They include Big Bang–Big Crunch, black hole search, galaxy-based search, artificial physics optimization, electromagnetism optimization, charged system search, colliding bodies optimization, and the particle collision algorithm.

6.
Controlling the X-Z inverted pendulum is challenging because it is an underactuated, open-loop unstable, multi-input-multi-output (MIMO) nonlinear system. In this paper, we present a novel state transformation for the X-Z inverted pendulum together with a hierarchical sliding-mode control (HSMC) structure optimized by the Big Bang–Big Crunch (BBBC) method. We first show that the proposed transformation puts the X-Z inverted pendulum model into a typical underactuated form, so that HSMC can be applied directly to its trajectory tracking control. Then, to ensure convergence of the auxiliary sliding surfaces, the BBBC method is used to obtain optimal coupling factors for the HSMC. The control performance of the proposed BBBC-based HSMC is compared with that of an existing SMC and of an HSMC tuned with particle swarm optimization (PSO). Simulation results show the effectiveness of the proposed controllers for the X-Z inverted pendulum.
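The hierarchical construction can be stated compactly: each subsystem gets a first-layer sliding surface, and the surfaces are combined by coupling factors into a second-layer surface, which is what the BBBC search tunes. The two-subsystem form below is the generic HSMC construction, not a transcription of the paper's equations:

```latex
s_i = c_i e_i + \dot{e}_i \quad (i = 1, 2), \qquad
S = \lambda_1 s_1 + \lambda_2 s_2 ,
```

where the e_i are subsystem tracking errors, the c_i > 0 are surface slopes, and λ_1, λ_2 are the coupling factors chosen by BBBC so that both auxiliary surfaces converge.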

7.
As an extension of the linear inverted pendulum (LIP) and the planar inverted pendulum (PIP), this paper proposes a novel spatial inverted pendulum (SIP), the most general inverted pendulum (IP) proposed to date; its model is presented here for the first time. The SIP inherits all the characteristics of the LIP and the PIP: it is a nonlinear, unstable, underactuated system. With five degrees of freedom and three control forces, it is a multiple-input multiple-output (MIMO) system with nonlinear dynamics. To realize spatial trajectory tracking of the SIP, a control structure with five PID controllers is designed. Tuning the parameters of these multiple PIDs is challenging for the proposed SIP model, so optimal PID gains are obtained with Big Bang-Big Crunch (BBBC) optimization, which optimizes the parameters of the multiple PID controllers with high convergence speed. The optimization performance index of the BBBC algorithm is compared with that of particle swarm optimization (PSO). Simulation results confirm the correctness and effectiveness of the proposed control and optimization methods.

8.
Research on Big Data in the Cloud will be an important topic over the next few years, including work demonstrating architectures, applications, services, experiments, and simulations in the Cloud to support Big Data adoption. A common approach to Big Data in the Cloud, allowing better access, performance, and efficiency when analysing and understanding the data, is to deliver Everything as a Service. Organisations adopting Big Data this way find that the boundaries between private clouds, public clouds, and the Internet of Things (IoT) can be very thin. Volume, variety, velocity, veracity, and value are the major factors in Big Data systems, but other challenges remain to be resolved. The papers in this special issue address a variety of issues and concerns in Big Data, including searching and processing Big Data, implementing and modelling event and workflow systems, visualisation modelling and simulation, and aspects of social media.

9.
By simulating the Big Bang process of the universe, a new intelligent optimization algorithm, the Big Bang Search (BBS) algorithm, is constructed. Inspired by classical optimization theory, the concept of an "approximate gradient" is proposed and an "approximate-gradient explosion" operator is built, yielding AGBBS, a Big Bang search algorithm based on the approximate gradient. AGBBS retains the basic BBS algorithm's favourable property of combining uniformity and randomness in the distribution of candidate solutions, and it makes full use of the information carried by explosion fragments, improving the algorithm's search capability; by refining several heuristic operators, it also improves convergence and solution accuracy. Tests on 12 standard benchmark functions and comparisons with other algorithms verify the effectiveness of the algorithm and the robustness of the improvement.

10.
Query optimization in Big Data has become a promising research direction due to the popularity of massive data analytical systems such as Hadoop. It is hard to execute JOIN queries efficiently in Hive, the Hadoop query language, over limited Big Data storage. In our previous work, the HiveQL Optimization for JOIN query over Multi-session Environment (HOME) system was introduced on top of Hadoop to improve performance by storing intermediate results and thereby avoiding repeated computations. Time overhead and storage limitations are the main drawbacks of the HOME system, especially when additional physical storage must be used or extra virtualized storage rented. In this paper, an index-based system for reusing data, indexing HiveQL Optimization for JOIN over Multi-session Big Data Environment (iHOME), is proposed to overcome the HOME overheads by storing only the indexes of the joined rows instead of the full intermediate results. The proposed iHOME system addresses eight cases of JOIN queries, classified into three groups: Similar-to-iHOME, Compute-on-iHOME, and Filter-of-iHOME. Experimental results on the TPC-H benchmark show that iHOME reduces the execution time of all eight JOIN queries on Hive. The stored data size is also reduced relative to the HOME system, saving Big Data storage; as the stored data size grows, iHOME preserves space scalability and overcomes the storage limitation.
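The core idea, storing join-row indexes rather than materialized join results, fits in a few lines: the first execution records matching row-id pairs under a query signature, and later sessions rehydrate the join from the base tables. The in-memory toy below only illustrates the concept; index_cache and the function names are hypothetical, and nothing here reflects iHOME's actual Hive integration.

```python
# Toy illustration: cache join-row indexes instead of full join results.
index_cache = {}  # query signature -> list of (left_rowid, right_rowid)

def indexed_join(sig, left, right, key):
    """left/right: lists of dict rows. The first call computes and
    caches matching row-id pairs; later calls rebuild the join from
    the cached indexes and the base tables."""
    if sig not in index_cache:
        by_key = {}
        for j, r in enumerate(right):
            by_key.setdefault(r[key], []).append(j)
        index_cache[sig] = [(i, j) for i, l in enumerate(left)
                            for j in by_key.get(l[key], [])]
    # Rehydrate full rows from the base tables using the cached indexes.
    return [{**left[i], **right[j]} for i, j in index_cache[sig]]

orders = [{"cust": 1, "item": "a"}, {"cust": 2, "item": "b"}]
custs  = [{"cust": 1, "name": "x"}, {"cust": 2, "name": "y"}]
rows = indexed_join("orders_x_custs", orders, custs, "cust")
```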

11.
In recent years, huge volumes of healthcare data have been generated in various forms, and advances in medical imaging have made biomedical image acquisition easier and quicker. This massive generation of big data makes new methods based on Big Data Analytics (BDA), Machine Learning (ML), and Artificial Intelligence (AI) essential. The current work develops a new Big Data Analytics with Cat Swarm Optimization based Deep Learning (BDA-CSODL) technique for medical image classification in an Apache Spark environment, with the aim of classifying medical images and diagnosing disease accurately. The BDA-CSODL technique involves preprocessing, segmentation, feature extraction, and classification stages. It follows a multi-level thresholding-based image segmentation approach for detecting infected regions in medical images, and a deep convolutional neural network, Inception v3, is used as the feature extractor, with Stochastic Gradient Descent (SGD) used for parameter tuning. A Cat Swarm Optimization with Long Short-Term Memory (CSO-LSTM) model serves as the classifier that assigns the appropriate class labels. Both the SGD and CSO design choices help improve the overall image classification performance of the proposed technique. A wide range of simulations on benchmark medical image datasets and comprehensive comparisons demonstrate the superiority of the proposed BDA-CSODL technique under different measures.

12.
Several methods have been proposed in the literature for analyzing complex networks to find significant communities, and modularity optimization is a valuable approach for detecting them. Because of the characteristics of the problem dealt with in this study, exact solution methods consume far too much time, so we propose six metaheuristic optimization algorithms, each containing a modularity optimization approach: the original Bat Algorithm (BA), the Gravitational Search Algorithm (GSA), a modified Big Bang–Big Crunch algorithm (BB-BC), an improved Bat Algorithm based on the Differential Evolutionary algorithm (BADE), an effective Hyperheuristic Differential Search Algorithm (HDSA), and a Scatter Search algorithm based on the Genetic Algorithm (SSGA). Four of these (HDSA, BADE, SSGA, BB-BC) contain new methods, whereas the remaining two (BA and GSA) use the original methods. To demonstrate the algorithms' performance, experiments were conducted on nine real-world complex networks: five social networks and four biological networks. The algorithms were compared in terms of statistical significance, and according to the test results, the HDSA proposed in this study is more efficient and competitive than the other algorithms tested.
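The objective all six metaheuristics optimize is modularity, which in its standard Newman-Girvan form for an unweighted graph reads:

```latex
Q = \frac{1}{2m} \sum_{i,j} \left( A_{ij} - \frac{k_i k_j}{2m} \right) \delta(c_i, c_j),
```

where A is the adjacency matrix, k_i the degree of node i, m the number of edges, and δ(c_i, c_j) = 1 exactly when nodes i and j are assigned to the same community.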

13.
The Rotation Forest classifier is a successful ensemble method for a wide variety of data mining applications. However, the way Rotation Forest transforms the feature space through PCA, although powerful, penalizes training and prediction times, making it infeasible for Big Data. In this paper, a MapReduce Rotation Forest and its implementation under the Spark framework are presented. The proposed MapReduce Rotation Forest behaves in the same way as the standard Rotation Forest, training the base classifiers on a rotated space, but uses a functional implementation of the rotation that enables execution in Big Data frameworks. Experimental results are obtained using different cloud-based cluster configurations, and Bayesian tests are used to validate the method against two Big Data ensembles, the Random Forest and PCARDE classifiers. Our proposal parallelizes both the PCA calculation and the tree training, providing a scalable solution that retains the performance of the original Rotation Forest and achieves competitive execution time (on average, training more than 3 times faster than other PCA-based alternatives). In addition, extensive experimentation shows that by setting some parameters of the classifier (bootstrap sample size, number of trees, and number of rotations), the execution time can be reduced with no significant loss of performance, using a small ensemble.
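As a single-node reference point, the rotation step of a standard Rotation Forest is: split the features into disjoint subsets, run PCA on a bootstrap sample of each subset, assemble the loadings into a block-diagonal rotation matrix, and train a tree on the rotated data. The scikit-learn sketch below shows that baseline for one base learner; it is not the paper's MapReduce/Spark implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier

def train_rotated_tree(X, y, n_subsets=3, seed=0):
    """One Rotation Forest base learner: PCA per feature subset,
    block-diagonal rotation matrix, then a tree on the rotated space.
    Assumes each bootstrap sample has at least as many rows as the
    subset has features, so each PCA block is square."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    subsets = np.array_split(rng.permutation(d), n_subsets)
    R = np.zeros((d, d))
    for s in subsets:
        sample = rng.choice(len(X), size=int(0.75 * len(X)), replace=True)
        pca = PCA().fit(X[sample][:, s])        # PCA on a bootstrap sample
        R[np.ix_(s, s)] = pca.components_.T     # block-diagonal loadings
    tree = DecisionTreeClassifier(random_state=0).fit(X @ R, y)
    return tree, R

# Prediction applies the same rotation: tree.predict(X_new @ R)
```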

14.
This survey presents the concept of Big Data. Firstly, a definition and the features of Big Data are given. Secondly, the different steps for Big Data data processing and the main problems encountered in big data management are described. Next, a general overview of an architecture for handling it is depicted. Then, the problem of merging Big Data architecture in an already existing information system is discussed. Finally this survey tackles semantics (reasoning, coreference resolution, entity linking, information extraction, consolidation, paraphrase resolution, ontology alignment) in the Big Data context.  相似文献   

15.
Big data denotes the variety, velocity, and massive volume of data. Existing databases are unsuitable to store big data owing to its high volume. Cloud computing is an optimal solution to process and store big data. However, the significant issue lies in handling access control and privacy, wherein the data should be encrypted and unauthorized user access must be restricted through efficient access control. Attribute-based encryption (ABE) permits users to encrypt and decrypt data. However, for the policy to work in practical scenarios, the attributes must be repeated. In the case of specific policies, it is not possible to avoid attribute repetition even after the application of Boolean optimization approaches to obtain a Boolean formula. For these policies, there exists a variety of evaluated secret shares for the repeated attributes. Therefore, the calculation of cipher text for these irreducible policies seems to be lengthy and computationally intensive. To address this problem, an improved meta-heuristic-based repeated attributes optimization on cipher-text policy-ABE (CP-ABE) is developed in this study. Here, the improved meta-heuristic concept is developed in the encryption phase, which returns the optimized single share value of each repeated attribute after considering all the attribute shares. The optimization process not only minimizes the encryption cost but also the communication cost. Herein, the improved sun flower optimization (SFO), called the newly updated SFO (NU-SFO) is used to perform the repeated attribute optimization in CP-ABE. Finally, the performance evaluation confirms the reliability and robustness of the developed scheme through comparisons with traditional constructions.  相似文献   

16.
MAX–MIN Ant System
Ant System, the first Ant Colony Optimization algorithm, showed itself to be a viable method for attacking hard combinatorial optimization problems. Yet its performance, compared with more fine-tuned algorithms, was rather poor on large instances of traditional benchmark problems such as the Traveling Salesman Problem. To show that Ant Colony Optimization algorithms can be good alternatives to existing algorithms for hard combinatorial optimization problems, recent research in this area has mainly focused on developing algorithmic variants that achieve better performance than Ant System. In this paper, we present MAX–MIN Ant System (MMAS), an Ant Colony Optimization algorithm derived from Ant System. MMAS differs from Ant System in several important aspects, whose usefulness we demonstrate by means of an experimental study. Additionally, we relate one of the characteristics specific to MMAS (its use of a greedier search than Ant System) to results from the search space analysis of the combinatorial optimization problems attacked in this paper. Our computational results on the Traveling Salesman Problem and the Quadratic Assignment Problem show that MMAS is currently among the best performing algorithms for these problems.
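The mechanism behind the name is the bounded pheromone update: only the best ant deposits pheromone, and trails are clamped to an interval, which keeps the greedier search from stagnating. Up to notation, the update is:

```latex
\tau_{ij} \leftarrow \left[ \rho\,\tau_{ij} + \Delta\tau_{ij}^{\mathrm{best}} \right]_{\tau_{\min}}^{\tau_{\max}},
```

where ρ ∈ (0, 1) is the trail persistence, Δτ^best is deposited only on the arcs of the iteration-best (or global-best) solution, and the brackets clamp the result to [τ_min, τ_max].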

17.
In this paper, response surface methodology (RSM) and stochastic optimization (SO) are compared with regard to their efficiency and applicability in crashworthiness design. Optimization of simple analytic expressions and of a front rail structure are used to assess the respective qualities of both methods, and a low-detail vehicle structure is optimized to demonstrate their applicability in engineering practice. The investigations reveal that RSM outperforms SO for fewer than 10–15 design variables, while the convergence behaviour of SO improves relative to RSM as the number of design variables increases. A novel zooming method is proposed that improves the convergence behaviour. A combination of the two methods is also efficient: stochastic optimization can be used to determine appropriate starting points, from which an RSM optimization then continues. Two examples are investigated using this combined method.
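RSM's advantage at low dimension comes from fitting a small polynomial surrogate to a handful of simulations; the quadratic model typically used is:

```latex
\hat{y}(\mathbf{x}) = \beta_0 + \sum_{i=1}^{n} \beta_i x_i
  + \sum_{i=1}^{n} \sum_{j \ge i} \beta_{ij}\, x_i x_j ,
```

whose (n+1)(n+2)/2 coefficients are estimated by least squares from sampled crash simulations. The quadratic growth of that coefficient count with n is consistent with RSM losing its edge beyond 10-15 design variables.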

18.
Internet of Things (IoT) applications involve millions of structured and unstructured data records and raise numerous problems in data organization, production, and capture. To address these shortcomings, big data analytics is the natural technology to adopt. Yet even though big data and the IoT can make human life more convenient, those benefits come at the expense of security. To manage such threats, intrusion detection systems have been extensively applied to identify malicious network traffic, particularly once preventive techniques fail at the level of endpoint IoT devices. As cyberattacks targeting the IoT have gradually become stealthier and more sophisticated, intrusion detection systems (IDS) must continually evolve to manage emerging security threats. This study devises the Big Data Analytics with Internet of Things Assisted Intrusion Detection using Modified Buffalo Optimization Algorithm with Deep Learning (IDMBOA-DL) algorithm. In the presented IDMBOA-DL model, the Hadoop MapReduce tool is exploited to manage big data, and the MBOA algorithm is applied to derive an optimal subset of features. Finally, the sine cosine algorithm (SCA) with a convolutional autoencoder (CAE) is used to recognize and classify intrusions in the IoT network. A wide range of simulations demonstrates the enhanced results of the IDMBOA-DL algorithm, and the comparison outcomes emphasize its better performance over other approaches.

19.
The quality of data is directly related to the quality of the models drawn from it, and much research is therefore devoted to improving data quality and amending the errors it may contain. One of the most common problems is the presence of noise in classification tasks, where noise refers to incorrectly labeled training instances; this is highly disruptive, as it changes the decision boundaries of the problem. Big Data poses a new challenge for data quality due to the massive and unsupervised accumulation of data, and it also brings new problems to classic data preprocessing algorithms, which are not prepared to work with such amounts of data even though they are key to moving from Big Data to Smart Data. In this paper, an iterative ensemble filter for removing noisy instances in Big Data scenarios is proposed. Experiments carried out on six Big Data datasets show that our noise filter outperforms the current state-of-the-art noise filter in Big Data domains. It also proves to be an effective solution for transforming raw Big Data into Smart Data.
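A hedged single-machine sketch of the iterative ensemble-filter idea: classifiers trained on cross-validation folds vote on every instance's label, instances the ensemble majority disagrees with are dropped, and the process repeats until no instance is removed. This follows the generic ensemble-filter scheme; the paper's distributed implementation and classifier choices may differ.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

def iterative_ensemble_filter(X, y, max_rounds=5):
    """Remove likely mislabeled instances by majority vote of
    out-of-fold predictions from a heterogeneous ensemble.
    X, y: NumPy arrays."""
    models = [DecisionTreeClassifier(random_state=0),
              GaussianNB(),
              KNeighborsClassifier()]
    keep = np.ones(len(y), bool)
    for _ in range(max_rounds):
        Xk, yk = X[keep], y[keep]
        votes = np.stack([cross_val_predict(m, Xk, yk, cv=5) for m in models])
        wrong = (votes != yk).sum(0) > len(models) // 2  # majority disagrees
        if not wrong.any():
            break                                        # nothing left to drop
        idx = np.flatnonzero(keep)
        keep[idx[wrong]] = False                         # drop noisy rows
    return X[keep], y[keep]
```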

20.
Optimization of a two-level microprogrammed automaton is considered. The optimization method is based on equivalent transformation of the automaton, grouping the automaton states into equivalence classes with special encoding.

Translated from Kibernetika i Sistemnyi Analiz, No. 5, pp. 180–184, September–October, 1991.
