Similar Documents
 20 similar documents found; search took 15 ms
1.
Numerous real-world problems relating to ship design and shipping are characterised by combinatorially explosive alternatives as well as multiple conflicting objectives, and are denoted multi-objective combinatorial optimisation (MOCO) problems. The main difficulty is that the solution space is very large, so the set of feasible solutions cannot be enumerated one by one. Current approaches to solving these problems are multi-objective metaheuristic techniques, which fall into two categories: population-based search and trajectory-based search. This paper gives an overall view of MOCO problems in ship design and shipping, with considerable emphasis on evolutionary computation and the evaluation of trade-off solutions. A two-stage hybrid approach is proposed for solving a particular MOCO problem in ship design: the subdivision arrangement of a ROPAX vessel. In the first stage, a multi-objective genetic algorithm is employed to approximate the set of Pareto-optimal solutions through an evolutionary optimisation process. In the subsequent stage, a higher-level decision-making approach is adopted to rank these solutions from best to worst and to determine the best solution in a deterministic environment with a single decision maker.
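The core operation behind any such multi-objective GA is the Pareto-dominance test used to keep only trade-off solutions. A minimal illustrative sketch, assuming minimisation on all objectives (not the paper's implementation; the candidate objective values are hypothetical):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation everywhere)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective vectors."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Hypothetical candidate layouts scored on two objectives, e.g. (cost, risk)
candidates = [(3.0, 5.0), (2.0, 6.0), (4.0, 7.0)]
front = pareto_front(candidates)  # (4.0, 7.0) is dominated by (3.0, 5.0)
```

The evolutionary process repeatedly applies this filter to the population, so the survivors approximate the Pareto-optimal set handed to the decision-making stage.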

2.
The problem of obtaining relevant results in web searching has been tackled with several approaches. Although very effective techniques are currently used by the most popular search engines when no a priori knowledge of the user's desires besides the search keywords is available, in different settings it is conceivable to design search methods that operate on a thematic database of web pages that refer to a common body of knowledge or to specific sets of users. We have considered such premises to design and develop a search method that deploys data mining and optimization techniques to provide a more significant and restricted set of pages as the final result of a user search. We adopt a vectorization method based on search context and user profile to apply clustering techniques, which are then refined by a specially designed genetic algorithm. In this paper we describe the method, its implementation, and the algorithms applied, and discuss some experiments that have been run on test sets of web pages.

3.
In this paper, a content-aware approach is proposed to design multiple test conditions for shot cut detection, organized into a multiple-phase decision tree for abrupt cut detection and a finite state machine for dissolve detection. In comparison with existing approaches, our algorithm is characterized by two categories of content difference indicators and testing. While the first category indicates the content changes that are directly used for shot cut detection, the second category indicates the contexts in which the content change occurs. As a result, indications of frame differences are tested with context awareness to make the detection of shot cuts adaptive to both content and context changes. Evaluations announced by TRECVID 2007 indicate that the proposed algorithm achieves performance comparable to that of approaches using machine learning, yet with a simpler feature set and straightforward design strategies. This validates the effectiveness of modelling content-aware indicators for decision making, which also provides a good alternative to conventional approaches in this area.

4.
The service-oriented paradigm is emerging as a new approach to heterogeneous distributed software systems composed of services accessed locally or remotely through middleware technology. How to select the optimal composited service from a set of functionally equivalent services with different quality of service (QoS) attributes has become an active focus of research in the service community. However, existing middleware solutions and approaches are inefficient because they search the entire solution space. More importantly, they inherently neglect QoS uncertainty arising from the dynamic network environment. In this paper, based on a service composition middleware framework, we propose an efficient and reliable service selection approach that attempts to select the best reliable composited service by filtering out low-reliability services through the computation of QoS uncertainty. The approach first employs information theory and probability theory to discard high-QoS-uncertainty services and downsize the solution space. A reliability fitness function is then designed to select the best reliable service for composited services. We experimented with real-world and synthetic datasets and compared our approach with other approaches. The results show that our approach is not only fast but also finds more reliable composited services.

5.
This work addresses the problem of single-robot coverage and exploration of an environment, with the goal of finding a specific object previously known to the robot. As limited time is a constraint of interest, we cannot search from an infinite number of points. Thus, we propose a multi-objective approach for such search tasks in which we first search for a good set of positions at which to place the robot's sensors in order to acquire information from the environment and locate the desired object. Given the interesting properties of the Generalized Voronoi Diagram, we restrict the candidate search points to this roadmap. We redefine the problem of finding these search points as a multi-objective optimization problem. NSGA-II is used as the search engine, and ELECTRE I is applied as a decision-making tool to choose among the trade-off alternatives. We also solve a Chinese Postman Problem to optimize the path followed by the robot when visiting the computed search points. Simulation results compare the solution found by our method with solutions defined by other known approaches. Finally, a real robot experiment indicates the applicability of our method in practical scenarios.

6.
Community structure is one of the most important properties of complex networks, and the field of community detection has received an enormous amount of attention in the past several years. Many quality metrics and methods have been proposed for revealing community structures at multiple resolution levels, but most existing methods need a tunable parameter in their quality metrics to determine the resolution level in advance. In this study, a multi-objective evolutionary algorithm (MOEA) for revealing multi-resolution community structures is proposed. The proposed MOEA-based community detection algorithm aims to find a set of tradeoff solutions representing network partitions at different resolution levels in a single run. It adopts an efficient multi-objective immune algorithm to simultaneously optimize two contradictory objective functions, Modified Ratio Association and Ratio Cut. Optimizing Modified Ratio Association tends to divide a network into small communities, while optimizing Ratio Cut tends to divide a network into large communities. The simultaneous optimization of these two contradictory objectives returns a set of tradeoff solutions, each of which corresponds to a network partition at one resolution level. Experiments on artificial and real-world networks show that the proposed method can reveal community structures of networks at different resolution levels in a single run.
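The two contradictory objectives can be written down directly from an adjacency matrix. A minimal sketch of Ratio Association and Ratio Cut (illustrative only; the paper's Modified Ratio Association may differ in detail, and the toy network below is hypothetical):

```python
def ratio_assoc(adj, communities):
    """Sum over communities of internal link weight divided by community size.
    Larger is better; favours many small, dense communities."""
    total = 0.0
    for c in communities:
        internal = sum(adj[i][j] for i in c for j in c)  # counts each edge twice
        total += internal / len(c)
    return total

def ratio_cut(adj, communities):
    """Sum over communities of links leaving the community divided by its size.
    Smaller is better; favours few large communities."""
    n = len(adj)
    total = 0.0
    for c in communities:
        outside = [v for v in range(n) if v not in c]
        total += sum(adj[i][j] for i in c for j in outside) / len(c)
    return total

# Toy network: two triangles (0-1-2 and 3-4-5) joined by the single edge 2-3
adj = [[0, 1, 1, 0, 0, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 1, 0, 0],
       [0, 0, 1, 0, 1, 1],
       [0, 0, 0, 1, 0, 1],
       [0, 0, 0, 1, 1, 0]]
parts = [[0, 1, 2], [3, 4, 5]]
```

Because the two criteria pull partitions in opposite directions, optimizing both at once yields a front of partitions at different resolution levels, which is exactly what the MOEA returns.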

7.
Signed graphs or networks are effective models for analyzing complex social systems, and community detection in signed networks has received enormous attention from diverse fields. In this paper, the signed network community detection problem is addressed from the viewpoint of evolutionary computation. A multiobjective optimization model based on link density is proposed for the community detection problem, and a novel multiobjective particle swarm optimization algorithm is put forward to solve it. Each single run of the proposed algorithm produces a set of evenly distributed Pareto solutions, each of which represents a network community structure. To assess the performance of the proposed algorithm, extensive experiments on synthetic and real-world signed networks are carried out, including comparisons against several state-of-the-art approaches for signed network community detection. The experiments demonstrate that the proposed optimization model and algorithm are promising for community detection in signed networks.

8.
Hybridization of fuzzy GBML approaches for pattern classification problems   (total citations: 4; self-citations: 0; citations by others: 4)
We propose a hybrid algorithm of two fuzzy genetics-based machine learning approaches (i.e., Michigan and Pittsburgh) for designing fuzzy rule-based classification systems. First, we examine the search ability of each approach to efficiently find fuzzy rule-based systems with high classification accuracy. It is clearly demonstrated that each approach has its own advantages and disadvantages. Next, we combine these two approaches into a single hybrid algorithm. Our hybrid algorithm is based on the Pittsburgh approach where a set of fuzzy rules is handled as an individual. Genetic operations for generating new fuzzy rules in the Michigan approach are utilized as a kind of heuristic mutation for partially modifying each rule set. Then, we compare our hybrid algorithm with the Michigan and Pittsburgh approaches. Experimental results show that our hybrid algorithm has higher search ability. The necessity of a heuristic specification method of antecedent fuzzy sets is also demonstrated by computational experiments on high-dimensional problems. Finally, we examine the generalization ability of fuzzy rule-based classification systems designed by our hybrid algorithm.

9.
Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many algorithms have been proposed for this problem, which are either distance-based or model-based. However, model selection in attributed graph clustering has not been well addressed: most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

10.
Selection of optimum machining parameters is vital to machining processes in order to ensure product quality, reduce machining cost, increase productivity, and conserve resources for sustainability. Hence, in this work an a posteriori multi-objective optimization algorithm named Non-dominated Sorting Teaching–Learning-Based Optimization (NSTLBO) is applied to solve the multi-objective optimization problems of three machining processes, namely turning, wire electric discharge machining, and laser cutting, and two micro-machining processes, namely focused ion beam micro-milling and micro wire electric discharge machining. The NSTLBO algorithm incorporates a non-dominated sorting approach and a crowding distance computation mechanism to maintain a diverse set of solutions and provide a Pareto-optimal set of solutions in a single simulation run. The results of the NSTLBO algorithm are compared with those obtained using GA, NSGA-II, PSO, an iterative search method, and MOTLBO, and are found to be competitive. The Pareto-optimal set of solutions for each optimization problem is obtained and reported. These Pareto-optimal sets of solutions will help the decision maker in volatile scenarios and are useful for real production systems.
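The crowding distance mechanism mentioned above is standard in NSGA-II-style algorithms: within one non-dominated front, it rewards solutions in sparsely populated regions of objective space. A minimal sketch (illustrative, not the authors' code):

```python
def crowding_distance(front):
    """NSGA-II-style crowding distance for a front of objective vectors.
    Boundary solutions get infinite distance; interior solutions sum the
    normalized gap between their neighbours along each objective."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float('inf')
        if hi == lo:
            continue  # all solutions identical on this objective
        for pos in range(1, n - 1):
            i = order[pos]
            dist[i] += (front[order[pos + 1]][k] - front[order[pos - 1]][k]) / (hi - lo)
    return dist
```

During selection, solutions with larger crowding distance are preferred among equally ranked ones, which is what keeps the reported Pareto set diverse.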

11.
Multiple sequence alignment is of central importance to bioinformatics and computational biology. Although a large number of algorithms for computing a multiple sequence alignment have been designed, the efficient computation of highly accurate and statistically significant multiple alignments is still a challenge. In this paper, we propose an efficient method using a multi-objective genetic algorithm (MSAGMOGA) to discover optimal alignments with affine gaps in multiple sequence data. The main advantage of our approach is that a large number of tradeoff (i.e., non-dominated) alignments can be obtained in a single run with respect to the conflicting objectives: affine gap penalty minimization, and similarity and support maximization. To the best of our knowledge, this is the first effort with three objectives in this direction. The proposed method can be applied to any data set with a sequential character, and it allows any choice of similarity measure for finding alignments. By analyzing the obtained optimal alignments, the decision maker can understand the tradeoff between the objectives. We compared our method with three well-known multiple sequence alignment methods: MUSCLE, SAGA, and MSA-GA. The first of these is a progressive method, while the other two are based on evolutionary algorithms. Experiments on the BAliBASE 2.0 database confirm that MSAGMOGA obtains results with better accuracy and statistical significance than the three well-known methods in computing multiple sequence alignments with affine gaps. The proposed method also finds solutions faster than the other evolutionary approaches mentioned above.
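The affine gap model referenced above charges a fixed cost for opening a gap plus a smaller cost per extended position, so one long gap is cheaper than several short ones. A minimal sketch of scoring one aligned sequence (illustrative; the gap_open and gap_extend values are hypothetical, not the paper's):

```python
def affine_gap_penalty(aligned, gap_open=10.0, gap_extend=0.5):
    """Total affine gap penalty of one aligned sequence, with '-' for gaps:
    each maximal run of gaps costs gap_open + gap_extend * run_length."""
    total, in_gap = 0.0, False
    for ch in aligned:
        if ch == '-':
            if not in_gap:
                total += gap_open  # charge the opening cost once per run
            total += gap_extend
            in_gap = True
        else:
            in_gap = False
    return total

# "AC--GT-A" has two gap runs: one of length 2 and one of length 1
penalty = affine_gap_penalty("AC--GT-A")  # 2 openings + 3 extensions = 21.5
```

In the multi-objective setting, this penalty is minimized while similarity and support are maximized, and the GA returns the non-dominated combinations rather than a single weighted compromise.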

12.
Many database search methods have been developed for peptide identification over large peptide data sets. Most of these approaches attempt to build a decision function that allows the identification of an experimental spectrum. This function is built either from similarity measures over the database peptides, to identify the one most similar to a given spectrum, or by applying learning techniques that treat the database itself as training data. In this paper, we propose a peptide identification method based on a similarity measure for peptide-spectrum matches. Our method takes peak intensity distribution into account and applies it in a probabilistic scoring model to rank peptide matches. The main goal of our approach is to highlight the relationship between peak intensities and peptide cleavage positions on the one hand, and to show its impact on peptide identification on the other. To evaluate our method, a set of experiments was undertaken on two high-accuracy mass spectrum data sets. The obtained results show the effectiveness of the proposed approach.

13.
Some of the current best conformant probabilistic planners focus on finding a fixed-length plan with maximal probability. While these approaches can find optimal solutions, they often do not scale for large problems or plan lengths. As has been shown in classical planning, heuristic search outperforms bounded-length search (especially when an appropriate plan length is not given a priori). The problem with applying heuristic search in probabilistic planning is that effective heuristics are as yet lacking. In this work, we apply heuristic search to conformant probabilistic planning by adapting planning graph heuristics developed for non-deterministic planning. We evaluate a straightforward application of these planning graph techniques, which amounts to exactly computing a distribution over many relaxed planning graphs (one planning graph for each joint outcome of uncertain actions at each time step). Computing this distribution is costly, so we apply Sequential Monte Carlo (SMC) to approximate it. One important issue we explore in this work is how to automatically determine the number of samples required for effective heuristic computation. We empirically demonstrate on several domains how our efficient, though sometimes suboptimal, approach enables our planner to solve much larger problems than an existing optimal bounded-length probabilistic planner while still finding reasonable-quality solutions.

14.
In this paper two new target-setting DEA approaches are proposed. The first is an interactive multiobjective method that at each step asks the decision maker (DM) which inputs and outputs he wishes to improve, which ones are allowed to worsen, and which ones should stay at their current level. The local relative priorities of these input and output changes are computed using the analytic hierarchy process (AHP). After obtaining a candidate target, the DM can update his preferences for improving, worsening, or maintaining current input and output levels and obtain a new candidate target. This continues until a satisfactory operating point is reached. The second method uses a lexicographic multiobjective approach in which the DM specifies a priori a set of priority levels and, using AHP, the relative importance given to the improvements of the inputs and outputs at each priority level. This second approach requires solving a series of models in order, one for each priority level. The models do not allow worsening of either inputs or outputs. After the lowest-priority model has been solved, the corresponding target operating point is obtained. An application of the proposed approach to a port logistics problem is presented.
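The AHP step above derives priority weights from a pairwise comparison matrix of the DM's judgements. A minimal sketch using the common row geometric-mean approximation (the paper may use the principal eigenvector instead; the judgement matrix below is hypothetical):

```python
from math import prod

def ahp_priorities(pairwise):
    """Approximate AHP priority weights from a reciprocal pairwise comparison
    matrix via the row geometric-mean method, normalized to sum to 1."""
    n = len(pairwise)
    gmeans = [prod(row) ** (1.0 / n) for row in pairwise]
    s = sum(gmeans)
    return [g / s for g in gmeans]

# Hypothetical judgements over three input/output changes on Saaty's 1-9 scale
M = [[1,     3,     5],
     [1 / 3, 1,     3],
     [1 / 5, 1 / 3, 1]]
weights = ahp_priorities(M)  # roughly [0.64, 0.26, 0.10]
```

These normalized priorities are what weight the desired input and output changes when the next candidate target is computed.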

15.
Multiple criteria decision making (MCDM) is widely used to rank one or more alternatives from a set of available alternatives with respect to multiple criteria. Inspired by MCDM's systematic evaluation of alternatives under various criteria, we propose a new fuzzy TOPSIS that evaluates alternatives by integrating subjective and objective weights. Most MCDM approaches consider only decision makers' subjective weights; however, the end-user attitude can be a key factor. We propose a novel approach that involves the end-user in the whole decision-making process. In the proposed approach, the subjective weights assigned by decision makers (DMs) are normalized into a comparable scale. In addition, we adopt end-user ratings as objective weights based on Shannon's entropy theory. A closeness coefficient is defined to determine the ranking order of alternatives by calculating the distances to both the ideal and negative-ideal solutions. A case study shows how the proposed method can be used for a software outsourcing problem. With our method, decision makers have more information with which to make better-informed decisions.
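The entropy-based objective weights and the TOPSIS closeness coefficient can be sketched as follows (illustrative only; the paper's normalization and distance details may differ, and the rating matrix below is hypothetical):

```python
from math import log

def entropy_weights(ratings):
    """Objective criterion weights from Shannon entropy of end-user ratings
    (rows = alternatives, columns = criteria): criteria whose ratings are
    more dispersed carry more information and get larger weight."""
    m = len(ratings)  # number of alternatives
    weights = []
    for col in zip(*ratings):
        s = sum(col)
        p = [x / s for x in col]
        e = -sum(pi * log(pi) for pi in p if pi > 0) / log(m)  # entropy in [0, 1]
        weights.append(1 - e)  # degree of diversification
    total = sum(weights)
    return [w / total for w in weights]

def closeness(dist_pos, dist_neg):
    """TOPSIS closeness coefficient: 1 means at the ideal solution,
    0 means at the negative-ideal solution."""
    return dist_neg / (dist_pos + dist_neg)
```

A criterion on which every alternative is rated identically gets weight zero, which matches the intuition that it cannot discriminate between alternatives.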

16.
This paper focuses on a typical problem arising in serial production, where two consecutive departments must sequence their internal work, each taking into account the requirements of the other. Although the problem is inherently multi-objective, to date the only heuristic approaches dealing with it use single-objective formulations and require specific assumptions on the objective function, leaving the most general case of the problem open to innovative approaches. In this paper, we develop and compare three evolutionary algorithms for this type of combinatorial problem. Two algorithms perform directed search by aggregating the objectives of each department into a single fitness, while the third searches for the Pareto front of non-dominated solutions. We apply the three algorithms to considerably complex case studies derived from the industrial production of furniture. First, we validate the effectiveness of the proposed genetic algorithms on a simple case study for which information about the optimal solution is available. Then, we focus on more complex case studies, for which no a priori indication of the optimal solutions is available, and perform an extensive comparison of the various approaches. All the considered algorithms are able to find satisfactory solutions for large production sequences with nearly 300 jobs in acceptable computation times, but they also exhibit complementary characteristics that suggest hybrid combinations of the various methods.

17.
Segmentation is considered the central part of an image processing system due to its strong influence on subsequent image analysis. In recent years, the segmentation of magnetic resonance (MR) images has attracted the attention of the scientific community with the objective of assisting diagnosis of different brain diseases. Among the available techniques, thresholding is one of the most popular methods for image segmentation. A large number of contributions have been proposed in the literature in which thresholding values are obtained by optimizing relevant criteria such as the cross entropy. However, most such approaches are computationally expensive, since they conduct an exhaustive search for the optimal thresholding values. This paper presents a general method for image segmentation. To estimate the thresholding values, the proposed approach uses the recently published evolutionary method called the Crow Search Algorithm (CSA), which is based on the flocking behavior of crows. Unlike other optimization techniques used for segmentation purposes, CSA presents better performance, avoiding critical flaws such as premature convergence to sub-optimal solutions and a limited exploration-exploitation balance in the search strategy. Although the proposed method can be used as a generic segmentation algorithm, its characteristics allow excellent results in the automatic segmentation of complex MR images. Under such circumstances, our approach has been evaluated using two sets of benchmark images: the first set is composed of general images commonly used in the image processing literature, while the second corresponds to MR brain images. Experimental results, statistically validated, demonstrate that the proposed technique obtains better results in terms of quality and consistency.
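For reference, this is the kind of exhaustive search the abstract calls expensive: minimum cross entropy thresholding (Li's criterion) over a grey-level histogram, the loop an evolutionary method such as CSA is meant to replace with far fewer criterion evaluations (illustrative sketch, single threshold only):

```python
from math import log

def min_cross_entropy_threshold(hist):
    """Exhaustive-search minimum cross entropy threshold over a histogram
    hist[g] = count of pixels at grey level g. Returns the level t that
    minimizes Li's criterion for the two classes {g < t} and {g >= t}."""
    best_t, best_eta = None, float('inf')
    for t in range(1, len(hist)):
        w1 = sum(hist[g] for g in range(t))           # mass below threshold
        w2 = sum(hist[g] for g in range(t, len(hist)))  # mass at/above threshold
        if w1 == 0 or w2 == 0:
            continue
        m1 = sum(g * hist[g] for g in range(t))
        m2 = sum(g * hist[g] for g in range(t, len(hist)))
        mu1, mu2 = m1 / w1, m2 / w2                    # class mean grey levels
        eta = -(m1 * log(mu1) if mu1 > 0 else 0) - (m2 * log(mu2) if mu2 > 0 else 0)
        if eta < best_eta:
            best_t, best_eta = t, eta
    return best_t

# Bimodal toy histogram: dark pixels at levels 0-1, bright pixels at 6-7
t = min_cross_entropy_threshold([5, 5, 0, 0, 0, 0, 5, 5])
```

For L grey levels and k thresholds the exhaustive search grows as O(L^k), which is why multilevel thresholding is handed to a metaheuristic.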

18.
Optimizing the operation of cooperative multi-agent systems that can deal with large and realistic problems has become an important focal area of research in the multi-agent community. In this paper, we first present a new model, the OC-DEC-MDP (Opportunity Cost Decentralized Markov Decision Process), that allows us to represent large multi-agent decision problems with temporal and precedence constraints. Then, we propose polynomial algorithms to efficiently solve problems formalized by OC-DEC-MDPs. The problems we deal with consist of a set of agents that have to execute a set of tasks in a cooperative way. The agents cannot communicate during task execution and they must respect resource and temporal constraints. Our approach is based on Decentralized Markov Decision Processes (DEC-MDPs) and uses the concept of opportunity cost borrowed from economics to obtain approximate control policies. Experimental results show that our approach produces good quality solutions for complex problems which are out of reach of existing approaches.

19.
When classifying search queries into a set of target categories, conventional machine learning based approaches usually make use of external sources of information to obtain additional features for search queries and training data for target categories. Unfortunately, these approaches rely on large amounts of training data for high classification precision. Moreover, they are known to suffer from an inability to adapt to different target categories, which may be caused by the dynamic changes observed in both the Web topic taxonomy and Web content. In this paper, we propose a feature-free classification approach using semantic distance. We analyze queries and categories themselves and utilize the number of Web pages containing both a query and a category as a semantic distance to determine their similarity. The most attractive feature of our approach is that it utilizes only the Web page counts estimated by a search engine to provide search query classification with respectable accuracy. In addition, it easily adapts to changes in the target categories, whereas machine learning based approaches require an extensive updating process (e.g., re-labeling outdated training data and re-training classifiers) that is time-consuming and costly. We conduct an experimental study of the effectiveness of our approach using a set of rank measures and show that our approach performs competitively with some popular state-of-the-art solutions which, however, frequently use external sources and inherently lack flexibility.
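Page-count-based semantic distances of this kind are often formulated like the Normalized Google Distance; a sketch under that assumption (the paper's exact formula may differ, and all counts below are hypothetical):

```python
from math import log

def semantic_distance(f_x, f_y, f_xy, n_pages):
    """NGD-style distance from search engine page counts: f_x and f_y are the
    page counts for each term alone, f_xy the count for both together, and
    n_pages the total number of indexed pages. 0 means the terms always
    co-occur; larger values mean weaker association."""
    lx, ly, lxy = log(f_x), log(f_y), log(f_xy)
    return (max(lx, ly) - lxy) / (log(n_pages) - min(lx, ly))

# Hypothetical counts: classify the query "jaguar" against two categories
d_cars = semantic_distance(5e6, 8e7, 2e6, 1e10)      # query vs "cars"
d_animals = semantic_distance(5e6, 3e7, 3e6, 1e10)   # query vs "animals"
best = "cars" if d_cars < d_animals else "animals"    # pick the closer category
```

Because the only inputs are page counts obtainable from a search engine, the classifier needs no feature extraction and no retraining when the category set changes.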

20.
In the Team Orienteering Problem (TOP), a team of vehicles attempts to collect rewards at a given number of stops within a specified time frame. Once a vehicle visits a stop and collects its reward, no other vehicle can collect the reward again. Typically, a team cannot visit all stops and therefore has to identify the "best" set of stops to visit in order to maximize total rewards. We propose a large neighborhood search method with three improvement algorithms: a local search improvement, a shift-and-insertion improvement, and a replacement improvement. Our approach finds the best known solutions for 386 of the 387 benchmark instances; for the one instance where our solution is not the current best, it differs from the best by only one. Our approach outperforms all previous approaches in terms of solution quality and computation time.
