Similar Articles
20 similar articles found (search time: 46 ms)
1.
We show that a Simple Stochastic Game (SSG) can be formulated as an LP-type problem. Using this formulation, together with the known algorithm of Sharir and Welzl [SW] for LP-type problems, we obtain the first strongly subexponential solution for SSGs (a strongly subexponential algorithm was previously known only for binary SSGs [L]). Using known reductions between various games, we achieve the first strongly subexponential solutions for Discounted and Mean Payoff Games. We also give alternative simple proofs of the best known upper bounds for Parity Games and binary SSGs. To the best of our knowledge, the LP-type framework has so far been used only to obtain linear or near-linear time algorithms for various problems in computational geometry and location theory. Our approach demonstrates the applicability of the LP-type framework to other fields and to achieving subexponential algorithms.

2.
General-purpose computing on graphics processing units (GPGPU) has been adopted to accelerate long-running applications in various problem domains. Tabu Search, a meta-heuristic optimization method, has been used to find suboptimal solutions for NP-hard problems within a more reasonable time. In this paper, we investigate how to improve the performance of the Tabu Search algorithm on GPGPU, taking the permutation flow shop scheduling problem (PFSP) as our case study. In a previous approach, proposed recently for solving PFSP by Tabu Search on the GPU, all job permutations are stored in global memory, which successfully eliminates branch divergence. Nevertheless, that algorithm requires a large amount of global memory space, and the resulting volume of global memory accesses degrades system performance. We propose a new approach to address this problem. The main contribution of this paper is an efficient multiple-level loop structure that generates most of the permutations on the fly, which decreases the size of the permutation table and significantly reduces the amount of global memory access. Computational experiments on problems from a standard PFSP benchmark suite show that our approach achieves a best-case performance improvement of about 100% compared with the previous work.
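None of the following code comes from the paper; it is only a Python sketch of the core idea of generating neighborhood permutations lazily on the fly rather than materializing a permutation table. The flow shop makespan recurrence is standard textbook material, and `best_move` is a hypothetical helper, not the paper's GPU kernel:

```python
from itertools import combinations

def makespan(perm, proc):
    """Completion time of the last job on the last machine in a
    permutation flow shop (proc[j][m] = processing time of job j on machine m)."""
    m = len(proc[0])
    finish = [0.0] * m
    for j in perm:
        for k in range(m):
            prev = finish[k - 1] if k > 0 else 0.0   # this job's finish on previous machine
            finish[k] = max(finish[k], prev) + proc[j][k]
    return finish[-1]

def swap_neighborhood(perm):
    """Yield swap-move neighbors one at a time: each neighbor is
    reconstructed from its loop indices instead of read from a stored table."""
    for i, j in combinations(range(len(perm)), 2):
        nb = list(perm)
        nb[i], nb[j] = nb[j], nb[i]
        yield (i, j), tuple(nb)

def best_move(perm, proc, tabu):
    """One tabu-search-style step: the best non-tabu swap."""
    best = None
    for move, nb in swap_neighborhood(perm):
        if move in tabu:
            continue
        c = makespan(nb, proc)
        if best is None or c < best[0]:
            best = (c, move, nb)
    return best
```

A GPU version would map iterations of the `combinations` loop to threads; the point of the sketch is only that each neighbor can be regenerated from loop indices rather than fetched from global memory.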

3.
Memory allocation has a major influence on multiuser systems, cloud-based services, virtual machines, and other computer systems. Memory allocation is a process that assigns physical or virtual memory space to programs and services as efficiently and quickly as possible. Economical memory allocation management needs allocation strategies with minimum wastage. In this paper, we introduce a new memory allocation algorithm based on sequential fits and zoning for on-demand (online) cloud services. The memory is divided into multiple zones, where a subgroup of relative request sizes compete in reverse order. We use simulation to compare our new mechanism with existing memory allocation methods, deployed using Amazon Elastic Compute Cloud as a test bed. The proposed algorithm is more efficient, and the average saving in normalized revenue loss is about 7% better than best-fit and 15% better than first-fit memory allocation. In addition, we show that the proposed algorithm is robust and faster and has a fairness index superior to that of existing techniques.
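The paper's exact zoning scheme is not reproduced here; the following Python sketch only illustrates the general idea of a sequential-fit allocator with zoning: each request is routed to the zone serving its size band, and a first-fit scan runs only inside that zone. The zone bands, capacities, and fallback-to-larger-zone rule are illustrative assumptions, not the paper's parameters:

```python
def _first_fit(cells, size):
    """Offset of the first free run of `size` cells, or None."""
    run = 0
    for i, used in enumerate(cells):
        run = 0 if used else run + 1
        if run == size:
            return i - size + 1
    return None

class ZonedAllocator:
    """Toy sequential-fit allocator with zoning."""

    def __init__(self, bands):
        # bands: (max_request_size, zone_capacity_in_cells) pairs, ascending.
        self.zones = [(limit, [False] * cap) for limit, cap in bands]

    def alloc(self, size):
        # Route to the first zone whose size band fits; fall back to larger zones.
        for limit, cells in self.zones:
            if size <= limit:
                off = _first_fit(cells, size)
                if off is not None:
                    for i in range(off, off + size):
                        cells[i] = True
                    return (limit, off)          # handle: (zone band, offset)
        return None                              # no zone can serve the request

    def free(self, handle, size):
        limit, off = handle
        for lim, cells in self.zones:
            if lim == limit:
                for i in range(off, off + size):
                    cells[i] = False
                return
```

Restricting the scan to one zone keeps small and large requests from fragmenting each other's regions, which is the intuition behind the reduced wastage reported above.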

4.
International Journal of Computer Mathematics, 2012, 89(12): 1731-1741
In this paper we address the problem of minimizing the weighted sum of makespan and maximum tardiness in an m-machine flow shop environment. This problem is NP-hard in the strong sense. We attempt to solve it using the Greedy Randomized Adaptive Search Procedure (GRASP), a competitive meta-heuristic for combinatorial optimization problems. We customize the basic concepts of GRASP to the bicriteria flow shop problem and propose a new algorithm named B-GRASP (Bicriteria GRASP). The proposed algorithm is evaluated on benchmark problems taken from Taillard and compared with the existing simulated-annealing-based heuristic of Chakravarthy and Rajendran. Computational experiments indicate that the proposed algorithm is much better than the existing one in all cases.
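GRASP itself is a generic template: repeat a greedy randomized construction (using a restricted candidate list, RCL) followed by local search, and keep the best solution found. The sketch below is not the paper's B-GRASP; it applies the same template to a tiny 0-1 knapsack purely to illustrate the template, and every parameter value is invented:

```python
import random

def grasp(items, capacity, alpha=0.3, iters=50, seed=0):
    """Minimal GRASP sketch on a 0-1 knapsack. items: (value, weight) pairs."""
    rng = random.Random(seed)
    best_set, best_val = set(), 0

    def value(sol):
        return sum(items[i][0] for i in sol)

    def weight(sol):
        return sum(items[i][1] for i in sol)

    for _ in range(iters):
        # --- greedy randomized construction with a restricted candidate list
        sol, cand = set(), set(range(len(items)))
        while True:
            feas = [i for i in cand if weight(sol) + items[i][1] <= capacity]
            if not feas:
                break
            score = {i: items[i][0] / items[i][1] for i in feas}   # value density
            hi, lo = max(score.values()), min(score.values())
            rcl = [i for i in feas if score[i] >= hi - alpha * (hi - lo)]
            pick = rng.choice(rcl)
            sol.add(pick)
            cand.discard(pick)
        # --- local search: swap one item out for one item in while it improves
        improved = True
        while improved:
            improved = False
            for out in list(sol):
                for inn in set(range(len(items))) - sol:
                    trial = (sol - {out}) | {inn}
                    if weight(trial) <= capacity and value(trial) > value(sol):
                        sol, improved = trial, True
                        break
                if improved:
                    break
        if value(sol) > best_val:
            best_set, best_val = sol, value(sol)
    return best_set, best_val
```

With `items = [(60, 10), (100, 20), (120, 30)]` and `capacity = 50`, the construction phase alone may stop at value 160, but the swap-based local search lifts it to the optimum 220; the same construct-then-improve loop is what B-GRASP applies to flow shop sequences.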

5.
Genetic programming (GP) is a powerful optimization algorithm that has been applied to a variety of problems. It can, however, suffer because crossover, the main genetic operator in GP, selects crossover points at random, so building blocks may be destroyed. In recent years, evolutionary algorithms based on probabilistic techniques have been proposed to overcome this problem. In the present study, we propose a new program evolution algorithm that employs a Bayesian network for generating new individuals. It uses a special chromosome called the expanded parse tree, which significantly reduces the size of the conditional probability table (CPT). Prior prototype-tree-based approaches have faced the problem of huge CPTs, which not only require significant memory resources but also many samples to construct the Bayesian network. By applying the present approach to three distinct computational experiments, we demonstrate its effectiveness for dealing with deceptive problems.

6.
In this paper, we consider a class of single machine job scheduling problems in which the objective is to minimize the weighted sum of earliness–tardiness penalties of jobs. The weights are job-independent but depend on whether a job is early or tardy. The restricted version of the problem, where the common due date is smaller than a critical value, is known to be NP-complete. While the dynamic programming formulation runs out of memory for large problem instances, the depth-first branch-and-bound formulation runs slowly for large problems because it uses a tree search space. We suggest an algorithm that optimally solves large instances of the restricted version of the problem. The algorithm uses a graph search space. Unlike dynamic programming, it can output optimal solutions even when available memory is limited. It runs faster than the dynamic programming and depth-first branch-and-bound formulations and can solve much larger instances of the problem in reasonable time. New upper and lower bounds are proposed and used. Experimental findings are reported in detail.

Scope and purpose: This paper considers a class of single machine problems arising from scheduling jobs in a JIT environment. The objective is to minimize the total weighted earliness–tardiness penalties of jobs. We present a new algorithm and conduct extensive empirical runs showing that it performs much better than existing approaches in solving large instances of the problem.
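The objective described above is easy to state concretely. The sketch below is not the paper's graph-search algorithm; it is only a Python illustration of the weighted earliness–tardiness objective with a common due date and job-independent weights, plus a brute-force optimizer usable for tiny instances (all numbers invented):

```python
from itertools import permutations

def weighted_et(sequence, proc_times, due, alpha, beta):
    """Total weighted earliness-tardiness around a common due date:
    alpha per unit early, beta per unit tardy (job-independent weights)."""
    t, total = 0, 0
    for j in sequence:
        t += proc_times[j]                       # completion time of job j
        total += alpha * max(0, due - t) + beta * max(0, t - due)
    return total

def best_sequence(proc_times, due, alpha, beta):
    """Brute force over all sequences -- only feasible for tiny instances."""
    return min(permutations(range(len(proc_times))),
               key=lambda s: weighted_et(s, proc_times, due, alpha, beta))
```

For `proc_times = [2, 3, 4]`, `due = 5`, `alpha = 1`, `beta = 2`, the sequence `(0, 1, 2)` costs 11 while the optimum `(1, 0, 2)` costs 10; the paper's contribution is finding such optima for instances far beyond the reach of this enumeration.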

7.
In this paper, we study the job shop scheduling problem with the objective of minimizing the total weighted tardiness. We propose a hybrid shifting bottleneck-tabu search (SB-TS) algorithm by replacing the re-optimization step in the shifting bottleneck (SB) algorithm by a tabu search (TS). In terms of the shifting bottleneck heuristic, the proposed tabu search optimizes the total weighted tardiness for partial schedules in which some machines are currently assumed to have infinite capacity. In the context of tabu search, the shifting bottleneck heuristic features a long-term memory which helps to diversify the local search. We exploit this synergy to develop a state-of-the-art algorithm for the job shop total weighted tardiness problem (JS-TWT). The computational effectiveness of the algorithm is demonstrated on standard benchmark instances from the literature.

8.
From Requirement Descriptions in Propositional Logic to Formal Specifications as State Transition Diagrams
The growing scale and complexity of information processing systems call for formal specification methods that support the effective design of highly reliable systems. To this end, this paper proposes a new requirement description method for information processing systems based on propositional logic, describes the process of transforming propositional-logic requirement descriptions into state transition diagrams by means of Logical Petri Nets (LPN), and presents an algorithm that automatically generates the state transition diagram from the LPN.

9.
This article presents a novel variance-based harmony search algorithm (VHS) for solving optimization problems. VHS incorporates concepts borrowed from the invasive weed optimization technique to improve the performance of the harmony search algorithm (HS), eliminating the main problem of constant parameter settings in the recently proposed explorative HS. It uses the variance of the current population as well as the present solution vector to improvise the harmony memory. In addition, a dynamic pitch adjustment operator is used to avoid solution oscillation. The proposed algorithm is evaluated on 14 standard benchmark functions with various characteristics. Its performance is investigated and compared with classical HS, an improved version of HS, global-best HS, self-adaptive HS, explorative HS, and the recently proposed state-of-the-art gravitational search algorithm. Experimental results reveal that the proposed algorithm outperforms the above-mentioned approaches. The effects of scalability, noise, harmony memory size, and harmony memory consideration rate are also investigated. The proposed algorithm is then applied to a data clustering problem using four real-life datasets selected from the UCI machine learning repository. The results indicate that VHS-based clustering outperforms existing well-known clustering algorithms.
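For readers unfamiliar with the base algorithm, here is a minimal classical harmony search in Python, not the VHS variant: it shows the three improvisation ingredients (harmony memory consideration, pitch adjustment, random selection) that the paper modifies. All parameter values are common textbook defaults, not the paper's settings:

```python
import random

def harmony_search(obj, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=500, seed=1):
    """Minimal classical harmony search minimizing obj over [lo, hi]^dim."""
    rng = random.Random(seed)
    lo, hi = bounds
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    fit = [obj(h) for h in hm]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:              # harmony memory consideration
                x = hm[rng.randrange(hms)][d]
                if rng.random() < par:           # pitch adjustment
                    x += rng.uniform(-bw, bw) * (hi - lo)
            else:                                # random selection
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        f = obj(new)
        worst = max(range(hms), key=fit.__getitem__)
        if f < fit[worst]:                       # replace worst memory member
            hm[worst], fit[worst] = new, f
    best = min(range(hms), key=fit.__getitem__)
    return hm[best], fit[best]
```

VHS replaces the fixed `bw` and `par` above with operators driven by the population variance, which is exactly the constant-parameter problem the abstract refers to.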

10.
In this paper we revisit and extend the algorithm for the cyclic project scheduling problem originally proposed by Romanovskii (1967). While the algorithm was derived for fixed numerical data, we show how it can be extended to handle problems with interval data. We also propose a new algorithm for the cyclic scheduling problem with interval data that extends the parametric method developed by Megiddo (1979) and runs in strongly polynomial time.

11.
Several algorithms have been proposed in the literature for building decision trees (DT) for large datasets. However, almost all of them have memory restrictions, because they need to keep the whole training set, or a large part of it, in main memory; and those that avoid memory restrictions by choosing a subset of the training set either need extra time for this selection or depend on parameters that can be very difficult to determine. In this paper, we introduce a new algorithm that builds decision trees using fast splitting attribute selection (DTFS) for large datasets. The proposed algorithm builds a DT without storing the whole training set in main memory, has only one parameter, and is very stable with respect to it. Experimental results on both real and synthetic datasets show that our algorithm is faster than three of the most recent algorithms for building decision trees for large datasets, while achieving competitive accuracy.

12.
For the low-carbon flexible job shop scheduling problem (FJSP), a novel shuffled frog leaping algorithm (SFLA) is proposed to minimize total carbon emissions. The algorithm uses a memory to retain a certain number of the best solutions found during the search and adopts a population partitioning method based on both the population and the memory. New search strategies, such as the coordinated optimization of global search and local search, are applied to realize the search within memeplexes, and population reshuffling is eliminated to simplify the algorithm. With a hybrid genetic algorithm and a teaching–learning-based optimization algorithm as comparison algorithms, extensive simulation experiments verify that SFLA has strong search ability and is competitive for solving the low-carbon FJSP.

13.
Recently, several hybrid methods combining exact algorithms and heuristics have been proposed for solving hard combinatorial optimization problems. In this paper, we propose new iterative relaxation-based heuristics for the 0-1 Mixed Integer Programming problem (0-1 MIP) that generate a sequence of lower and upper bounds. The upper bounds are obtained from relaxations of the problem and refined iteratively by adding pseudo-cuts to the problem. The lower bounds are obtained by solving restricted problems generated by exploiting information from the relaxation and the memory of the search process. We propose a new semi-continuous relaxation (SCR) that partially relaxes the integrality constraints so as to force the variable values close to 0 or 1. Several variants of the new iterative semi-continuous-relaxation-based heuristic can be designed by choosing the procedure that updates the multiplier of the SCR. These heuristics are enhanced with a local search procedure to improve the feasible solutions found and a rounding procedure to restore feasibility where possible. Finally, we present computational results of the new methods on the multiple-choice multidimensional knapsack problem, for which even finding a feasible solution is NP-hard. The approach is evaluated on a set of problem instances from the literature and compared with the results of both the CPLEX solver and an efficient column-generation-based algorithm. The results show that our algorithms converge rapidly to good lower bounds and find new best-known solutions.

14.
Clustering is an unsupervised learning problem: a procedure that partitions data objects into clusters so that objects in the same cluster are similar to each other and dissimilar to objects in other clusters. Density-based clustering algorithms find clusters based on the density of data points in a region. DBSCAN, one such algorithm, can discover clusters of arbitrary shape and requires only two input parameters, and it has proved very effective for analyzing large and complex spatial databases. However, DBSCAN needs a large amount of memory and often has difficulties with high-dimensional data and with clusters of very different densities. The partitioning-based DBSCAN algorithm (PDBSCAN) was proposed to solve these problems, but PDBSCAN gives poor results when the data density is non-uniform, and, to some extent, both DBSCAN and PDBSCAN are sensitive to their initial parameters. In this paper, we propose a new hybrid algorithm based on PDBSCAN. We use a modified ant clustering algorithm (ACA) and design a new partitioning algorithm based on 'point density' (PD) in the data preprocessing phase. We name the new hybrid algorithm PACA-DBSCAN. Its performance is compared with DBSCAN and PDBSCAN on five data sets. Experimental results indicate the superiority of the PACA-DBSCAN algorithm.
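As background for the hybrid above, here is a textbook DBSCAN sketch in Python (not PACA-DBSCAN): a point with at least `min_pts` neighbors within radius `eps` is a core point, clusters grow by expanding from core points, and everything unreachable is noise. The example points are made up:

```python
def dbscan(points, eps, min_pts):
    """Returns one label per 2-D point: -1 for noise, 0..k-1 for clusters."""
    def neighbors(i):
        px, py = points[i]
        return [j for j, (qx, qy) in enumerate(points)
                if (px - qx) ** 2 + (py - qy) ** 2 <= eps ** 2]

    labels = [None] * len(points)
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nb = neighbors(i)
        if len(nb) < min_pts:
            labels[i] = -1                # noise (may later become a border point)
            continue
        labels[i] = cluster               # i is a core point: start a cluster
        seeds = list(nb)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster       # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb_j = neighbors(j)
            if len(nb_j) >= min_pts:      # j is also a core point: keep expanding
                seeds.extend(nb_j)
        cluster += 1
    return labels
```

On two tight blobs plus one isolated point, e.g. `[(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (5, 5)]` with `eps = 1.5` and `min_pts = 3`, the blobs form clusters 0 and 1 and the isolated point is labeled noise.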

15.
In this article we employ utility theory to determine the new state of working memory after a group of rules has fired in parallel in a fuzzy expert system. This is argued to be analogous to using utility theory in economics to determine the best action in decision making under risk. A class of utility functions is described for computing the utility of information in working memory, similar to computing the utility of wealth in economics. We discuss a memory update algorithm (a fuzzy truth maintenance system) that produces the unique undominated state of working memory after a group of rules has executed in the parallel mode of operation. The fuzzy expert system is called risk-averse when it uses this memory update algorithm. We call the system risk-seeking, or risk-taking, when certain actions are allowed to operate outside the memory update algorithm. Experimenting with a risk-taking expert system is an exciting new idea which will be available in our new fuzzy expert system shell FESS II.

16.
It is well known that the delay-constrained least-cost (DCLC) routing problem is NP-complete, and hence various heuristic methods have been proposed for it. However, these heuristics scale poorly as the network grows. In this paper we propose a new method based on Markov Decision Process (MDP) theory and hierarchical routing to address the scalability of the DCLC routing problem. We construct a new two-level hierarchical MDP model and apply an infinite-horizon discounted cost model at the upper level for end-to-end inter-domain link selection. Since the infinite-horizon discounted cost model is independent of the network scale, the scalability problem is resolved. With the proposed model, we further give an algorithm for computing the optimal policy that yields the DCLC route. Simulation results show that the proposed method improves scalability significantly.
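The paper's routing-specific model is not reproduced here, but the infinite-horizon discounted machinery it relies on is standard. Below is a minimal value iteration sketch in Python on a hypothetical toy MDP (maximizing discounted reward; the paper works with discounted cost, which is the same computation with `min` and costs):

```python
def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Infinite-horizon discounted value iteration.
    P[s][a] = list of (probability, next_state); R[s][a] = immediate reward.
    Returns the optimal value of each state."""
    n = len(P)
    V = [0.0] * n
    while True:
        newV = []
        for s in range(n):
            # Bellman optimality backup: best action under the current values
            newV.append(max(R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                            for a in range(len(P[s]))))
        if max(abs(x - y) for x, y in zip(newV, V)) < tol:
            return newV
        V = newV
```

For example, a two-state MDP where state 0 can either loop (reward 0) or move to an absorbing state 1 (reward 1) has optimal values V = [1, 0] under gamma = 0.5. The key property the abstract exploits is that this computation depends on the number of states in the upper-level model, not on the size of the underlying network.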

17.
False sharing reduces system performance in distributed shared memory systems. A major impediment to solving the problem of false sharing has been that no satisfactory definition for this problem exists. In this paper we provide definitions for several types of degenerate sharing, including false sharing. We also provide an algorithm that computes the cost of unnecessary coherence (false coherence) in a shared memory system using a single memory trace. Finally, we provide a counterintuitive example demonstrating that the elimination of degenerate sharing can reduce performance.

18.
The important challenge of evaluating XPath queries over XML streams has sparked much interest in the past few years. A number of algorithms have been proposed, supporting wider fragments of the query language and exhibiting better performance and memory utilization. Nevertheless, all the algorithms known to date use a prohibitively large amount of memory for certain types of queries. A natural question is whether this memory bottleneck is inherent or just an artifact of the proposed algorithms.

In this paper we initiate the first systematic theoretical study of lower bounds on the amount of memory required to evaluate XPath queries over XML streams. We present a general lower bound technique that, given a query, specifies the minimum amount of memory that any algorithm evaluating the query on a stream must use. The lower bounds are stated in terms of new graph-theoretic properties of queries, and the proofs are based on tools from communication complexity.

We then exploit insights learned from the lower bounds to obtain a new algorithm for XPath evaluation on streams. The algorithm uses space close to the optimum. Our algorithm deviates from the standard paradigm of using automata or transducers, thereby avoiding the need to store large transition tables.

19.
A Memory-Conflict-Free Parallel Three-Table Algorithm for the Knapsack Problem
The knapsack problem is a classic NP-hard problem with important applications in information cryptography and number theory. Applying the design idea of the well-known two-table algorithm for the knapsack problem to a three-table search, and using a divide-and-conquer strategy together with a memory-conflict-free optimal merging algorithm, we propose a parallel three-table algorithm based on the EREW-SIMD shared-memory model. The algorithm solves an n-dimensional knapsack problem in O(2^(3n/8)) time using O(2^(n/4)) processing units and O(2^(3n/8)) shared memory. A comparison with results in the existing literature shows that the proposed algorithm clearly improves on them: it is a memory-conflict-free parallel algorithm that solves the knapsack problem in less than O(2^(n/2)) time on less than O(2^(n/2)) hardware resources.
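The parallel three-table algorithm itself is not reproduced here; as background, the following Python sketch shows the sequential two-table (meet-in-the-middle) idea it builds on, applied to subset sum: split the items in half, tabulate each half's subset sums, and look for a pair of partial sums hitting the target, giving O(2^(n/2)) time and space instead of O(2^n). The instance is made up:

```python
from itertools import combinations

def two_table_subset_sum(weights, target):
    """Meet-in-the-middle subset sum: returns item indices summing to
    target, or None if no subset does."""
    half = len(weights) // 2
    left, right = weights[:half], weights[half:]

    def sums(items):
        # Map each achievable subset sum of `items` to one witness subset.
        table = {}
        for r in range(len(items) + 1):
            for combo in combinations(range(len(items)), r):
                s = sum(items[i] for i in combo)
                table.setdefault(s, combo)
        return table

    lt, rt = sums(left), sums(right)
    for s, combo in lt.items():
        rest = target - s
        if rest in rt:
            # Map right-half indices back to original positions.
            return sorted(combo) + [half + i for i in rt[rest]]
    return None
```

The three-table variant splits the items three ways, which is what creates the opportunity for the O(2^(n/4))-processor parallel search described above.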

20.
A new hybrid adaptive algorithm based on particle swarm optimization (PSO) is designed, developed, and applied to the high school timetabling problem. The proposed PSO algorithm is used to create feasible and efficient timetables for high schools in Greece. Experiments with real-world data from different high schools were conducted to show the efficiency of the proposed algorithm. In addition, the algorithm was compared with four other effective techniques from the literature to demonstrate its efficiency and superior performance; to ensure a fair comparison, the exact same input instances were used. As the experimental results show, the proposed PSO algorithm outperforms other existing attempts to solve the same problem in most cases.
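The paper's hybrid adaptive variant and its timetabling encoding are not shown here; for reference, this is a textbook global-best PSO in Python on a continuous test function, with common default parameters (all values illustrative, not the paper's):

```python
import random

def pso(obj, dim, bounds, n_particles=20, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=2):
    """Minimal global-best PSO minimizing obj over [lo, hi]^dim."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # each particle's best position
    pbest_f = [obj(p) for p in pos]
    g = min(range(n_particles), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g][:], pbest_f[g]     # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = obj(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

A timetabling application replaces the continuous position update with a discrete encoding of lesson assignments plus a repair step for hard constraints, which is where the paper's hybrid adaptive machinery comes in.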
