Similar Documents
1.
We study the online preemptive scheduling of intervals and jobs (with restarts). Each interval or job has an arrival time, a deadline, a length and a weight. The objective is to maximize the total weight of completed intervals or jobs. While the deterministic case for intervals was settled long ago, the randomized case remains open. In this paper we first give a 2-competitive randomized algorithm for the case of equal-length intervals. The algorithm is barely random in the sense that it randomly chooses between two deterministic algorithms at the beginning and then sticks with that choice thereafter. We then extend the algorithm to cover several other cases of interval scheduling, including monotone instances, C-benevolent instances and D-benevolent instances, achieving the same competitive ratio. These algorithms are surprisingly simple but have the best competitive ratio against all previous (fully or barely) randomized algorithms. Next we extend the idea to give a 3-competitive algorithm for equal-length jobs. Finally, we prove a lower bound of 2 on the competitive ratio of all barely random algorithms that choose between two deterministic algorithms for scheduling equal-length intervals (and hence jobs). A preliminary version of this paper appeared in Fung et al. (The 6th International Workshop on Approximation and Online Algorithms, vol. 5426, pp. 53–66, 2008).
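The "barely random" structure described above is simple enough to sketch. The following is a minimal illustration of that one-coin-flip pattern, assuming a hypothetical scheduler interface (on_arrival / completed_weight); it is not the paper's actual pair of algorithms.

```python
import random

def barely_random_schedule(requests, algo_a, algo_b):
    """Barely random online scheduler: a single coin flip at the start
    picks one of two deterministic algorithms, and every subsequent
    decision is made deterministically by the chosen algorithm.
    algo_a and algo_b are stateful objects exposing a hypothetical
    on_arrival(request) / completed_weight() interface."""
    chosen = random.choice((algo_a, algo_b))  # the only random choice made
    for req in requests:                      # intervals/jobs arrive online
        chosen.on_arrival(req)
    return chosen.completed_weight()
```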

2.
We address the tradeoff between the competitive ratio and the resources used by randomized on-line algorithms for caching. Two algorithms reported in the literature that achieve the optimal ratio H_k require a lot of memory and perform extensive computation at each step. On the other hand, a very simple algorithm called RMARK has competitive ratio 2H_k − 1, within a factor of 2 of the optimum. A natural question that arises here is whether there is a tradeoff between simplicity and the competitive ratio. In particular, is it possible to achieve a competitive ratio better than 2H_k − 1 with a simple algorithm like RMARK?

We first consider marking algorithms that are natural generalizations of RMARK, and we prove that, for any ε>0, there is no randomized marking algorithm for caching with competitive ratio (2−ε)H_k. Thus RMARK is essentially optimal among marking algorithms.

Another model of simple caching algorithms is that of trackless algorithms. These are algorithms that store no information about items that are not in the cache. It is known that, for k=2, there is no randomized trackless algorithm for caching with ratio better than … . The trivial upper bound is 2, achieved even by the deterministic algorithms LRU and FIFO. We narrow this gap by giving a trackless randomized algorithm with competitive ratio … .
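For concreteness, here is a minimal sketch of a randomized marking cache in the style of RMARK: mark a page whenever it is requested, evict a uniformly random unmarked page on a fault, and start a new phase (unmarking everything) once all cached pages are marked. The class name and counters are mine.

```python
import random

class RandomizedMarkingCache:
    """RMARK-style randomized marking (sketch)."""

    def __init__(self, k):
        self.k = k            # cache capacity
        self.cache = set()    # pages currently in the cache
        self.marked = set()   # pages marked in the current phase
        self.faults = 0

    def request(self, page):
        if page in self.cache:              # hit: just mark the page
            self.marked.add(page)
            return
        self.faults += 1                    # fault
        if len(self.cache) == self.k:
            if self.marked == self.cache:   # all pages marked: new phase
                self.marked.clear()
            # evict a uniformly random unmarked page
            victim = random.choice(tuple(self.cache - self.marked))
            self.cache.discard(victim)
        self.cache.add(page)
        self.marked.add(page)
```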


3.
We consider the on-line version of the maximum vertex-disjoint path problem when the underlying network is a tree. In this problem, a sequence of requests arrives in an on-line fashion, where every request is a path in the tree. The on-line algorithm may accept a request only if it does not share a vertex with a previously accepted request. The goal is to maximize the number of accepted requests. It is known that no on-line algorithm can have a competitive ratio better than Ω(log n) for this problem, even if the algorithm is randomized and the tree is simply a line. Obviously, it is desirable to beat the logarithmic lower bound. Adler and Azar (Proc. of the 10th ACM-SIAM Symposium on Discrete Algorithms, pp. 1–10, 1999) showed that if preemption is allowed (namely, previously accepted requests may be discarded, but once a request is discarded it can no longer be accepted), then there is a randomized on-line algorithm that achieves a constant competitive ratio on the line. In the current work we present a randomized on-line algorithm with preemption that has a constant competitive ratio on any tree. Our results carry over to the related problem of maximizing the number of accepted paths subject to a capacity constraint on vertices (in the disjoint path problem this capacity is 1). Moreover, if the available capacity is at least 4, randomization is not needed and our on-line algorithm becomes deterministic.

4.
We explore the revenue capabilities of truthful, monotone ("fair") allocation and pricing functions for resource-constrained auction mechanisms within a general framework that encompasses unlimited-supply auctions, knapsack auctions, and auctions with general non-decreasing convex production cost functions. We study and compare the revenue obtainable under each fair pricing scheme to the profit obtained by the ideal omniscient multi-price auction. We show that for capacitated knapsack auctions, no constant pricing scheme can achieve any approximation to the optimal profit, but proportional pricing is as powerful as general monotone pricing. In addition, for auction settings with arbitrary bounded non-decreasing convex production cost functions, we present a proportional pricing mechanism that achieves a poly-logarithmic approximation. Unlike existing approaches, all of our mechanisms have fair (monotone) prices, and all of our competitive analysis is with respect to optimal profit extraction.

5.
The relative worst-order ratio, a relatively new measure of the quality of on-line algorithms, is extended and applied to the paging problem. We obtain results significantly different from those obtained with the competitive ratio. First, we devise a new deterministic paging algorithm, Retrospective-LRU, and show that, according to the relative worst-order ratio and in contrast with the competitive ratio, it performs better than LRU. Our experimental results, though not conclusive, are slightly positive and leave open the possibility that Retrospective-LRU or similar algorithms may be worth considering in practice. Furthermore, the relative worst-order ratio (and practice) indicates that LRU is better than the marking algorithm FWF, even though all deterministic marking algorithms have the same competitive ratio. Look-ahead is also shown to be a significant advantage under this new measure, whereas the competitive ratio does not reflect that look-ahead can be helpful. Finally, with the relative worst-order ratio, as with the competitive ratio, no deterministic marking algorithm can be significantly better than LRU, but the randomized algorithm MARK is better than LRU.
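The LRU-versus-FWF comparison is easy to reproduce empirically. Below is a minimal fault-counting sketch of both algorithms (helper names mine); running them on concrete request sequences makes visible the gap that the relative worst-order ratio formalizes, even though both have the same competitive ratio.

```python
from collections import OrderedDict

def lru_faults(requests, k):
    """Fault count of LRU with cache size k."""
    cache, faults = OrderedDict(), 0
    for p in requests:
        if p in cache:
            cache.move_to_end(p)           # refresh recency on a hit
        else:
            faults += 1
            if len(cache) == k:
                cache.popitem(last=False)  # evict least recently used
            cache[p] = True
    return faults

def fwf_faults(requests, k):
    """Fault count of Flush-When-Full: on a fault with a full cache,
    evict every page."""
    cache, faults = set(), 0
    for p in requests:
        if p not in cache:
            faults += 1
            if len(cache) == k:
                cache.clear()              # flush the entire cache
            cache.add(p)
    return faults
```

For example, on the sequence [1, 2, 3, 1, 2, 4, 1, 2, 3] with k = 3, LRU faults 5 times while FWF faults 7 times.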

6.
Blum, Chawla, Kalai. Algorithmica, 2003, 36(3):249–260
Adaptive data structures form a central topic of on-line algorithms research. The area of Competitive Analysis began with the results of Sleator and Tarjan showing that splay trees achieve static optimality for search trees, and that Move-to-Front is constant competitive for the list update problem [ST1], [ST2]. In a parallel development, powerful algorithms have been developed in Machine Learning for problems of on-line prediction [LW], [FS]. This paper is inspired by the observation made in [BB] that if computational decision-making costs are not considered, then these "weighted experts" techniques from Machine Learning allow one to achieve a 1+ε ratio against the best static object in hindsight for a wide range of data structure problems. In this paper we give two results. First, we show that for the case of lists, we can achieve a 1+ε ratio with respect to the best static list in hindsight, by a simple efficient algorithm. This algorithm can then be combined with existing results to achieve good static and dynamic bounds simultaneously. Second, for trees, we show a (computationally inefficient) algorithm that achieves what we call "dynamic search optimality": dynamic optimality if we allow the on-line algorithm to make free rotations after each request. We hope this is a step towards solving the longstanding open problem of achieving true dynamic optimality for trees.
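The "weighted experts" machinery referenced here ([LW], [FS]) is the multiplicative-weights method. A generic one-round sketch of that primitive follows (my own function and parameter names; this is the building block, not the paper's list or tree algorithm):

```python
import math
import random

def hedge_round(weights, losses, eta):
    """One round of multiplicative weights ('weighted experts'):
    sample an expert with probability proportional to its weight,
    then multiply each weight by exp(-eta * loss). Over T rounds the
    sampled experts' total loss is within roughly sqrt(T log n) of
    the best single expert, which is what yields 1+epsilon ratios
    against the best static object in hindsight."""
    total = sum(weights)
    r, acc, choice = random.random() * total, 0.0, len(weights) - 1
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            choice = i               # expert sampled for this round
            break
    new_weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    return choice, new_weights
```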

7.
Tradeoffs between time complexity and solution optimality are important when selecting algorithms for an NP-hard problem in different applications. The distinction between the theoretical worst-case bound and the actual solution quality on realistic instances of an NP-hard problem is also a factor in selecting algorithms in practice. We consider the problem of partitioning a sequence of n distinct numbers into the minimum number of monotone (increasing or decreasing) subsequences. This problem is NP-hard, and the number of monotone subsequences can reach ⌈√(2n+1/4) − 1/2⌉ in the worst case. We introduce a new algorithm, a modified version of the Yehuda-Fogel algorithm, that computes a solution of no more than ⌈√(2n+1/4) − 1/2⌉ monotone subsequences in O(n^1.5) time. We then perform a comparative experimental study of three algorithms: a known approximation algorithm with approximation ratio 1.71 and time complexity O(n^3), a known greedy algorithm with time complexity O(n^1.5 log n), and our new modified Yehuda-Fogel algorithm. Our results show that the solutions computed by the greedy algorithm and the modified Yehuda-Fogel algorithm are close to those computed by the approximation algorithm, even though the theoretical worst-case error bounds of these two algorithms are not proved to be within a constant factor of the optimal solution. Our study indicates that for practical use the greedy algorithm and the modified Yehuda-Fogel algorithm can be good choices when running time is a major concern.
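A worked check of the worst-case bound, plus a naive greedy baseline, may help. This is not the modified Yehuda-Fogel algorithm or the 1.71-approximation from the abstract, only a first-fit greedy (names mine) that always produces a valid monotone partition:

```python
import math

def worst_case_bound(n):
    """Worst-case number of monotone subsequences needed for n distinct
    numbers: ceil(sqrt(2n + 1/4) - 1/2); for n = 10 this gives 4."""
    return math.ceil(math.sqrt(2 * n + 0.25) - 0.5)

def first_fit_monotone_partition(seq):
    """First-fit greedy: append each number to the first subsequence it
    keeps monotone, otherwise open a new one. The direction d is +1
    (increasing), -1 (decreasing), or 0 (length one, still undecided)."""
    piles = []                              # each pile is [values, d]
    for x in seq:
        for pile in piles:
            vals, d = pile
            if d >= 0 and x > vals[-1]:     # extend as increasing
                vals.append(x)
                pile[1] = 1
                break
            if d <= 0 and x < vals[-1]:     # extend as decreasing
                vals.append(x)
                pile[1] = -1
                break
        else:
            piles.append([[x], 0])          # no pile fits: open a new one
    return [vals for vals, _ in piles]
```

For instance, first_fit_monotone_partition([3, 1, 4, 2, 5]) returns [[3, 1], [4, 2], [5]], a partition into three monotone subsequences.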

8.
A. Avidor, Y. Azar, J. Sgall. Algorithmica, 2001, 29(3):422–441
We consider the on-line load balancing problem where there are m identical machines (servers) and a sequence of jobs. The jobs arrive one by one and must be assigned to one of the machines in an on-line fashion. The goal is to minimize the sum (over all machines) of the squares of the loads, instead of the traditional maximum load. We show that for the sum of the squares the greedy algorithm performs within 4/3 of the optimum, and no on-line algorithm achieves a better competitive ratio. Interestingly, we show that the performance of Greedy is not monotone in the number of machines. More specifically, the competitive ratio is 4/3 for any number of machines divisible by 3 but strictly less than 4/3 in all other cases (although it approaches 4/3 for a large number of machines). To prove that Greedy is optimal, we show a lower bound of 4/3 for any algorithm for three machines. Surprisingly, we provide a new on-line algorithm that performs within 4/3 − δ of the optimum, for some fixed δ>0, for any sufficiently large number of machines. This implies that the asymptotic competitive ratio of our new algorithm is strictly better than the competitive ratio of any possible on-line algorithm. Such a phenomenon is not known to occur for the classic maximum load problem. Minimizing the sum of the squares is equivalent to minimizing the load vector with respect to the l_2 norm. We extend our techniques and analyze the exact competitive ratio of Greedy with respect to the l_p norm. This ratio turns out to be 2 − Θ((ln p)/p). We show that Greedy is optimal for two machines but design an algorithm whose asymptotic competitive ratio is better than that of Greedy. Received January 2, 1998; revised December 12, 1998.
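The greedy rule analyzed here has a one-line characterization: adding a job j to a machine with load L increases the sum of squares by (L+j)² − L² = 2Lj + j², so Greedy simply places each job on a least-loaded machine. A minimal sketch:

```python
def greedy_sum_of_squares(jobs, m):
    """Assign each arriving job to a currently least-loaded machine,
    which minimizes the increase 2*L*j + j^2 in the sum of squared
    loads; return the final objective value."""
    loads = [0.0] * m
    for j in jobs:
        i = min(range(m), key=loads.__getitem__)  # least-loaded machine
        loads[i] += j
    return sum(l * l for l in loads)
```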

9.
We present a systematic comparison of hybrid evolutionary algorithms (HEAs) that independently use six combinations of three crossover operators and two population updating strategies for solving the single machine scheduling problem with sequence-dependent setup times. Experiments show the competitive performance of the combination of the linear order crossover operator and the similarity-and-quality based population updating strategy. Applying the selected HEA to 120 public benchmark instances of the single machine scheduling problem with sequence-dependent setup times, minimizing the total weighted tardiness objective widely used in the literature, we achieve highly competitive results compared with the exact algorithm and other state-of-the-art metaheuristic algorithms. We also apply the selected HEA in its original form to the 64 unweighted public benchmark instances. Our HEA improves the previous best known result for one instance and matches the optimal or best known results for the remaining 63 instances in reasonable time.

10.
Cohen et al. [5] recently initiated the theoretical study of connection caching in the world-wide web. They extensively studied uniform connection caching, where the establishment cost is uniform for all connections [5], [6]. They showed that ordinary paging algorithms can be used to derive algorithms for uniform connection caching, and analyzed various algorithms such as Belady's rule, LRU and Marking strategies. In particular, in [5] Cohen et al. showed that LRU yields a (2k−1)-competitive algorithm, where k is the size of the largest cache in the network. In [6] they investigated Marking algorithms with different types of communication among nodes and presented deterministic k-competitive algorithms. In this paper we study generalized connection caching, also introduced in [5], where connections can incur varying establishment costs. This model is reasonable because the cost of establishing a connection depends, for instance, on the distance of the nodes to be connected and on the congestion in the network. Algorithms for ordinary weighted caching can be used to derive algorithms for generalized connection caching. We present tight or nearly tight analyses of the performance achieved by the currently known weighted caching algorithms when applied to generalized connection caching. In particular we give online algorithms that achieve an optimal competitive ratio of k. Our deterministic algorithm uses extra communication while maintaining open connections. We develop a generalized algorithm that trades communication for performance and achieves a competitive ratio of (1+ε)k, for any 0<ε≤1, using at most … bits of communication on each open link. Additionally we consider two extensions of generalized connection caching where (1) connections have time-out values, or (2) the establishment cost of connections is asymmetric. We show that the performance ratio of our algorithms is preserved in scenario (1). In the case of (2) we derive nearly tight upper and lower bounds on the best possible competitiveness.
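As one concrete example of "algorithms for ordinary weighted caching" that carry over, here is a sketch of Landlord, a standard k-competitive weighted-caching strategy (all identifiers mine). The paper's connection-caching algorithms additionally coordinate the two endpoints of a connection, which this sketch omits.

```python
def landlord(requests, cost, k):
    """Landlord for weighted caching with unit-size items: each cached
    item holds credit; on a fault with a full cache, drain the minimum
    remaining credit from every item, evict a zero-credit item, and
    admit the new item with credit equal to its establishment cost.
    On a hit the item's credit may be restored up to its cost."""
    credit, total = {}, 0.0
    for r in requests:
        if r in credit:
            credit[r] = cost[r]          # refresh credit on a hit
            continue
        total += cost[r]                 # pay the establishment cost
        if len(credit) == k:
            delta = min(credit.values())
            for item in credit:          # drain everyone by the minimum
                credit[item] -= delta
            victim = next(i for i, c in credit.items() if c <= 0.0)
            del credit[victim]           # evict a zero-credit item
        credit[r] = cost[r]
    return total
```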

11.
We study the matroid secretary problems with submodular valuation functions. In these problems, the elements arrive in random order. When one element arrives, we have to make an immediate and irrevocable decision on whether to accept it or not. The set of accepted elements must form an independent set in a predefined matroid. Our objective is to maximize the value of the accepted elements. In this paper, we focus on the case that the valuation function is a non-negative and monotonically non-decreasing submodular function. We introduce a general algorithm for such submodular matroid secretary problems. In particular, we obtain constant competitive algorithms for the cases of laminar matroids and transversal matroids. Our algorithms can be further applied to any independent set system defined by the intersection of a constant number of laminar matroids, while still achieving constant competitive ratios. Notice that laminar matroids generalize uniform matroids and partition matroids. On the other hand, when the underlying valuation function is linear, our algorithm achieves a competitive ratio of 9.6 for laminar matroids, which significantly improves the previous result.
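For readers new to the area, the simplest special case (a rank-1 uniform matroid with a linear valuation) is the classical secretary problem, solved by the 1/e threshold rule sketched below; the paper's algorithms generalize this sample-then-select pattern to laminar and transversal matroids with submodular values.

```python
import math
import random

def classical_secretary(values):
    """Classical 1/e rule: observe the first n/e elements without
    accepting anything, then irrevocably accept the first element
    that beats everything seen so far (falling back to the last)."""
    n = len(values)
    cut = int(n / math.e)
    best_seen = max(values[:cut], default=-math.inf)
    for v in values[cut:]:
        if v > best_seen:
            return v                     # accept and stop
    return values[-1] if values else None
```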

13.
傅汤毅. 计算机应用研究 (Application Research of Computers), 2021, 38(12):3678–3682
The constrained competitive facility location problem is a classic NP-hard problem in combinatorial optimization; existing algorithms for it either fail to find an optimal solution or solve it slowly. To address these shortcomings, this paper first modifies the classic problem and builds a new mathematical model. It then studies the mathematical properties of the model and, based on these properties, proposes an upper-and-lower-bound algorithm and order-reduction sub-algorithms that reduce the problem, shrinking its search space; the reduction is applied both to single elements and in batches. On this basis, a backtracking sub-algorithm is designed to find an optimal solution. Finally, two examples illustrate the principle of the algorithm more clearly; the results show that it finds an optimal solution quickly.
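The structure described — bound-based reduction plus a backtracking search — can be shown on a toy model. The sketch below picks at most p items to maximize profit; it only illustrates the incumbent/bound pruning skeleton, not the paper's competitive facility-location model (all names are mine).

```python
def branch_and_bound(profits, p):
    """Backtracking with bound-based reduction on a toy selection
    problem: open at most p facilities to maximize total profit.
    Any branch whose optimistic bound cannot beat the incumbent is
    cut, shrinking the search space in the spirit of the reduction
    sub-algorithms described in the abstract."""
    profits = sorted(profits, reverse=True)
    best = 0

    def optimistic(i, value, slots):
        # upper bound: add the largest remaining positive profits
        return value + sum(x for x in profits[i:i + slots] if x > 0)

    def search(i, value, slots):
        nonlocal best
        best = max(best, value)                       # update incumbent
        if i == len(profits) or slots == 0:
            return
        if optimistic(i, value, slots) <= best:
            return                                    # prune this subtree
        search(i + 1, value + profits[i], slots - 1)  # open facility i
        search(i + 1, value, slots)                   # skip facility i

    search(0, 0, p)
    return best
```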

14.
In the graph exploration problem, a searcher explores the whole node set of an unknown graph. We assume that all the unknown graphs are undirected and connected. The searcher is not aware of the existence of an edge until he/she visits one of its endpoints. The searcher's task is to visit all the nodes and go back to the starting node along a tour that is as short as possible. One of the simplest strategies is the nearest neighbor algorithm (NN), which always moves to the unvisited node nearest to the searcher's current position. The weighted NN (WNN) is an extension of NN that chooses the next node to visit using a weighted distance. It is known that WNN with weight 3 is 16-competitive for planar graphs. In this paper we prove that NN achieves a competitive ratio of 1.5 for cycles. In addition, we show that this analysis of the competitive ratio of NN is tight by providing an instance on which the bound of 1.5 is attained, and that NN is the best for cycles among WNNs with all possible weights. Furthermore, we prove that no online algorithm for exploring cycles is better than 1.25-competitive.
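The NN rule itself is the familiar nearest-neighbor tour. The sketch below assumes the whole distance matrix is known up front, so it illustrates only the selection rule; in the abstract's online model the searcher learns an edge only upon visiting one of its endpoints.

```python
def nearest_neighbor_tour(dist, start=0):
    """Nearest-neighbor tour: repeatedly move to the closest unvisited
    node, then return to the start. dist is a full n x n distance
    matrix, an offline simplification of the exploration setting."""
    n = len(dist)
    pos, total, order = start, 0.0, [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda v: dist[pos][v])
        total += dist[pos][nxt]
        unvisited.remove(nxt)
        order.append(nxt)
        pos = nxt
    total += dist[pos][start]            # close the tour
    return order, total
```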

15.
This paper addresses the multiobjective vehicle routing problem with time windows (MOVRPTW). The objectives are to minimize the number of vehicles and the total distance simultaneously. Our approach is based on an evolutionary algorithm and aims to find the set of Pareto-optimal solutions. We incorporate problem-specific knowledge into the genetic operators: the crossover operator exchanges one of the best routes (the route with the shortest average distance), the relocation mutation operator relocates a large number of customers in non-decreasing order of the length of their time windows, and the split mutation operator breaks the longest-distance link in the routes. Our algorithm is compared with 10 existing algorithms on standard 100-customer and 200-customer problem instances. It shows competitive performance and updates more than 1/3 of the net set of non-dominated solutions.

16.
Timetabling is the problem of scheduling a set of events while satisfying various constraints. In this paper, we develop and study the performance of an evolutionary algorithm designed to solve a specific variant of the timetabling problem. Our aim here is twofold: to develop a competitive algorithm, but more importantly, to investigate the applicability of evolutionary operators to timetabling. To this end, the introduced algorithm is tested on a benchmark set. Comparison with other algorithms shows that it achieves better results on some, but not all, instances, revealing both strong and weak points. To further the study, more comprehensive tests are performed in conjunction with another evolutionary algorithm that uses strictly group-based operators. Our analysis of the empirical results leads us to question single-level selection and to propose, in its place, a multi-level alternative.

17.
We consider randomized algorithms for on-line scheduling on identical machines. For two machines, a randomized algorithm achieving a competitive ratio of 4/3 was found by Bartal et al. (1995). Seiden presented a randomized algorithm which achieves competitive ratios of 1.55665, 1.65888, 1.73376, 1.78295, and 1.81681 for m=3,4,5,6,7, respectively (Seiden, 2000). A barely random algorithm is one which is a distribution over a constant number of deterministic strategies. The algorithms of Bartal et al. and Seiden are not barely random; in fact, these algorithms potentially make a random choice for each job scheduled. We present the first barely random on-line scheduling algorithms. In addition, our algorithms use asymptotically less space and time than the previous algorithms.

19.
As the market demand for mass customization becomes ever more pronounced, the Seru production system (SPS) has emerged and is becoming a focus of both research and application. This paper studies the online parallel scheduling of Serus with resource conflicts: a number of Serus, constructed in response to dynamic demand, must be arranged on a limited set of positions, and the construction order and construction times of the Serus are decided so as to minimize the total weighted completion time. Starting from the average delayed shortest weighted processing time (AD-SWPT) algorithm, whose competitive ratio is not constant, a tuning parameter is introduced to obtain an online parallel Seru scheduling algorithm with a constant competitive ratio for the conflict-free case. A conflict-handling mechanism is then added, yielding an online parallel Seru scheduling algorithm for the case with resource conflicts, the αAD-I (α-average delayed shortest weighted processing time-improved) algorithm; for special instances, an instance-reduction argument proves that its competitive ratio equals that of the conflict-free case. Finally, experiments verify the advantage of the algorithm on both special and general instances in a fluctuating market environment.
