Similar Literature
20 similar documents found.
1.
We analyze approximation algorithms for several variants of the traveling salesman problem with multiple objective functions. First, we consider the symmetric TSP (STSP) with γ-triangle inequality. For this problem, we present a deterministic polynomial-time approximation algorithm and a randomized approximation algorithm, both with ratios that depend on γ. In particular, we obtain a 2+ε approximation for multi-criteria metric STSP. Then we show that multi-criteria cycle cover problems admit fully polynomial-time randomized approximation schemes. Based on these schemes, we present randomized approximation algorithms for STSP with γ-triangle inequality and ATSP with γ-triangle inequality (both with γ-dependent ratios), for STSP with weights one and two (ratio 4/3), and for ATSP with weights one and two (ratio 3/2). A preliminary version of this work was presented at the 4th Workshop on Approximation and Online Algorithms (WAOA 2006) (Lecture Notes in Computer Science, vol. 4368, pp. 302–315, 2007). B. Manthey is supported by the Postdoc-Program of the German Academic Exchange Service (DAAD). He is on leave from Saarland University and did part of this work at the Institute for Theoretical Computer Science of the University of Lübeck, supported by DFG research grant RE 672/3, and at the Department of Computer Science at Saarland University.

2.
Chvátal–Gomory cuts are among the most well-known classes of cutting planes for general integer linear programs (ILPs). When the constraint multipliers are restricted to 0 or 1/2, such cuts are known as {0,1/2}-cuts. It has been proven by Caprara and Fischetti (Math. Program. 74:221–235, 1996) that separation of {0,1/2}-cuts is NP-hard. In this paper, we study ways to separate {0,1/2}-cuts effectively in practice. We propose a range of preprocessing rules to reduce the size of the separation problem. The core of the preprocessing is a Gaussian-elimination-like procedure. To separate the most violated {0,1/2}-cut, we formulate the (reduced) problem as an integer linear program. Some simple heuristic separation routines complete the algorithmic framework. Computational experiments on benchmark instances show that combining preprocessing with exact and/or heuristic separation is an effective way to generate strong generic cutting planes for integer linear programs and to reduce the overall computation times of state-of-the-art ILP solvers.
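For context, a minimal statement of the cut family discussed above, using only the standard textbook definitions (not this paper's separation procedure): every nonnegative multiplier vector yields a Chvátal–Gomory cut, and restricting the multipliers to {0, 1/2} gives the {0,1/2}-cuts.

```latex
% Chvátal–Gomory cuts and the {0,1/2} restriction (standard definitions).
% Given integral data $A \in \mathbb{Z}^{m \times n}$, $b \in \mathbb{Z}^m$
% and the ILP $\max\{ c^T x : Ax \le b,\ x \in \mathbb{Z}^n_{\ge 0} \}$,
% every $u \ge 0$ gives a valid inequality for the integer hull:
\[
  \lfloor u^T A \rfloor\, x \;\le\; \lfloor u^T b \rfloor .
\]
% The {0,1/2}-cuts are the special case $u \in \{0, \tfrac12\}^m$;
% separating a maximally violated such cut is NP-hard
% (Caprara and Fischetti, 1996).
```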

3.
There has been much recent interest in the use of the earliest-deadline-first (EDF) algorithm for scheduling soft real-time sporadic task systems on identical multiprocessors. In hard real-time systems, a significant disparity exists between EDF-based schemes and Pfair scheduling: on M processors, the worst-case schedulable utilization for all known EDF variants is approximately M/2, whereas it is M for optimal Pfair algorithms. This is unfortunate because EDF-based algorithms entail lower scheduling and task-migration overheads. However, such a disparity in schedulability can be alleviated by easing the requirement that all deadlines be met, which may be sufficient for soft real-time systems. In particular, in recent work, we have shown that if task migrations are not restricted, then EDF (i.e., global EDF) can ensure bounded tardiness for a sporadic task system with no restrictions on total utilization. Unrestricted task migrations under global EDF may be unappealing for some systems, but if migrations are forbidden entirely, then bounded tardiness cannot be guaranteed. In this paper, we address the issue of striking a balance between task migrations and system utilization by proposing an EDF-based algorithm that treads a middle path by restricting, but not eliminating, task migrations. Specifically, under the proposed algorithm, the ability to migrate is required for at most M−1 tasks, and it is sufficient that every such task migrate between two processors and at job boundaries only. The proposed algorithm, like global EDF, can ensure bounded tardiness to a sporadic task system as long as the available processing capacity is not exceeded, but, unlike global EDF, may require that per-task utilizations be capped. The required cap is quite liberal; hence, the algorithm should enable a wide range of soft real-time applications to be scheduled with no constraints on total utilization.
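As a point of reference for the discussion above, here is a minimal sketch of the generic global EDF rule (the unrestricted-migration policy the paper builds on, not its restricted-migration algorithm); the job representation and tick-driven interface are illustrative assumptions.

```python
import heapq

def global_edf_step(ready_jobs, num_processors):
    """One scheduling decision of global EDF: among all ready jobs, run
    the (at most) M jobs with the earliest absolute deadlines.
    ready_jobs: list of (deadline, job_id) tuples -- illustrative format.
    Any chosen job may execute on any processor, so tasks migrate freely
    (the unrestricted-migration case discussed in the abstract)."""
    return [job_id for _, job_id in heapq.nsmallest(num_processors, ready_jobs)]

# Example: 3 ready jobs, 2 processors -> the two earliest deadlines run.
print(global_edf_step([(12, "A"), (7, "B"), (9, "C")], 2))  # ['B', 'C']
```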

4.
The earliest-pseudo-deadline-first (EPDF) Pfair scheduling algorithm is less expensive than some other known Pfair algorithms, but is not optimal for scheduling recurrent real-time tasks on more than two processors. In prior work, sufficient per-task weight (i.e., utilization) restrictions were established for ensuring that tasks either do not miss their deadlines or have bounded tardiness when scheduled under EPDF. Implicit in these restrictions is the assumption that the total system utilization may equal the total available processing capacity (i.e., the total number of processors). This paper considers an orthogonal issue, namely, determining a sufficient restriction on the total utilization of a task set for it to be schedulable (i.e., a schedulable utilization bound) under EPDF, assuming that there are no per-task weight restrictions. We derive a bound such that any task set whose total utilization does not exceed it is correctly scheduled under EPDF on M processors, regardless of how large each task's weight is. At present, we do not know whether this value represents the worst case for EPDF, but we do provide a counterexample showing that it cannot be improved to exceed 86% of the total processing capacity. The schedulable utilization bound we derive is expressed in terms of the maximum weight of any task, and hence, if this value is known, may be used to schedule task sets with total utilization greater than the general bound.

5.
Radio networks model wireless data communication when the bandwidth is limited to one wave frequency. The key restriction of such networks is mutual interference of packets arriving simultaneously at a node. The many-to-many (m2m) communication primitive involves p participant nodes from among n nodes in the network, where the distance between any pair of participants is at most d. The task is to have all the participants get to know all the input messages. We consider three cases of the m2m communication problem. In the ad-hoc case, each participant knows only its name and the values of n, p and d. In the partially centralized case, each participant knows the topology of the network and the values of p and d, but does not know the names of the other participants. In the centralized case, each participant knows the topology of the network and the names of all the participants. For the centralized m2m problem, we give deterministic protocols, for both undirected and directed networks, whose running time is provably optimal. For the partially centralized m2m problem, we give a randomized protocol for undirected networks whose time bound holds with high probability (whp), and we prove a lower bound on the time required by any deterministic protocol. For the ad-hoc m2m problem, we develop a randomized protocol for undirected networks whose time bound holds whp, and we show two lower bounds. One lower bound applies to the expected time of any randomized protocol. The other states that for any deterministic protocol, there is a network on which the protocol requires a larger time bound (given in the paper) when np(n)=Ω(n) and d>1, and Ω(n) time when np(n)=o(n). The results of this paper appeared in a preliminary form in “On many-to-many communication in packet radio networks” in Proceedings of the 10th Conference on Principles of Distributed Systems (OPODIS), Bordeaux, France, 2006, Lecture Notes in Computer Science 4305, Springer, Heidelberg, pp. 258–272. The work of B.S. Chlebus was supported by NSF Grant 0310503.

6.
We obtain subquadratic algorithms for 3SUM on integers and rationals in several models: on a standard word RAM with w-bit words, in the circuit RAM with one nonstandard AC⁰ operation, in external memory, where we achieve O(n²/(MB)) even under the standard assumption of data indivisibility, and in the cache-oblivious model. In all cases, our speedup is almost quadratic in the “parallelism” the model can afford, which may be the best possible. Our algorithms are Las Vegas randomized; the time bounds hold in expectation and, in most cases, with high probability.
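For orientation, the classical quadratic two-pointer baseline that the subquadratic results above improve upon (the standard textbook algorithm, not the paper's word-RAM technique):

```python
def has_3sum(nums):
    """Classical quadratic 3SUM: decide whether some a + b + c == 0.
    Sort once, then for each fixed element scan the rest with two
    pointers.  This is the O(n^2) baseline; the paper's algorithms beat
    it by a factor roughly quadratic in the model's 'parallelism'."""
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1
            else:
                hi -= 1
    return False

print(has_3sum([-5, 1, 4, 2, -8]))  # True: -5 + 1 + 4 == 0
```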

7.
We present two sorting algorithms that are near optimal with respect to the number of inversions present in the input. One of the algorithms is a variation of insertion sort, and the other is a variation of merge sort. The number of comparisons performed by our algorithms, on an input sequence of length n that has I inversions, is within lower-order terms of the information-theoretic minimum, and both algorithms have efficient running-time implementations. All previously published algorithms require at least c times the minimum number of comparisons for some constant c > 1. M. L. Fredman was supported in part by NSF grant CCR-9732689.
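A minimal sketch of the idea behind inversion-adaptive insertion sort (the generic technique; the paper's variants use more refined search structures to reach the stated comparison bound): each element is inserted by scanning back only as far as its displacement, so the total work tracks the number of inversions I.

```python
def adaptive_insertion_sort(a):
    """Insertion sort whose comparison count adapts to presortedness:
    inserting a[i] costs about the distance a[i] must move left, so the
    total number of comparisons is O(n + I) for I inversions.  The
    paper's algorithms refine this idea to get comparison counts close
    to the information-theoretic minimum."""
    a = list(a)
    for i in range(1, len(a)):
        x = a[i]
        j = i - 1
        while j >= 0 and a[j] > x:   # one comparison per inversion removed
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x
    return a

print(adaptive_insertion_sort([2, 1, 3, 5, 4]))  # nearly sorted: few comparisons
```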

8.
We consider a semi-on-line version of the multiprocessor scheduling problem on three processors, where the total size of the tasks is known in advance. We prove a lower bound on the competitive ratio of any algorithm and propose a simple algorithm with competitive ratio equal to 1.5. The performance is then improved by a preprocessing strategy; the resulting algorithm is only 2% away from the lower bound. Z. Tuza is supported in part by the Hungarian Scientific Research Fund, OTKA grant T-049613.

9.
We consider the problem of finding a stable matching of maximum size when both ties and unacceptable partners are allowed in preference lists. This problem is NP-hard, and the previously best known approximation algorithm achieves the approximation ratio 2−c(log N)/N, where c is an arbitrary positive constant and N is the number of men in an input. In this paper, we improve the ratio to 2−c/√N, where c is an arbitrary constant satisfying a fixed upper bound given in the paper. A preliminary version of this paper was presented at the 16th Annual International Symposium on Algorithms and Computation, ISAAC 2005.

10.
We consider the following problem of scheduling with conflicts (swc): find a minimum-makespan schedule on identical machines where conflicting jobs cannot be scheduled concurrently. We study the problem when conflicts between jobs are modeled by general graphs. Our first main positive result is an exact algorithm for two machines and job sizes in {1,2}. For job sizes in {1,2,3}, we obtain an approximation ratio that improves on the best previously known for this case. Our main negative result is that for job sizes in {1,2,3,4}, the problem is APX-hard. Our second contribution is the initiation of the study of an online model for swc, in which we present the first results. Specifically, we prove a lower bound on the competitive ratio of any deterministic online algorithm for m machines and unit jobs, and an upper bound of 2 when the algorithm is not restricted computationally. For three machines we show that an efficient greedy algorithm achieves this bound (a greedy of this flavor is sketched below). For two machines we present a more complex algorithm that achieves an improved competitive ratio when the number of jobs is known in advance to the algorithm.
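A minimal sketch of a natural greedy online rule for unit jobs in this model (illustrative of the greedy approach mentioned above; the paper's exact algorithm and tie-breaking may differ): at each time step, run a maximal set of at most m pairwise non-conflicting unscheduled jobs.

```python
def greedy_swc_unit_jobs(jobs, conflicts, m):
    """Greedy schedule for unit jobs with a conflict graph.
    jobs: iterable of job ids; conflicts: set of frozenset({u, v}) pairs;
    m: number of identical machines.  Each round greedily packs a maximal
    set of <= m mutually non-conflicting unscheduled jobs.  A sketch of
    the greedy idea only, not the paper's exact algorithm."""
    remaining = list(jobs)
    schedule = []  # schedule[t] = jobs run in time step t
    while remaining:
        batch = []
        for j in remaining:
            if len(batch) == m:
                break
            if all(frozenset((j, k)) not in conflicts for k in batch):
                batch.append(j)
        remaining = [j for j in remaining if j not in batch]
        schedule.append(batch)
    return schedule

# A triangle of conflicts forces three rounds even with 2 machines.
print(greedy_swc_unit_jobs([1, 2, 3], {frozenset((1, 2)), frozenset((1, 3)),
                                       frozenset((2, 3))}, 2))
```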

11.
We use Schnyder woods of 3-connected planar graphs to produce convex straight-line drawings on a small grid. The grid size depends on a parameter of the Schnyder wood used for the drawing, whose range is bounded in the paper. The algorithm is a refinement of the face-counting algorithm; thus, in particular, the grid is no larger than the one produced by the face-counting algorithm. The resulting bound on the grid size simultaneously matches or improves all previously known bounds for convex drawings, in particular Schnyder's bound, the recent Zhang and He bound for triangulations, and the Chrobak and Kant bound for 3-connected planar graphs. The algorithm takes linear time. The drawing algorithm has been implemented and tested; the experiments estimate the expected grid size for the drawing of a random triangulation and of a random 3-connected plane graph.

12.
We study an online job scheduling problem arising in networks with aggregated links. The goal is to schedule n jobs, divided into k disjoint chains, on m identical machines, without preemption, so that the jobs within each chain complete in the order of release times and the maximum flow time is minimized. We present a deterministic online algorithm and show a matching lower bound on the competitive ratio, even for randomized algorithms. The performance bound we derive in the paper is, in fact, more subtle than a standard competitive ratio bound, and it shows that in overload conditions (when many jobs are released in a short amount of time), the algorithm's performance is close to the optimum. We also show how to compute an offline solution efficiently for k=1, and that minimizing the maximum flow time is NP-hard for k,m≥2. As by-products of our method, we obtain two offline polynomial-time algorithms for minimizing makespan: an optimal algorithm for k=1, and a 2-approximation algorithm for any k. W. Jawor and M. Chrobak were supported by NSF grants OISE-0340752 and CCR-0208856. The work of C. Dürr was conducted while he was affiliated with the Laboratoire de Recherche en Informatique, Université Paris-Sud, 91405 Orsay, supported by the CNRS/NSF grant 17171 and ANR Alpage.

13.
We consider the problem of approximately integrating a Lipschitz function f (with a known Lipschitz constant) over an interval. The goal is to achieve an additive error of at most ε using as few samples of f as possible. We use the adaptive framework: on all problem instances an adaptive algorithm should perform almost as well as the best possible algorithm tuned for the particular problem instance. We distinguish between the performances of the best possible deterministic and randomized algorithms on a given instance. We give a deterministic algorithm whose sample complexity we bound, and show that an asymptotically better deterministic algorithm is impossible. However, on some problem instances, any deterministic algorithm requires substantially more samples than the best randomized algorithm. By combining a deterministic adaptive algorithm and Monte Carlo sampling with variance reduction, we give a randomized algorithm, and we prove a matching lower bound on the expected number of samples required by any algorithm on some problem instance (f,ε), which shows that our algorithm is optimal.
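A minimal sketch of the deterministic adaptive idea described above (illustrative only, assuming the Lipschitz constant L is known; this is the generic subdivision scheme, not the paper's precise algorithm): on a subinterval of width w, the trapezoid estimate is within Lw²/4 of the true integral, so we recurse only where that certificate exceeds the locally allotted share of ε.

```python
def adaptive_lipschitz_integral(f, a, b, L, eps):
    """Approximate the integral of f over [a, b] to additive error <= eps,
    given a Lipschitz constant L for f.  On a width-w piece the trapezoid
    value differs from the truth by at most L*w*w/4, so a piece is split
    only while its certificate exceeds its proportional error budget.
    A sketch of the adaptive framework, not the paper's exact algorithm."""
    def go(lo, hi, flo, fhi, budget):
        w = hi - lo
        if L * w * w / 4 <= budget:          # certified accurate enough
            return (flo + fhi) * w / 2
        mid = (lo + hi) / 2
        fmid = f(mid)
        return go(lo, mid, flo, fmid, budget / 2) + \
               go(mid, hi, fmid, fhi, budget / 2)
    return go(a, b, f(a), f(b), eps)

# f(x) = |x - 0.3| is Lipschitz with L = 1; its integral over [0,1] is 0.29.
print(adaptive_lipschitz_integral(lambda x: abs(x - 0.3), 0.0, 1.0, 1.0, 1e-4))
```

Easy pieces are certified immediately while pieces near the kink at 0.3 are subdivided further, which is exactly the per-instance adaptivity the framework measures.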

14.
In statistical analysis of measurement results it is often necessary to compute the range of the population variance when we only know the intervals of possible values of the x_i. While the lower endpoint of this range can be computed efficiently, computing the upper endpoint is, in general, NP-hard. In our previous paper “Population Variance under Interval Uncertainty: A New Algorithm” (Reliable Computing 12 (4) (2006), pp. 273–280) we showed that in a practically important case we can use constraint techniques to compute the upper endpoint in time O(n·log(n)). In this paper we provide new algorithms that compute the lower endpoint (in all cases) and the upper endpoint (for the above case) in linear time O(n). Similar linear-time algorithms are described for computing the range of the entropy when we only know the intervals of possible values of the probabilities p_i. In general, a statistical characteristic ƒ can be more complex, so that even a single evaluation of ƒ can take much longer than linear time. For such ƒ, the question is how to compute its range in as few calls to ƒ as possible. We show that for convex symmetric functions ƒ, we can compute the range in n calls to ƒ.
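For reference, the objects discussed above in standard notation (definitions only, following the usual interval-uncertainty setup):

```latex
% Population variance of x_1, ..., x_n and its range under interval
% uncertainty x_i \in [\underline{x}_i, \overline{x}_i]:
\[
  V(x_1,\dots,x_n) \;=\; \frac{1}{n}\sum_{i=1}^{n}\bigl(x_i-\bar{x}\bigr)^2,
  \qquad \bar{x} \;=\; \frac{1}{n}\sum_{i=1}^{n} x_i,
\]
\[
  \underline{V} \;=\; \min_{x_i \in [\underline{x}_i,\overline{x}_i]} V,
  \qquad
  \overline{V} \;=\; \max_{x_i \in [\underline{x}_i,\overline{x}_i]} V .
\]
% \underline{V} is computable efficiently; \overline{V} is NP-hard in general.
```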

15.
The R-automaton is the weakest form of the nondeterministic version of the restarting automaton that was introduced by Jančar et al. to model the so-called analysis by reduction. Here it is shown that the class ℒ(R) of languages that are accepted by R-automata is incomparable under set inclusion to the class of Church–Rosser languages and to the class of growing context-sensitive languages. In fact, this already holds for the class of languages that are accepted by 2-monotone R-automata. In addition, we prove that the latter class contains NP-complete languages, showing that already the 2-monotone R-automaton has a surprisingly large expressive power. The results of this paper were announced at DLT 2004 in Auckland, New Zealand. This work was mainly carried out while T. Jurdziński was visiting the University of Kassel, supported by a grant from the Deutsche Forschungsgemeinschaft (DFG). F. Mráz and M. Plátek were partially supported by the Grant Agency of the Czech Republic under Grant No. 201/04/2102 and by the program ‘Information Society’ under project 1ET100300517. F. Mráz was also supported by the Grant Agency of Charles University in Prague under Grant No. 358/2006/A-INF/MFF.

16.
We consider the problems of enumerating all minimal strongly connected subgraphs and all minimal dicuts of a given strongly connected directed graph G=(V,E). We show that the first of these problems can be solved in incremental polynomial time, while the second problem is NP-hard: given a collection of minimal dicuts for G, it is NP-hard to tell whether it can be extended. The latter result implies, in particular, that for a given set of points, it is NP-hard to generate all maximal subsets of it contained in a closed half-space through the origin. We also discuss the enumeration of all minimal subsets of the point set whose convex hull contains the origin as an interior point, and show that this problem includes as a special case the well-known hypergraph transversal problem. This research was supported by the National Science Foundation (Grant IIS-0118635). The third and fourth authors are also grateful for the partial support of DIMACS, the National Science Foundation's Center for Discrete Mathematics and Theoretical Computer Science. Our friend and co-author, Leonid Khachiyan, tragically passed away on April 29, 2005.

17.
In statistical analysis of measurement results, it is often beneficial to compute the range V of the population variance when we only know the intervals of possible values of the x_i. In general, this problem is NP-hard; a polynomial-time algorithm is known for the case when the measurements are sufficiently accurate, i.e., when the input intervals satisfy a suitable accuracy condition for all i. In this paper, we show that we can efficiently compute V under a weaker (and more general) condition.

18.
It is proved that “FIFO” worksharing protocols provide asymptotically optimal solutions to two problems related to sharing large collections of independent tasks in a heterogeneous network of workstations (HNOW). In the first problem, one seeks to accomplish as much work as possible on the network during a prespecified fixed period of L time units. In the second, one seeks to complete W units of work by “renting” the network for as short a time as necessary. The worksharing protocols we study are crafted within an architectural model that characterizes the network via parameters that measure its workstations' computational and communicational powers. All valid protocols are self-scheduling, in the sense that they completely determine both an amount of work to allocate to each of the network's workstations and a schedule for all related interworkstation communications. The schedules provide either a value for W given L, or a value for L given W, and hence solve both of the motivating problems. A protocol observes a FIFO regimen if it has the workstations finish their assigned work, and return their results, in the same order in which they are supplied with their workloads. The proven optimality of FIFO protocols resides in the fact that they accomplish at least as much work as any other protocol during all sufficiently long worksharing episodes, and that they complete sufficiently large given collections of tasks at least as fast as any other protocol. Simulation experiments illustrate that the superiority of FIFO protocols is often observed during worksharing episodes of only a few minutes' duration. A portion of this research was presented at the 15th ACM Symp. on Parallelism in Algorithms and Architectures (2003).

19.
Variable transformations for numerical integration have been used for improving the accuracy of the trapezoidal rule. Specifically, one first transforms the integral via a variable transformation that maps [0,1] to itself, and then approximates the resulting transformed integral by the trapezoidal rule. In this work, we propose a new class of symmetric and nonsymmetric variable transformations, parameterized by two positive scalars r and s assigned by the user, which has simple explicit representatives. We show that, in case the integrand is smooth on [0,1], or is smooth except for algebraic (endpoint) singularities at x = 0 and/or x = 1, the trapezoidal rule on the transformed integral produces exceptionally high accuracies for special values of r and s. In particular, for a suitable representative of the class, the error in the approximation is (i) O(h^r) for arbitrary r and (ii) O(h^{2r}) if r is a positive odd integer at least 3, h being the integration step. We illustrate the use of these transformations and the accompanying theory with numerical examples.
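A minimal sketch of the general recipe described above, using the classical periodizing map ψ(t) = t − sin(2πt)/(2π) as a stand-in (an illustrative transformation with ψ'(0) = ψ'(1) = 0, not the paper's new r,s-parameterized class):

```python
import math

def transformed_trapezoid(f, n):
    """Approximate the integral of f over [0, 1] by substituting
    x = psi(t) with psi(t) = t - sin(2*pi*t)/(2*pi), then applying the
    n-panel trapezoidal rule to f(psi(t)) * psi'(t).  Since psi'(0) =
    psi'(1) = 0, the transformed integrand vanishes at the endpoints,
    which boosts the trapezoidal rule's accuracy.  Illustrative map
    only; the paper's class is tuned via user-chosen scalars r and s."""
    psi = lambda t: t - math.sin(2 * math.pi * t) / (2 * math.pi)
    dpsi = lambda t: 1 - math.cos(2 * math.pi * t)
    g = lambda t: f(psi(t)) * dpsi(t)
    h = 1.0 / n
    # endpoint terms are zero here because dpsi vanishes at 0 and 1
    return h * sum(g(i * h) for i in range(1, n))

# Integrand with an endpoint singularity: integral of sqrt(x) on [0,1] = 2/3.
print(transformed_trapezoid(math.sqrt, 64))
```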

20.
Disjoint NP-pairs are a well-studied complexity-theoretic concept with important applications in cryptography and propositional proof complexity. In this paper we introduce a natural generalization of the notion of disjoint NP-pairs to disjoint k-tuples of NP-sets for k≥2. We define subclasses of the class of all disjoint k-tuples of NP-sets. These subclasses are associated with a propositional proof system and possess complete tuples which are defined from the proof system. In our main result we show that complete disjoint NP-pairs exist if and only if complete disjoint k-tuples of NP-sets exist for all k≥2. Further, this is equivalent to the existence of a propositional proof system in which the disjointness of all k-tuples is shortly provable. We also show that a strengthening of these conditions characterizes the existence of optimal proof systems. An extended abstract of this paper appeared in the proceedings of the conference CSR 2006 (Lecture Notes in Computer Science 3967, 80–91, 2006). Supported by DFG grant KO 1053/5-1.
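For reference, the standard definition that the generalization above extends (the pair case is the textbook definition; the k-tuple case, with pairwise disjointness, is paraphrased from the abstract):

```latex
% A disjoint NP-pair and its k-tuple generalization.
\[
  (A,B) \text{ is a disjoint NP-pair} \iff
  A, B \in \mathrm{NP} \ \text{and}\ A \cap B = \emptyset,
\]
\[
  (A_1,\dots,A_k) \text{ is a disjoint } k\text{-tuple of NP-sets} \iff
  A_i \in \mathrm{NP} \ \text{and}\ A_i \cap A_j = \emptyset
  \ \text{for all } i \neq j, \ k \ge 2.
\]
```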
