Similar References
20 similar references found (search time: 281 ms)
1.
The cover polynomial and its geometric version, introduced by Chung & Graham and by D'Antona & Munarini, respectively, are bivariate graph polynomials for directed graphs. They count the (weighted) number of ways to cover a graph with disjoint directed cycles and paths, can be thought of as interpolations between the determinant and the permanent, and have been proposed as directed analogues of the Tutte polynomial. Jaeger, Vertigan, and Welsh showed that the Tutte polynomial is #P-hard to evaluate at all but a few special points and curves. It turns out that the same holds for the cover polynomials: we prove that, in almost the whole plane, evaluating the cover polynomial and its geometric version is #P-hard under polynomial-time Turing reductions, while only three points of the cover polynomial and two points of the geometric cover polynomial are easy. We also study the complexity of approximately evaluating the geometric cover polynomial. Under the reasonable complexity assumptions RP ≠ NP and RFP ≠ #P, we give a succinct characterization of a large class of points at which approximating the geometric cover polynomial within any polynomial factor is not possible.

2.
We study a scheduling problem with rejection on a set of two machines in a flow-shop scheduling system. We evaluate the quality of a solution by two criteria: the makespan and the total rejection cost. We show that the problem of minimizing the makespan plus total rejection cost is NP-hard, and for its solution we provide two different algorithms: a pseudo-polynomial time optimization algorithm and a fully polynomial time approximation scheme (FPTAS). We also study the problem of finding the entire set of Pareto-optimal points (this problem is NP-hard due to the NP-hardness of the same problem variation on a single machine [20]). We show that this problem can be solved in pseudo-polynomial time. Moreover, we show how to provide an FPTAS that, given that there exists a Pareto-optimal schedule with a total rejection cost of at most R and a makespan of at most K, finds a solution with a total rejection cost of at most (1+ε)R and a makespan of at most (1+ε)K. This is done by defining a set of auxiliary problems and providing an FPTAS for each of them.
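The pseudo-polynomial time algorithm is not spelled out in the abstract; as a hedged illustration, the following Python sketch solves the makespan-plus-rejection-cost objective with a dynamic program over machine completion times, assuming (as is standard for the two-machine flow shop) that accepted jobs can be taken in Johnson's-rule order. The job encoding and the state space keyed by completion times are illustrative choices, not the authors' construction.

```python
from math import inf

def johnson_order(jobs):
    """Sequence jobs by Johnson's rule: jobs with a <= b first (increasing a),
    then jobs with a > b (decreasing b)."""
    return sorted(jobs, key=lambda j: (0, j[0]) if j[0] <= j[1] else (1, -j[1]))

def min_makespan_plus_rejection(jobs):
    """jobs: list of (a, b, e) = (time on M1, time on M2, rejection cost).
    Pseudo-polynomial DP: state (c1, c2) = completion times of the accepted
    prefix on M1 and M2; value = minimum rejection cost paid so far."""
    states = {(0, 0): 0}
    for a, b, e in johnson_order(jobs):
        nxt = {}
        for (c1, c2), cost in states.items():
            # Option 1: reject the job and pay its rejection cost.
            nxt[(c1, c2)] = min(nxt.get((c1, c2), inf), cost + e)
            # Option 2: accept the job and append it in Johnson order.
            n1 = c1 + a
            n2 = max(c2, n1) + b
            nxt[(n1, n2)] = min(nxt.get((n1, n2), inf), cost)
        states = nxt
    # Objective: makespan of the accepted jobs plus the total rejection cost.
    return min(c2 + cost for (_, c2), cost in states.items())

# Tiny example: the third job is long on M1 but cheap to reject.
print(min_makespan_plus_rejection([(2, 3, 4), (1, 2, 5), (6, 1, 2)]))
```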

3.
4.
The existence of an optimal propositional proof system is a major open question in proof complexity; many people conjecture that such systems do not exist. Krajíček and Pudlák (J. Symbolic Logic 54(3):1063, 1989) showed that this question is equivalent to the existence of an algorithm that is optimal on all propositional tautologies. Monroe (Theor. Comput. Sci. 412(4–5):478, 2011) recently presented a conjecture implying that such an algorithm does not exist. We show that if one allows errors, then such optimal algorithms do exist. The concept is motivated by the notion of heuristic algorithms. Namely, we allow an algorithm, called a heuristic acceptor, to claim a small number of false "theorems" and to err with bounded probability on other inputs. The number of false "theorems" is measured according to a polynomial-time samplable distribution on non-tautologies. Our result remains valid for all recursively enumerable languages and can also be viewed as the existence of an optimal weakly automatizable heuristic proof system. The notion of a heuristic acceptor extends the notion of a classical acceptor; in particular, an optimal heuristic acceptor for any distribution simulates every classical acceptor for the same language. We also note that the existence of a co-NP language L with a polynomial-time samplable distribution on $\overline{L}$ that has no polynomial-time heuristic acceptors is equivalent to the existence of an infinitely-often one-way function.

5.
Landau & Immerman, Algorithmica, 2002, 32(3):423-436
This paper answers the following question: given an "erector set" linkage, a connected set of fixed-length links, what is the minimal ε needed to adjust the edge lengths so that the vertices of the linkage can be placed on integer lattice points? Each edge length is allowed to change by at most ε. Angles are not fixed, but collinearity must be preserved (although the introduction of new collinearities is allowed). We show that the question of determining whether a linkage can be embedded on the integer lattice is strongly NP-complete. Indeed, we show that even with ε = 0 (under which the problem becomes "Can this linkage be embedded?"), the problem remains strongly NP-complete. However, for some applications it is reasonable to assume that the lengths of the links and the number of "co-incident" cycles are bounded (two cycles are co-incident if they share an edge). We show that under these bounding assumptions there is a polynomial-time solution to the problem.

6.
We study FP||NP, the class of functions that can be computed in polynomial time with nonadaptive queries to an NP oracle. This is motivated by the question of whether it is possible to compute witnesses for NP sets within FP||NP. The known algorithms for this task all require sequential access to the oracle. On the other hand, there is no evidence known yet that this should not be possible with parallel queries. We define a class of optimization problems based on NP sets, where the optimum is taken over a polynomially bounded range (NPbOpt). We show that if such an optimization problem is based on one of the known NP-complete sets, then it is hard for FP||NP. Moreover, we characterize FP||NP as the class of functions that reduce to such optimization functions. We call this property strong hardness. The main question is whether these function classes are complete for FP||NP, that is, whether it is possible to compute an optimal value for a given optimization problem in FP||NP. We show that these optimization problems are complete for FP||NP if and only if one can compute membership proofs for NP sets in FP||NP. This indicates that the completeness question is a hard one.

7.
It is widely believed that showing a problem to be NP-complete is tantamount to proving its computational intractability. In this paper we show that a number of NP-complete problems remain NP-complete even when their domains are substantially restricted. First we show the completeness of Simple Max Cut (Max Cut with edge weights restricted to value 1), and, as a corollary, the completeness of the Optimal Linear Arrangement problem. We then show that even if the domains of the Node Cover and Directed Hamiltonian Path problems are restricted to planar graphs, the two problems remain NP-complete, and that these and other graph problems remain NP-complete even when their domains are restricted to graphs with low node degrees. For Graph 3-Colorability, Node Cover, and Undirected Hamiltonian Circuit, we determine essentially the lowest possible upper bounds on node degree for which the problems remain NP-complete.

8.
We consider a variant of the well-known minimum cost flow problem where the flow on each arc in the network is restricted to be either zero or above a given lower bound. The problem was recently shown to be weakly NP-complete even on series-parallel graphs. We start by showing that the problem is strongly NP-complete and cannot be approximated in polynomial time (unless P=NP) up to any polynomially computable function even when the graph is bipartite and the given instance is guaranteed to admit a feasible solution. Moreover, we present a pseudo-polynomial-time exact algorithm and a fully polynomial-time approximation scheme (FPTAS) for the problem on series-parallel graphs.

9.
It is shown that, unless DistNP is a subset of Average-P, there is no algorithm that is polynomial on average with respect to μ and that, given an optimal solution to a set cover problem, determines an optimal solution to an instance differing from the original problem in one position of the constraint matrix. A similar result holds for the knapsack problem.

10.
11.
This work addresses the problem of finding the maximum number of unweighted vertex-disjoint triangles in an undirected graph G. It is a challenging NP-hard combinatorial problem and is well known to be APX-hard. A branch-and-bound algorithm is presented that uses a lower bound based on neighborhood degree. A naive upper bound is proposed, as well as one based on a surrogate relaxation of the related integer linear program, which is analogous to a multidimensional knapsack problem. Further, a greedy search algorithm and a genetic algorithm are described to improve the lower bound. A computational comparison of the lower bounds, the branch-and-bound algorithm, and the CPLEX solver is provided using randomly generated benchmarks and well-known DIMACS implementation challenges. The empirical study shows that the branch-and-bound algorithm finds the optimal triangle packing solution for small randomly generated maximum triangle packing (MTP) instances (up to 100 vertices and 200 triangles) and some DIMACS graphs. For some larger instances and DIMACS challenge graphs, our lower bound outperforms the CPLEX solver in terms of both the triangle packing solution and the computation time.
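As a hedged illustration of a simple feasible lower bound for this problem (not the neighborhood-degree bound used in the paper), the following Python sketch enumerates all triangles and greedily packs vertex-disjoint ones:

```python
from itertools import combinations

def greedy_triangle_packing(n, edges):
    """Greedy lower bound for maximum vertex-disjoint triangle packing.
    n: number of vertices (0..n-1); edges: iterable of (u, v) pairs."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Enumerate all triangles u < v < w with all three edges present.
    triangles = [(u, v, w) for u, v, w in combinations(range(n), 3)
                 if v in adj[u] and w in adj[u] and w in adj[v]]
    used = [False] * n
    packing = []
    for u, v, w in triangles:
        if not (used[u] or used[v] or used[w]):
            packing.append((u, v, w))
            used[u] = used[v] = used[w] = True
    return packing

# K4 contains four triangles, but only one of them can be packed vertex-disjointly.
print(greedy_triangle_packing(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]))
```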

12.
Most Relevant Explanation (MRE) is the problem of finding a partial instantiation of a set of target variables that maximizes the generalized Bayes factor (GBF) as the explanation for given evidence in a Bayesian network. MRE has a huge solution space and is extremely difficult to solve in large Bayesian networks. In this paper, we first prove that MRE is at least NP-hard. We then define a subproblem of MRE, called MRE_k, that finds the most relevant k-ary explanation, and prove that the decision problem of MRE_k is $NP^{PP}$-complete. Since MRE needs to find the best solution of MRE_k over all k, and we can also show that MRE is in $NP^{PP}$, we conjecture that the decision problem of MRE is $NP^{PP}$-complete as well. Furthermore, we show that MRE remains in $NP^{PP}$ even if we restrict the number of target variables to be within a log factor of the number of all unobserved variables. These complexity results prompt us to develop a suite of approximation algorithms for solving MRE. One algorithm finds an MRE solution by integrating reversible-jump MCMC and simulated annealing, simulating a non-homogeneous Markov chain that eventually concentrates its mass on the mode of the distribution of the GBF scores of all solutions. The other algorithms are all instances of local search methods, including forward search, backward search, and tabu search. We tested these algorithms on a set of benchmark diagnostic Bayesian networks. Our empirical results show that these methods find optimal MRE solutions efficiently for most of the test cases in our experiments.
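The forward-search variant mentioned above can be written down generically. The Python sketch below assumes a user-supplied `gbf(assignment, evidence)` scorer, a hypothetical placeholder for the generalized Bayes factor computed from the Bayesian network; it is a generic greedy skeleton, not the paper's implementation.

```python
def forward_search_mre(target_domains, evidence, gbf):
    """Greedy forward search for Most Relevant Explanation.
    target_domains: dict variable -> list of possible values.
    evidence: fixed evidence passed through to the scorer.
    gbf: callable(assignment_dict, evidence) -> score (placeholder for the GBF)."""
    assignment, best = {}, float("-inf")
    improved = True
    while improved:
        improved = False
        best_move = None
        # Try extending the current partial instantiation by one variable-value pair.
        for var, domain in target_domains.items():
            if var in assignment:
                continue
            for val in domain:
                candidate = dict(assignment, **{var: val})
                score = gbf(candidate, evidence)
                if score > best:
                    best, best_move, improved = score, (var, val), True
        if best_move is not None:
            assignment[best_move[0]] = best_move[1]
    return assignment, best

# Toy usage with a stand-in scorer (real use would compute the GBF from the network).
domains = {"X": [0, 1], "Y": [0, 1]}
print(forward_search_mre(domains, evidence={"E": 1},
                         gbf=lambda a, e: sum(a.values()) - 0.1 * len(a)))
```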

13.
In this paper we investigate the k-path cover problem for graphs, which is to find the minimum number of vertex-disjoint k-paths that cover all the vertices of a graph. The k-path cover problem for general graphs is NP-complete. Despite notable applications of this problem to database design, networks, VLSI design, ring protocols, and code optimization, efficient algorithms are known for only a few special classes of graphs. In order to solve this problem for cacti, i.e., graphs where no edge lies on more than one cycle, we introduce the so-called Steiner version of the k-path cover problem and develop an efficient algorithm for the Steiner k-path cover problem for cacti, which finds an optimal k-path cover for a given cactus in polynomial time.

14.
We introduce a class of layered graphs which we call (k,2)-partite and which we argue are an interesting class because of several important applications. We show that testing for (k,2)-partiteness can be done efficiently both on sequential and parallel machines, by showing that membership is in NSPACE(log n) and in NC^2. We show that (k,2)-partite graphs have bounded pathwidth. We then show that a particular NP-complete problem, namely Maximum Independent Set, is solvable in linear time on bounded-pathwidth graphs if the path decomposition is included in the input. Finally, we show that the Maximum Independent Set problem is in NC^2 for (k,2)-partite graphs. We note that linear time solutions for certain NP-complete problems have been shown for a wider class of graphs, namely partial k-trees. Our linear time algorithm is somewhat simpler in structure. We conjecture that our techniques can be used on many NP-complete problems to yield efficient algorithms for (k,2)-partite graphs.
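As a hedged illustration of the bounded-pathwidth argument, the following Python sketch computes a maximum independent set given a path decomposition, using the standard bag-by-bag dynamic program (the textbook technique, not necessarily the authors' exact algorithm):

```python
from itertools import combinations

def max_independent_set_pathwidth(edges, bags):
    """Maximum independent set size given a valid path decomposition.
    edges: iterable of (u, v) pairs; bags: list of vertex sets (the path decomposition).
    dp[S] = best count of chosen vertices introduced so far whose choice
    restricted to the current bag equals S (with S independent inside the bag)."""
    edges = {frozenset(e) for e in edges}

    def independent_subsets(bag):
        bag = list(bag)
        for r in range(len(bag) + 1):
            for sub in combinations(bag, r):
                if all(frozenset((u, v)) not in edges for u, v in combinations(sub, 2)):
                    yield frozenset(sub)

    dp = {s: len(s) for s in independent_subsets(bags[0])}
    for prev, cur in zip(bags, bags[1:]):
        common = set(prev) & set(cur)
        ndp = {}
        for t in independent_subsets(cur):
            best = None
            for s, val in dp.items():
                # Choices must agree on the vertices shared by consecutive bags.
                if s & common == t & common:
                    cand = val + len(t - set(prev))  # count newly introduced vertices
                    best = cand if best is None else max(best, cand)
            if best is not None:
                ndp[t] = best
        dp = ndp
    return max(dp.values())

# Path 0-1-2-3 with decomposition {0,1},{1,2},{2,3}: the MIS has size 2.
print(max_independent_set_pathwidth([(0, 1), (1, 2), (2, 3)], [{0, 1}, {1, 2}, {2, 3}]))
```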

15.
In this paper we separate many-one reducibility from truth-table reducibility for distributional problems in DistNP under the hypothesis that P ≠ NP. As a first example we consider the 3-Satisfiability problem (3SAT) with two different distributions on 3CNF formulas. We show that 3SAT with a version of the standard distribution is truth-table reducible but not many-one reducible to 3SAT with a less redundant distribution unless P = NP. We extend this separation result and define a distributional complexity class C with the following properties: (1) C is a subclass of DistNP, and this inclusion is proper unless P = NP. (2) C contains DistP, but it is not contained in AveP unless DistNP ⊆ AveZPP. (3) C has a $\leq^{p}_{m}$-complete set. (4) C has a $\leq^{p}_{tt}$-complete set that is not $\leq^{p}_{m}$-complete unless P = NP. This shows that, under the assumption that P ≠ NP, the two completeness notions differ on some nontrivial subclass of DistNP.

16.
In this paper, we define the straight segment approximation problem (SSAP) for a given digital arc as that of locating a minimum subset of vertices on the arc such that they form a connected sequence of digital straight segments. Sharaiha (Ph.D. thesis, Imperial College, London, 1991) introduced the compact chord property and proved its equivalence to Rosenfeld's chord property (IEEE Trans. Comput. C-23, 1974, 1264-1269). The SSAP is now constrained by the compact chord property, which offers a more convenient geometric representation than the chord property. We develop an O(n^2) optimal algorithm for the solution of the SSAP using integer arithmetic. A relaxation of the problem is also presented such that the optimal number of vectors can be reduced according to a user definition. The original algorithm is adapted for the optimal solution of the relaxed problem. An extension to the relaxed problem is also addressed, which finds a minimum level of relaxation such that the optimal number of vectors cannot be reduced.
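At its core, an O(n^2) solution to the SSAP is a shortest-path style dynamic program over the arc's vertices. The Python sketch below shows that skeleton, with `is_digitally_straight(points, i, j)` as a hypothetical placeholder predicate standing in for the compact-chord-property test, which is where the paper's actual geometric contribution lies; the toy usage substitutes exact collinearity for that test.

```python
def min_segment_approximation(points, is_digitally_straight):
    """Minimum number of segments so that consecutive chosen vertices bound
    digitally straight pieces of the arc.  points: list of lattice points.
    is_digitally_straight(points, i, j): placeholder test on the sub-arc points[i..j]."""
    n = len(points)
    INF = float("inf")
    dp = [INF] * n          # dp[j] = min number of segments covering points[0..j]
    choice = [-1] * n
    dp[0] = 0
    for j in range(1, n):
        for i in range(j):
            if dp[i] + 1 < dp[j] and is_digitally_straight(points, i, j):
                dp[j], choice[j] = dp[i] + 1, i
    # Recover the breakpoint indices.
    breaks, j = [n - 1], n - 1
    while choice[j] != -1:
        j = choice[j]
        breaks.append(j)
    return dp[n - 1], breaks[::-1]

# Toy usage: exact collinearity as a stand-in for the digital straightness test.
collinear = lambda p, i, j: all(
    (p[j][0] - p[i][0]) * (q[1] - p[i][1]) == (p[j][1] - p[i][1]) * (q[0] - p[i][0])
    for q in p[i:j + 1])
print(min_segment_approximation([(0, 0), (1, 0), (2, 0), (3, 1)], collinear))
```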

17.
We propose a hybrid algorithm based on the ant colony optimization (ACO) meta-heuristic, in conjunction with four well-known elimination rules, to tackle the NP-hard single-machine scheduling problem of minimizing the total job tardiness. The hybrid algorithm has the same running time as ACO. We conducted extensive computational experiments to test the performance of the hybrid algorithm and ACO. The computational results show that the hybrid algorithm can produce optimal or near-optimal solutions quickly, and its performance compares favourably with that of ACO on standard instances of the problem.
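For context, the objective itself is cheap to evaluate. The Python sketch below computes total tardiness for a given job sequence and applies a simple adjacent-swap descent as a baseline; this is only an illustration of the objective, not the ACO hybrid or the elimination rules from the paper.

```python
def total_tardiness(sequence, proc, due):
    """Total tardiness of 'sequence' on a single machine.
    proc[j], due[j]: processing time and due date of job j."""
    t, tard = 0, 0
    for j in sequence:
        t += proc[j]
        tard += max(0, t - due[j])
    return tard

def pairwise_swap_descent(sequence, proc, due):
    """Repeatedly swap adjacent jobs while that strictly reduces total tardiness."""
    seq = list(sequence)
    best = total_tardiness(seq, proc, due)
    improved = True
    while improved:
        improved = False
        for i in range(len(seq) - 1):
            seq[i], seq[i + 1] = seq[i + 1], seq[i]
            cand = total_tardiness(seq, proc, due)
            if cand < best:
                best, improved = cand, True
            else:
                seq[i], seq[i + 1] = seq[i + 1], seq[i]  # undo the swap
    return seq, best

proc = {1: 3, 2: 2, 3: 4}
due = {1: 4, 2: 2, 3: 6}
print(pairwise_swap_descent([1, 2, 3], proc, due))
```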

18.
Optimal Time-Critical Scheduling via Resource Augmentation
Phillips, Stein, Torng & Wein, Algorithmica, 2002, 32(2):163-200
We consider two fundamental problems in dynamic scheduling: scheduling to meet deadlines in a preemptive multiprocessor setting, and scheduling to provide good response time in a number of scheduling environments. When viewed from the perspective of traditional worst-case analysis, no good on-line algorithms exist for these problems, and for some variants no good off-line algorithms exist unless P = NP. We study these problems using a relaxed notion of competitive analysis, introduced by Kalyanasundaram and Pruhs, in which the on-line algorithm is allowed more resources than the optimal off-line algorithm to which it is compared. Using this approach, we establish that several well-known on-line algorithms that have poor performance from an absolute worst-case perspective are optimal for the problems in question when allowed moderately more resources. For optimization of average flow time, these are the first results of any sort, for any NP-hard version of the problem, that indicate that it might be possible to design good approximation algorithms.
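A hedged illustration of the resource-augmentation viewpoint: simulate preemptive EDF in unit time steps and compare one machine against one extra machine. The instance format and the machine-augmentation flavor are illustrative assumptions; the paper's analysis (including speed augmentation) is more general and is not reproduced here.

```python
def edf_misses_deadline(jobs, machines, horizon):
    """Unit-step simulation of preemptive EDF on 'machines' identical machines.
    jobs: list of (release, deadline, work) with integer values.  Returns True
    if some job still has remaining work when its deadline arrives."""
    remaining = [w for _, _, w in jobs]
    for t in range(horizon):
        # Released, unfinished jobs, served earliest-deadline-first.
        ready = sorted((i for i, (r, _, _) in enumerate(jobs)
                        if r <= t and remaining[i] > 0),
                       key=lambda i: jobs[i][1])
        for i in ready[:machines]:
            remaining[i] -= 1
        for i, (_, d, _) in enumerate(jobs):
            if d == t + 1 and remaining[i] > 0:
                return True
    return False

# Two unit jobs released at time 0 with deadline 1: EDF on one machine misses a
# deadline, but with one extra machine (resource augmentation) it meets both.
jobs = [(0, 1, 1), (0, 1, 1)]
print(edf_misses_deadline(jobs, machines=1, horizon=2),
      edf_misses_deadline(jobs, machines=2, horizon=2))
```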

19.
This paper presents some novel theoretical results as well as practical algorithms and computational procedures for fuzzy relation equations (FRE). These results significantly refine and improve what has been reported previously. In a previous paper, the authors proved that the problem of solving a system of fuzzy relation equations is NP-hard. Therefore, it is practically impossible to determine all minimal solutions for a large system if P ≠ NP. In this paper, an existence theorem is proven: there exists a special branch-point solution that is greater than all minimal solutions and less than the maximum solution. Such a branch-point solution can be calculated based on the solution-base matrix. Furthermore, a procedure for determining all branch-point solutions is designed. We also provide efficient algorithms that are capable of determining and searching for certain types of minimal solutions. We have thus obtained: (1) a fast algorithm to determine whether a solution is a minimal solution, (2) an algorithm to search for the minimal solutions that attain a minimum value at a given component of the solution vector, and (3) a procedure for determining whether a system of fuzzy relation equations has a unique minimal solution. Other properties are also investigated.
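The maximum solution referred to in the existence theorem has a simple closed form for a max-min system A ∘ x = b: x_j = min_i (1 if A[i][j] <= b[i] else b[i]), and the system is consistent exactly when this candidate satisfies the equations. The Python sketch below computes and verifies it; this is the classical construction, shown only to illustrate the solution structure, not the paper's branch-point procedure.

```python
def max_solution(A, b):
    """Candidate maximum solution of the max-min fuzzy relation system A o x = b,
    with A an m x n matrix over [0, 1]: x_j = min_i (1 if A[i][j] <= b[i] else b[i]).
    The system is consistent iff this candidate actually satisfies A o x = b."""
    m, n = len(A), len(A[0])
    x = [min((1.0 if A[i][j] <= b[i] else b[i]) for i in range(m)) for j in range(n)]
    consistent = all(abs(max(min(A[i][j], x[j]) for j in range(n)) - b[i]) < 1e-9
                     for i in range(m))
    return x, consistent

A = [[0.8, 0.3],
     [0.4, 0.9]]
b = [0.5, 0.4]
print(max_solution(A, b))   # ([0.5, 0.4], True)
```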

20.
We consider transactional memory contention management in the context of balanced workloads, where, if a transaction is writing, the number of write operations it performs is a constant fraction of its total reads and writes. We explore the theoretical performance boundaries of contention management in balanced workloads from the worst-case perspective by presenting and analyzing two new polynomial-time contention management algorithms. We analyze the performance of a contention management algorithm by comparison with an optimal offline contention management algorithm to obtain a competitive ratio. The first algorithm, Clairvoyant, is $O(\sqrt{s})$-competitive, where s is the number of shared resources. This algorithm depends on explicitly knowing the conflict graph at each time step of execution. The second algorithm, Non-Clairvoyant, is $O(\sqrt{s} \cdot \log n)$-competitive with high probability, which is only an O(log n) factor worse, but does not require knowledge of the conflict graph, where n is the number of transactions. Both of these algorithms are greedy. We also prove that the performance of Clairvoyant is close to optimal, since there is no polynomial-time contention management algorithm for the balanced transaction scheduling problem that is better than $O((\sqrt{s})^{1-\varepsilon})$-competitive for any constant ε>0, unless NP ⊆ ZPP. To our knowledge, these results are significant improvements over the best previously known O(s) competitive ratio bound.
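As a hedged illustration of the conflict-graph-aware greedy idea (not the Clairvoyant or Non-Clairvoyant algorithms themselves), the Python sketch below commits, in each round, a maximal independent set of pending transactions in the conflict graph induced by shared resources and retries the rest.

```python
def greedy_schedule(transactions):
    """transactions: dict txn -> set of shared resources it accesses.
    Two pending transactions conflict iff their resource sets intersect.
    Each round commits a maximal independent set of the conflict graph."""
    pending = set(transactions)
    rounds = []
    while pending:
        committed, locked = [], set()
        # Naive maximal independent set: admit transactions whose resources are
        # still free, trying transactions that touch fewer resources first.
        for txn in sorted(pending, key=lambda t: (len(transactions[t]), t)):
            if transactions[txn].isdisjoint(locked):
                committed.append(txn)
                locked |= transactions[txn]
        rounds.append(committed)
        pending -= set(committed)
    return rounds

txns = {"T1": {"a", "b"}, "T2": {"b"}, "T3": {"c"}, "T4": {"a", "c"}}
print(greedy_schedule(txns))   # [['T2', 'T3'], ['T1'], ['T4']]
```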
