Similar Literature
20 similar documents found (search time: 15 ms)
1.
We consider the problem of finding all minimal transversals of a hypergraph $\mathcal{H} \subseteq 2^V$, given by an explicit list of its hyperedges. We give a new decomposition technique for solving the problem with the following advantages: (i) Global parallelism: for certain classes of hypergraphs, e.g., hypergraphs of bounded edge size, and any given integer k, the algorithm outputs k minimal transversals of $\mathcal{H}$ in time bounded by polylog(|V|,|H|,k) using poly(|V|,|H|,k) processors. Except for the case of graphs, none of the previously known algorithms for this problem exhibits this feature. (ii) Using this technique, we also obtain new results on the complexity of generating minimal transversals for new classes of hypergraphs, namely hypergraphs of bounded dual-conformality, and hypergraphs in which every edge intersects every minimal transversal in a bounded number of vertices.
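A minimal transversal is simply an inclusion-minimal hitting set. As a point of reference only, here is a brute-force Python sketch (not the paper's decomposition technique) that enumerates all minimal transversals of a small, explicitly listed hypergraph. It uses the fact that a transversal T is minimal iff every v in T has a "private" hyperedge e with e ∩ T = {v}.

```python
from itertools import combinations

def minimal_transversals(vertices, edges):
    """Enumerate all minimal transversals (minimal hitting sets) of a
    hypergraph by brute force; exponential, for tiny instances only."""
    vertices = list(vertices)
    edges = [set(e) for e in edges]
    result = []
    for r in range(len(vertices) + 1):
        for cand in combinations(vertices, r):
            t = set(cand)
            # t must hit every hyperedge ...
            if not all(t & e for e in edges):
                continue
            # ... and every vertex of t needs a private edge, i.e. an
            # edge that t hits only through that vertex.
            if all(any(e & t == {v} for e in edges) for v in t):
                result.append(t)
    return result

# Tiny example: V = {1,2,3,4}, edges {1,2}, {2,3}, {3,4}.
print(minimal_transversals([1, 2, 3, 4], [{1, 2}, {2, 3}, {3, 4}]))
# -> [{1, 3}, {2, 3}, {2, 4}] (in some order)
```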

2.
Choosing the best location for starting a business or expanding an existing enterprize is an important issue. A number of location selection problems have been discussed in the literature. They often apply the Reverse Nearest Neighbor as the criterion for finding suitable locations. In this paper, we apply the Average Distance as the criterion and propose the so-called k-most suitable locations (k-MSL) selection problem. Given a positive integer k and three datasets: a set of customers, a set of existing facilities, and a set of potential locations. The k-MSL selection problem outputs k locations from the potential location set, such that the average distance between a customer and his nearest facility is minimized. In this paper, we formally define the k-MSL selection problem and show that it is NP-hard. We first propose a greedy algorithm which can quickly find an approximate result for users. Two exact algorithms are then proposed to find the optimal result. Several pruning rules are applied to increase computational efficiency. We evaluate the algorithms’ performance using both synthetic and real datasets. The results show that our algorithms are able to deal with the k-MSL selection problem efficiently.  相似文献   
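The greedy strategy is easy to picture. The sketch below is a hypothetical rendering of one plausible greedy variant (the paper's pruning rules and exact algorithms are not reproduced): repeatedly open the candidate location that most reduces the average customer-to-nearest-facility distance.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def avg_nearest_distance(customers, facilities):
    return sum(min(dist(c, f) for f in facilities) for c in customers) / len(customers)

def greedy_k_msl(customers, facilities, candidates, k):
    """Greedily pick k candidate locations that minimize the average
    distance from each customer to its nearest open facility."""
    chosen, open_facilities = [], list(facilities)
    remaining = list(candidates)
    for _ in range(k):
        best = min(remaining,
                   key=lambda loc: avg_nearest_distance(customers, open_facilities + [loc]))
        chosen.append(best)
        open_facilities.append(best)
        remaining.remove(best)
    return chosen

customers = [(0, 0), (4, 0), (10, 0)]
facilities = [(4, 1)]
candidates = [(0, 1), (10, 1), (5, 5)]
print(greedy_k_msl(customers, facilities, candidates, 2))  # [(10, 1), (0, 1)]
```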

3.
Maximal clique enumeration is a fundamental problem in graph theory and has been extensively studied. However, maximal clique enumeration is time-consuming in large graphs and always returns an enormous number of cliques with large overlaps. Motivated by this, in this paper we study the diversified top-k clique search problem, which is to find top-k cliques that cover the largest number of nodes in the graph. Diversified top-k clique search can be widely used in many applications, including community search, motif discovery, and anomaly detection in large graphs. A naive solution is to keep all maximal cliques in memory and then find k of them that cover the most nodes using the approximate greedy max k-cover algorithm. However, such a solution is impractical when the graph is large. In this paper, instead of keeping all maximal cliques in memory, we devise an algorithm to maintain k candidates during maximal clique enumeration. Our algorithm has a limited memory footprint and achieves a guaranteed approximation ratio. We also introduce a novel light-weight \(\mathsf {PNP}\)-\(\mathsf {Index}\), based on which we design an optimal maximal clique maintenance algorithm. We further explore three optimization strategies to avoid enumerating all maximal cliques and thus largely reduce the computational cost. In addition, for massive inputs, we develop an I/O-efficient algorithm to tackle the problem when the graph cannot fit in main memory. We conduct extensive performance studies on real and synthetic graphs, one of which contains 1.02 billion edges. The results demonstrate the high efficiency and effectiveness of our approach.
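For intuition, the naive baseline the abstract contrasts against is plain greedy max k-cover over a precomputed clique list. A minimal sketch, assuming the cliques are already materialized (which is exactly what the paper avoids):

```python
def greedy_top_k_cliques(cliques, k):
    """Greedy max k-cover: pick k cliques (vertex sets) that together
    cover the largest number of nodes; classic (1 - 1/e)-approximation."""
    covered, picked = set(), []
    pool = [set(c) for c in cliques]
    for _ in range(min(k, len(pool))):
        best = max(pool, key=lambda c: len(c - covered))
        if not best - covered:          # no clique adds new nodes
            break
        picked.append(best)
        covered |= best
        pool.remove(best)
    return picked, covered

cliques = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {2, 3, 4}]
picked, covered = greedy_top_k_cliques(cliques, 2)
print(picked, covered)   # [{1, 2, 3}, {4, 5, 6}], covers {1, ..., 6}
```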

4.
In this paper, we propose an implicit gradient descent algorithm for the classic k-means problem. The implicit gradient step, or backward Euler, is solved via stochastic fixed-point iteration, in which we randomly sample a mini-batch gradient in every iteration; the average of the fixed-point trajectory is carried over to the next gradient step. We draw connections between the proposed stochastic backward Euler and the recent entropy stochastic gradient descent for improving the training of deep neural networks. Numerical experiments on various synthetic and real datasets show that the proposed algorithm provides better clustering results than standard k-means algorithms, in the sense that it attains lower values of the objective function and is much more robust to initialization.
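A minimal sketch of the idea as described in the abstract, with hyperparameters (step, inner, batch_size, outer) chosen arbitrarily for illustration: each outer step applies the backward-Euler fixed-point map with mini-batch k-means gradients and carries the trajectory average forward.

```python
import numpy as np

def kmeans_grad(centroids, batch):
    """Mini-batch gradient of the k-means objective w.r.t. the centroids:
    each point pulls its nearest centroid toward itself."""
    g = np.zeros_like(centroids)
    d2 = ((batch[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)
    for i, x in zip(nearest, batch):
        g[i] += centroids[i] - x
    return g / len(batch)

def stochastic_backward_euler(data, centroids, step=0.5, inner=20,
                              batch_size=64, outer=100, rng=None):
    """Sketch of an implicit (backward Euler) gradient step for k-means,
    solved by stochastic fixed-point iteration; the trajectory average
    is carried to the next step, as the abstract describes."""
    rng = rng or np.random.default_rng(0)
    c = centroids.copy()
    for _ in range(outer):
        y, avg = c.copy(), np.zeros_like(c)
        for _ in range(inner):
            batch = data[rng.choice(len(data), batch_size)]
            y = c - step * kmeans_grad(y, batch)   # fixed-point map for backward Euler
            avg += y
        c = avg / inner                            # average of the fixed-point trajectory
    return c

data = np.vstack([np.random.default_rng(1).normal(m, 0.3, (200, 2)) for m in (0, 3, 6)])
print(stochastic_backward_euler(data, data[:3].copy()))
```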

5.
In a previous paper in this journal, the authors described an implicit enumeration algorithm for the all-integer programming problem. In this paper, a specialization of that algorithm for 0–1 integer programming problems is developed. The computational efficiency of this specialization is investigated by solving a set of test problems, using a computer implementation of the algorithm written for this purpose.
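The abstract does not spell out the enumeration scheme, so the following is only a generic illustration of implicit enumeration (depth-first branching over binary variables with an optimistic objective bound and a constraint-satisfiability prune), not the authors' specialized algorithm:

```python
def implicit_enumeration(c, A, b):
    """Branch-and-bound (implicit enumeration) sketch for
    max c.x  s.t.  A x <= b,  x in {0,1}^n."""
    n = len(c)
    best = {"value": float("-inf"), "x": None}

    def bound(value, j):
        # optimistic completion: take every remaining positive profit
        return value + sum(cj for cj in c[j:] if cj > 0)

    def feasible_possible(x, j):
        # each constraint must remain satisfiable by the free variables
        for row, bi in zip(A, b):
            fixed = sum(a * xi for a, xi in zip(row[:j], x))
            slack_min = sum(a for a in row[j:] if a < 0)  # best case for free vars
            if fixed + slack_min > bi:
                return False
        return True

    def dfs(x, j, value):
        if bound(value, j) <= best["value"] or not feasible_possible(x, j):
            return                                  # pruned
        if j == n:
            best["value"], best["x"] = value, x[:]
            return
        for xj in (1, 0):
            dfs(x + [xj], j + 1, value + c[j] * xj)

    dfs([], 0, 0)
    return best["value"], best["x"]

# max 5x1 + 4x2 + 3x3  s.t.  2x1 + 3x2 + x3 <= 4,  x binary
print(implicit_enumeration([5, 4, 3], [[2, 3, 1]], [4]))  # (8, [1, 0, 1])
```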

6.
Given an n-node edge-weighted graph and a subset of k terminal nodes, the NP-hard (weighted) Steiner tree problem is to compute a minimum-weight tree which spans the terminals. All the known algorithms for this problem which improve on trivial O(1.62^n)-time enumeration are based on dynamic programming, and require exponential space. Motivated by the fact that exponential-space algorithms are typically impractical, in this paper we address the problem of designing faster polynomial-space algorithms. Our first contribution is a simple O((27/4)^k n^{O(\log k)})-time polynomial-space algorithm for the problem. This algorithm is based on a variant of the classical tree-separator theorem: every Steiner tree has a node whose removal partitions the tree into two forests, each containing at most 2k/3 terminals. Exploiting separators of logarithmic size which evenly partition the terminals, we are able to reduce the running time to $O(4^{k}n^{O(\log^{2} k)})$. This improves on trivial enumeration for roughly k < n/3, which covers most cases of practical interest. Combining the latter algorithm (for small k) with trivial enumeration (for large k), we obtain an O(1.59^n)-time polynomial-space algorithm for the weighted Steiner tree problem. As a second contribution of this paper, we present an O(1.55^n)-time polynomial-space algorithm for the cardinality version of the problem, where all edge weights are one. This result is based on an improved branching strategy. The refined branching is based on a charging mechanism which shows that, for large values of k, convenient local configurations of terminals and non-terminals exist. The analysis of the algorithm relies on the Measure & Conquer approach: the non-standard measure used here is a linear combination of the number of nodes and the number of non-terminals. Using a recent result of Nederlof (International Colloquium on Automata, Languages and Programming (ICALP), pp. 713–725, 2009), the running time can be reduced to O(1.36^n). The previous best algorithm for the cardinality case runs in O(1.42^n) time and exponential space.
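The separator idea is concrete enough to demonstrate. The sketch below illustrates the balanced-separator principle (not the paper's full algorithm): it finds a "terminal centroid" of a tree, a node minimizing the largest number of terminals left in any component after its removal.

```python
from collections import defaultdict

def terminal_centroid(edges, terminals):
    """Find a tree node minimizing the largest number of terminals in any
    component left after its removal; a standard argument shows this
    maximum is at most ceil(k/2) for k terminals."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def component_terminals(root, banned):
        # count terminals reachable from root without passing through banned
        seen, stack, count = {banned, root}, [root], int(root in terminals)
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
                    count += int(w in terminals)
        return count

    best, best_load = None, float("inf")
    for v in adj:
        load = max(component_terminals(u, v) for u in adj[v])
        if load < best_load:
            best, best_load = v, load
    return best, best_load

# Path 1-2-3-4-5 with terminals {1, 3, 5}: node 3 leaves one terminal per side.
print(terminal_centroid([(1, 2), (2, 3), (3, 4), (4, 5)], {1, 3, 5}))  # (3, 1)
```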

7.
In the exact matching problem we are given a graph G, some of whose edges are colored red, and a positive integer k. The goal is to determine whether G has a perfect matching exactly k edges of which are red. More generally, if the matching number of G is m = m(G), the goal is to find a matching with m edges exactly k of which are red, or determine that no such matching exists. This problem is one of the few remaining problems that have efficient randomized algorithms (in fact, this problem is in RNC) but for which no polynomial-time deterministic algorithm is known. Our first result shows that, in a sense, this problem is as close to being in P as one can get. We give a polynomial-time deterministic algorithm that either correctly decides that no maximum matching has exactly k red edges, or exhibits a matching with m(G)−1 edges having exactly k red edges. Hence, the additive error is one. We also present an efficient algorithm for the exact matching problem in families of graphs for which this problem is known to be tractable. We show how to count the number of exact perfect matchings in K_{3,3}-minor-free graphs (these include all planar graphs, as well as many others) in O(n^{3.19}) worst-case time. Our algorithm can also count the number of perfect matchings in K_{3,3}-minor-free graphs in O(n^{2.19}) time.

8.
k-tuple domination in graphs
In a graph G, a vertex is said to dominate itself and all of its neighbors. For a fixed positive integer k, the k-tuple domination problem is to find a minimum-sized vertex subset such that every vertex in the graph is dominated by at least k vertices in this set. The current paper studies k-tuple domination in graphs from an algorithmic point of view. In particular, we give a linear-time algorithm for the k-tuple domination problem in strongly chordal graphs, a subclass of chordal graphs that includes trees, block graphs, interval graphs and directed path graphs. We also prove that the k-tuple domination problem is NP-complete for split graphs (a subclass of chordal graphs) and for bipartite graphs.
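The definition translates directly into code. Below is a small Python sketch, for toy instances only, that checks the k-tuple domination property and finds a minimum k-tuple dominating set by brute force (the paper's linear-time algorithm for strongly chordal graphs is not attempted here):

```python
from itertools import combinations

def is_k_tuple_dominating(adj, subset, k):
    """Check whether `subset` is a k-tuple dominating set: every vertex
    must have at least k dominators (itself or neighbors) in `subset`."""
    return all(
        (v in subset) + sum(1 for u in adj[v] if u in subset) >= k
        for v in adj
    )

def min_k_tuple_domination(adj, k):
    """Exponential brute force over vertex subsets, smallest first."""
    vertices = list(adj)
    for r in range(len(vertices) + 1):
        for cand in combinations(vertices, r):
            if is_k_tuple_dominating(adj, set(cand), k):
                return set(cand)
    return None

# 4-cycle 0-1-2-3: each vertex dominates itself and its two neighbors.
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(min_k_tuple_domination(c4, 2))  # {0, 1, 2}
```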

9.
This paper studies the problem of constructing the workforce schedules of an aircraft maintenance company. The problem involves both a staffing and a scheduling decision. We propose an enumerative algorithm with bounding in which each node of the enumeration tree represents a mixed integer linear problem (MILP). We reformulate the MILP such that it becomes tractable for commercial MILP solvers. Extensive computational tests on 40 instances that are derived from a real-life setting indicate that the algorithm is capable of finding close-to-optimal solutions.

10.
We present an algorithm to compute a complete set of efficient solutions for the biobjective integer minimum cost flow problem. We use the two-phase method, with a parametric network simplex algorithm in phase 1 to compute all non-dominated extreme points. In phase 2, the remaining non-dominated points (non-extreme supported and non-supported) are computed using a k-best flow algorithm on single-objective weighted-sum problems.
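To make the weighted-sum idea concrete, here is a hypothetical phase-1-style sketch using networkx (the attribute names cost1/cost2 and the weight vectors are assumptions for illustration): it scalarizes the two edge costs with a few integer weight vectors and solves each resulting single-objective min-cost flow. The parametric simplex and the phase-2 k-best enumeration are not shown.

```python
import networkx as nx

def supported_flows(G, cost_attrs=("cost1", "cost2"), weights=((3, 1), (1, 1), (1, 3))):
    """Weighted-sum scalarization sketch: each single-objective min-cost
    flow yields a supported efficient flow's objective vector."""
    points = []
    for w1, w2 in weights:
        for u, v, d in G.edges(data=True):
            d["weight"] = w1 * d[cost_attrs[0]] + w2 * d[cost_attrs[1]]
        flow = nx.min_cost_flow(G)
        obj = tuple(sum(flow[u][v] * G[u][v][a] for u, v in G.edges()) for a in cost_attrs)
        points.append(obj)
    return sorted(set(points))

G = nx.DiGraph()
G.add_node("s", demand=-2)   # negative demand = supply
G.add_node("t", demand=2)
G.add_edge("s", "a", capacity=2, cost1=1, cost2=4)
G.add_edge("s", "b", capacity=2, cost1=4, cost2=1)
G.add_edge("a", "t", capacity=2, cost1=1, cost2=1)
G.add_edge("b", "t", capacity=2, cost1=1, cost2=1)
print(supported_flows(G))  # nondominated points, e.g. (4, 10) and (10, 4)
```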

11.
Given an undirected graph G=(V,E), the Graph Coloring Problem (GCP) consists in assigning a color to each vertex of G in such a way that any two adjacent vertices receive different colors, and the number of different colors used is minimized. State-of-the-art algorithms generally deal with the explicit constraints in GCP (any two adjacent vertices must receive different colors) but do not specifically handle the implicit constraints between non-adjacent vertices that the explicit constraints imply. In this paper, we propose an exact algorithm with learning for GCP which exploits the implicit constraints using propositional logic. Our algorithm is compared with several of the best exact algorithms in the literature. The experimental results show that our algorithm outperforms the others on many instances. In particular, our algorithm closes the open DIMACS instance 4-Fullins_5.
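As a baseline for what "exact" means here, a plain backtracking colorer without any constraint learning looks as follows (a toy sketch only; the paper's logic-based learning is not reproduced):

```python
def chromatic_number(adj):
    """Exact graph coloring by plain backtracking: try c = 1, 2, ... colors
    until some assignment gives adjacent vertices different colors."""
    vertices = sorted(adj, key=lambda v: -len(adj[v]))  # high degree first

    def colorable(c):
        assign = {}

        def extend(i):
            if i == len(vertices):
                return True
            v = vertices[i]
            used = {assign[u] for u in adj[v] if u in assign}
            for col in range(c):
                if col not in used:
                    assign[v] = col
                    if extend(i + 1):
                        return True
                    del assign[v]
            return False

        return extend(0)

    c = 1
    while not colorable(c):
        c += 1
    return c

# The 5-cycle is not bipartite, so it needs 3 colors.
c5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(chromatic_number(c5))  # 3
```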

12.
A common way of computing all efficient (Pareto optimal) solutions for a biobjective combinatorial optimisation problem is to compute first the extreme efficient solutions and then the remaining, non-extreme solutions. The second phase, the computation of non-extreme solutions, can be based on a “k-best” algorithm for the single-objective version of the problem or on the branch-and-bound method. A k-best algorithm computes the k best solutions in order of their objective values. We compare the performance of these two approaches applied to the biobjective minimum spanning tree problem. Our extensive computational experiments indicate the overwhelming superiority of the k-best approach. We propose heuristic enhancements to this approach which further improve its performance.
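The first phase for the spanning tree case can be sketched in a few lines with networkx: sweep integer weight vectors over the two cost attributes and collect the objective vectors of the resulting MSTs. The attribute names c1/c2 and the weight vectors are assumptions for illustration; the k-best second phase is omitted.

```python
import networkx as nx

def extreme_efficient_trees(G, weights=((4, 1), (1, 1), (1, 4))):
    """Weighted-sum scalarizations of a biobjective spanning tree instance;
    each single-objective MST is a supported efficient solution."""
    points = set()
    for w1, w2 in weights:
        for u, v, d in G.edges(data=True):
            d["weight"] = w1 * d["c1"] + w2 * d["c2"]
        T = nx.minimum_spanning_tree(G, weight="weight")
        points.add((sum(d["c1"] for _, _, d in T.edges(data=True)),
                    sum(d["c2"] for _, _, d in T.edges(data=True))))
    return sorted(points)

G = nx.Graph()
G.add_edge(1, 2, c1=1, c2=5)
G.add_edge(2, 3, c1=1, c2=5)
G.add_edge(1, 3, c1=5, c2=1)
G.add_edge(3, 4, c1=2, c2=2)
print(extreme_efficient_trees(G))  # e.g. [(4, 12), (8, 8)]
```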

13.
14.
Cohen & Kaplan, Algorithmica, 2002, 32(3): 459–466
We give an integer programming formulation of the paging problem with varying sizes and fetching costs. We use this formulation to provide an alternative proof that a variant of the algorithm greedy-dual-size previously considered in [4] and [5] is (k+1)/(k−h+1)-competitive against the optimal strategy with cache size h ≤ k. Our proof provides further insights into greedy-dual-size. We also indicate how the same integer programming formulation has recently been used [1], [2] to obtain approximation algorithms for the NP-complete “offline” problem.
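For readers unfamiliar with greedy-dual-size, here is a compact sketch of the classic algorithm itself (the paper's contribution is the IP-based analysis, not the algorithm): each cached page keeps a credit L + cost/size, and on a miss the lowest-credit pages are evicted, raising the baseline L.

```python
def greedy_dual_size(requests, cache_size):
    """Sketch of greedy-dual-size for caching with varying sizes and costs.
    requests: iterable of (page, size, fetch_cost). Returns total fetch cost."""
    H, sizes, L, used, total = {}, {}, 0.0, 0, 0.0
    for page, size, cost in requests:
        if page in H:                        # hit: refresh credit
            H[page] = L + cost / size
            continue
        total += cost                        # miss: fetch the page
        while used + size > cache_size and H:
            victim = min(H, key=H.get)       # evict cheapest-credit page
            L = H.pop(victim)                # raise the baseline
            used -= sizes.pop(victim)
        if size <= cache_size:               # pages larger than the cache bypass it
            H[page] = L + cost / size
            sizes[page] = size
            used += size
    return total

reqs = [("a", 1, 1), ("b", 1, 1), ("c", 1, 1), ("a", 1, 1), ("d", 2, 4), ("a", 1, 1)]
print(greedy_dual_size(reqs, 3))  # total fetch cost for a 3-unit cache
```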

15.
We present a multidisciplinary solution to the problems of anonymous microaggregation and clustering, illustrated with two applications, namely privacy protection in databases, and private retrieval of location-based information. Our solution is perturbative, is based on the same privacy criterion used in microdata k-anonymization, and provides anonymity through a substantial modification of the Lloyd algorithm, a celebrated quantization design algorithm, endowed with numerical optimization techniques. Our algorithm is particularly suited to the important problem of k-anonymous microaggregation of databases, with a small integer k representing the number of individual respondents indistinguishable from each other in the published database. Our algorithm also exhibits excellent performance in the problem of clustering or macroaggregation, where k may take on arbitrarily large values. We illustrate its applicability in this second, somewhat less common case, by means of an example of location-based services. Specifically, location-aware devices entrust a third party with accurate location information. This party then uses our algorithm to create distortion-optimized, size-constrained clusters, where k nearby devices share a common centroid location, which may be regarded as a distorted version of the original one. The centroid location is sent back to the devices, which use it when contacting untrusted location-based information providers, in lieu of the exact home location, to enforce k-anonymity. We compare the performance of our novel algorithm to the state-of-the-art microaggregation algorithm MDAV, on both synthetic and standardized real data, which encompass the cases of small and large values of k. The most promising aspect of our proposed algorithm is its capability to maintain the same k-anonymity constraint, while outperforming MDAV by a significant reduction in data distortion, in all the cases considered.
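The MDAV baseline mentioned at the end is itself simple to sketch. Below is a simplified MDAV-style microaggregation in Python (an approximation of the standard heuristic, not the authors' modified Lloyd algorithm): every published record is replaced by the centroid of a group of at least k records.

```python
import numpy as np

def mdav_like(points, k):
    """Simplified MDAV-style k-anonymous microaggregation sketch:
    repeatedly group the point furthest from the running centroid with
    its k-1 nearest neighbours, then publish group centroids only."""
    pts = np.asarray(points, dtype=float)
    idx = np.arange(len(pts))
    groups = []
    while len(idx) >= 2 * k:
        centroid = pts[idx].mean(axis=0)
        far = idx[np.argmax(np.linalg.norm(pts[idx] - centroid, axis=1))]
        order = np.argsort(np.linalg.norm(pts[idx] - pts[far], axis=1))
        group = idx[order[:k]]           # far point plus its k-1 nearest
        groups.append(group)
        idx = np.setdiff1d(idx, group)
    groups.append(idx)                   # remaining < 2k points form one group
    anonymized = pts.copy()
    for g in groups:
        anonymized[g] = pts[g].mean(axis=0)
    return anonymized, groups

data = np.random.default_rng(0).normal(size=(10, 2))
anon, groups = mdav_like(data, k=3)
print([len(g) for g in groups])  # every group has at least k = 3 members
```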

16.
We investigate unbalanced cut problems. A cut (A,B) is called unbalanced if the size of its smaller side is at most k (called k-size) or exactly k (called Ek-size), where k is an input parameter. We consider two closely related unbalanced cut problems, in which the quality of a cut is measured with respect to the sparsity and the conductance, respectively. We show that even if the input graph is restricted to be a tree, the Ek-Sparsest Cut problem (finding an Ek-size cut with minimum sparsity) is still NP-hard. We give a bicriteria approximation algorithm for the k-Sparsest Cut problem (finding a k-size cut with minimum sparsity), which outputs a cut whose sparsity is at most O(log n) times the optimum and whose smaller side has size at most O(log n)·k. As a consequence, this leads to an (O(log n), O(log n))-bicriteria approximation algorithm for the Min k-Conductance problem (finding a k-size cut with minimum conductance).
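Under one common normalization (assumed here; the paper may define the ratios slightly differently), sparsity divides the crossing edges by the smaller side's cardinality, and conductance by the smaller side's volume. A toy sketch that also brute-forces the best Ek-size cut:

```python
from itertools import combinations
import networkx as nx

def sparsity(G, A):
    """Crossing edges divided by the size of the smaller side."""
    A, B = set(A), set(G) - set(A)
    crossing = sum(1 for u, v in G.edges() if (u in A) != (v in A))
    return crossing / min(len(A), len(B))

def conductance(G, A):
    """Crossing edges divided by the smaller side's volume (degree sum)."""
    A, B = set(A), set(G) - set(A)
    crossing = sum(1 for u, v in G.edges() if (u in A) != (v in A))
    vol = lambda S: sum(G.degree(v) for v in S)
    return crossing / min(vol(A), vol(B))

def best_ek_size_cut(G, k, measure):
    """Brute force over all cuts whose smaller side has size exactly k."""
    return min(combinations(G.nodes(), k), key=lambda A: measure(G, A))

G = nx.barbell_graph(5, 0)              # two K5's joined by one edge
A = best_ek_size_cut(G, 5, conductance)
print(sorted(A), conductance(G, A))     # one clique side; conductance 1/21
```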

17.
We present a new mixed explicit-implicit time stepping scheme for solving the linear advection equation on a Cartesian cut cell mesh. We use a standard second-order explicit scheme on Cartesian cells away from the embedded boundary. On cut cells, we use an implicit scheme for stability. This approach overcomes the small cell problem (standard explicit schemes are not stable on arbitrarily small cut cells) while keeping the cost fairly low. We examine several approaches for coupling the schemes in one dimension. For one of them, which we refer to as flux bounding, we can show a TVD result when a first-order implicit scheme is used. We also describe a mixed scheme using a second-order implicit scheme, and combine both mixed schemes by an FCT approach to retain monotonicity. In the second part of this paper, extensions of the second-order mixed scheme to two and three dimensions are discussed and the corresponding numerical results are presented. These indicate that the mixed scheme is second-order accurate in \(L^1\) and between first- and second-order accurate along the embedded boundary in two and three dimensions.
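The small cell problem itself is easy to demonstrate in one dimension. The following toy sketch illustrates only the motivation, not the paper's flux-bounding or FCT coupling: it runs first-order upwind advection on a grid with one tiny cell, where the explicit update violates the CFL condition dt ≤ h/a and blows up while backward Euler stays bounded.

```python
import numpy as np

def advect(h, a=1.0, dt=0.02, steps=100, implicit=False):
    """First-order upwind advection on a nonuniform periodic 1D grid."""
    n = len(h)
    x = np.cumsum(h) - h / 2                         # cell centers
    u = np.exp(-10.0 * (x - x.mean()) ** 2)          # smooth initial bump
    # upwind difference operator D: (u_i - u_{i-1}) / h_i, periodic
    D = (np.eye(n) - np.roll(np.eye(n), -1, axis=1)) / h[:, None]
    for _ in range(steps):
        if implicit:
            u = np.linalg.solve(np.eye(n) + dt * a * D, u)  # backward Euler
        else:
            u = u - dt * a * (D @ u)                         # forward Euler
    return u

h = np.full(50, 0.1)
h[25] = 0.001                     # one arbitrarily small "cut" cell
print(np.abs(advect(h, implicit=False)).max())  # blows up (CFL violated)
print(np.abs(advect(h, implicit=True)).max())   # stays bounded
```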

18.
In this paper we consider the vertex ranking problem of weighted trees. We show that this problem is strongly NP-hard. We also give a polynomial-time reduction from vertex ranking of weighted trees to vertex ranking of (simple) chordal graphs, which proves that the latter problem is NP-hard. In this way we solve an open problem of Aspvall and Heggernes. We use this reduction together with the algorithm of Bodlaender et al. for vertex ranking of partial k-trees to give an exact polynomial-time algorithm for vertex ranking of a tree with bounded, integer-valued weight functions. This algorithm serves as a procedure in designing a PTAS for the weighted vertex ranking problem of trees with bounded weight functions.
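For the underlying unweighted notion, a ranking is valid when any two equal-rank vertices are separated by a higher-ranked one; equivalently, in the subgraph induced by ranks ≤ r, each connected component holds at most one rank-r vertex. A small validity checker, for intuition only (it does not compute optimal rankings):

```python
def is_valid_ranking(adj, rank):
    """Check a vertex ranking via the component characterization: among
    vertices of rank <= r, every connected component may contain at most
    one vertex of rank exactly r."""
    for r in set(rank.values()):
        allowed = {v for v in adj if rank[v] <= r}
        seen = set()
        for s in allowed:
            if s in seen:
                continue
            comp, stack = {s}, [s]       # collect one induced component
            while stack:
                for w in adj[stack.pop()]:
                    if w in allowed and w not in comp:
                        comp.add(w)
                        stack.append(w)
            seen |= comp
            if sum(1 for v in comp if rank[v] == r) > 1:
                return False
    return True

# Path a-b-c: ranks 1,2,1 are valid; ranks 1,1,2 are not.
path = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(is_valid_ranking(path, {"a": 1, "b": 2, "c": 1}))  # True
print(is_valid_ranking(path, {"a": 1, "b": 1, "c": 2}))  # False
```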

19.
20.
We propose a new hierarchical tree algorithm, highly adaptive to various particle distributions, for long-range force calculations. This algorithm divides parent cells into k daughter cells using the k-means algorithm. The tree structure produced by this algorithm is independent of the coordinate system used. The method also includes a unique procedure for determining cell sizes adjusted to the particle distribution. We investigated the characteristics of the tree structure and the effect of the branching ratio k on the performance of the long-range force calculation. The results of numerical experiments using various particle distributions showed that the number of interactions between particles and cells grows with k, but the number of distance evaluations between particles and cells is lowest when k is around 5. We can therefore select an optimized value of k according to the characteristics of the problem to be analyzed. Comparing the algorithm to the Barnes-Hut treecode on gravitational calculations at the same error level, we found that the calculation cost could be reduced remarkably.
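The core construction, splitting cells with k-means rather than by coordinate planes, can be sketched compactly. This is a hypothetical illustration (the leaf_size cutoff and plain Lloyd splitter are assumptions, not the authors' implementation):

```python
import numpy as np

def kmeans(points, k, iters=20, rng=None):
    """Plain Lloyd iteration used as the cell-splitting rule."""
    rng = rng or np.random.default_rng(0)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        label = ((points[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([points[label == i].mean(0) if (label == i).any()
                            else centers[i] for i in range(k)])
    return label

def build_kmeans_tree(points, k=5, leaf_size=16):
    """Recursively split a cell into (up to) k daughter cells with k-means,
    so the tree adapts to the particle distribution instead of following
    fixed coordinate planes."""
    node = {"center": points.mean(axis=0), "count": len(points), "children": []}
    if len(points) <= leaf_size:
        node["points"] = points
        return node
    kk = min(k, len(points))
    label = kmeans(points, kk)
    children = [points[label == i] for i in range(kk) if (label == i).any()]
    if len(children) < 2:                # degenerate split: stop recursing
        node["points"] = points
        return node
    node["children"] = [build_kmeans_tree(c, k, leaf_size) for c in children]
    return node

particles = np.random.default_rng(1).normal(size=(1000, 3))
tree = build_kmeans_tree(particles, k=5)
print(tree["count"], len(tree["children"]))  # 1000 particles, up to 5 daughters
```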
