Similar Documents
10 similar documents found; search took 125 ms.
1.
Local-search Extraction of MUSes (cited by 2: 0 self-citations, 2 external)
SAT is probably one of the most-studied constraint satisfaction problems. In this paper, a new hybrid technique based on local search is introduced to approximate and extract minimally unsatisfiable subformulas (MUSes, for short) of unsatisfiable SAT instances. It is based on an original counting heuristic grafted onto a local search algorithm, which explores the neighborhood of the current interpretation in a novel manner using the concept of a critical clause. Intuitively, a critical clause is a falsified clause that can be made true by a local search flip only when some other clauses become false at the same time. The paper investigates the critical clause concept and shows it to be the cornerstone of the approach's efficiency, which most of the time outperforms competing methods for computing MUSes, inconsistent covers, and sets of MUSes.
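The critical-clause test can be made concrete. Below is a minimal illustrative sketch, not the paper's implementation; the clause encoding (lists of signed integers) and the example formula are assumptions. A falsified clause is critical when every flip that would satisfy it also falsifies at least one currently satisfied clause:

```python
def eval_clause(clause, assign):
    # A clause is a list of non-zero ints: v means var v true, -v means var v false.
    return any((lit > 0) == assign[abs(lit)] for lit in clause)

def is_critical(clause, clauses, assign):
    """A falsified clause is 'critical' if every flip that satisfies it
    also falsifies at least one currently satisfied clause."""
    if eval_clause(clause, assign):
        return False          # only falsified clauses can be critical
    for lit in clause:        # flipping var(lit) would satisfy this clause
        v = abs(lit)
        trial = dict(assign)
        trial[v] = not trial[v]
        broken = sum(1 for c in clauses
                     if eval_clause(c, assign) and not eval_clause(c, trial))
        if broken == 0:
            return False      # some repairing flip breaks nothing
    return True

# Tiny example: (x1) and (¬x1 ∨ x2), with x1 = x2 = False.
clauses = [[1], [-1, 2]]
assign = {1: False, 2: False}
print(is_critical([1], clauses, assign))
```

Here the unit clause (x1) is critical: the only repairing flip (setting x1 true) falsifies (¬x1 ∨ x2).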

2.
黄金贵, 王胜春. 《软件学报》 (Journal of Software), 2018, 29(12): 3595-3603
The Boolean satisfiability problem (SAT) asks whether a given Boolean formula admits a satisfying truth assignment. It was the first problem proven NP-complete, and it is generally believed to admit no polynomial-time algorithm unless P=NP. Researchers have mostly studied k-SAT, the restriction in which clause length is bounded by k, and have produced many relatively efficient algorithms, both randomized and deterministic, ranging from global search to local search. The best current time bound is O((2-2/k)^n); for k=3, the best algorithm runs in O(1.308^n) time. The more general SAT problem, with no bound on clause length, has received little attention. This paper introduces a class of separable SAT problems, the 3-regular separable satisfiability problem (3-RSSAT), proves that 3-RSSAT is NP-complete, and gives an O(1.890^n) algorithm for deciding the 3-regular separability of a general SAT instance. Then, drawing on results for matrix multiplication, it gives an O(1.890^n) exact algorithm for 3-RSSAT that is independent of clause length.
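For contrast with the bounds quoted above, here is the trivial exhaustive O(2^n) baseline that exact algorithms such as the cited O(1.308^n) 3-SAT solver improve upon (an illustrative sketch; the clause encoding as lists of signed integers is an assumption):

```python
from itertools import product

def brute_force_sat(clauses, n):
    """Exhaustive 2^n check over all assignments -- the naive baseline
    that exact SAT algorithms improve upon."""
    for bits in product([False, True], repeat=n):
        assign = {i + 1: bits[i] for i in range(n)}
        if all(any((lit > 0) == assign[abs(lit)] for lit in c) for c in clauses):
            return assign     # first satisfying assignment found
    return None               # formula is unsatisfiable

# (x1 ∨ x2) ∧ (¬x1 ∨ x3) ∧ (¬x2 ∨ ¬x3)
print(brute_force_sat([[1, 2], [-1, 3], [-2, -3]], 3))
```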

3.
Schöning 《Algorithmica》2008,32(4):615-623
Abstract. A simple probabilistic algorithm for solving the NP-complete problem k-SAT is reconsidered. This algorithm follows a well-known local-search paradigm: randomly guess an initial assignment and then, guided by the clauses that are not satisfied, try to find a satisfying assignment by successively choosing a random literal from such a clause and flipping the corresponding truth value. Papadimitriou [11] introduced this random approach and applied it to the case of 2-SAT, obtaining an expected O(n^2) time bound. The novelty here is to restart the algorithm after 3n unsuccessful steps of local search. The analysis shows that for any satisfiable k-CNF formula with n variables, the expected number of repetitions until a satisfying assignment is found this way is (2·(k-1)/k)^n. Thus, for 3-SAT the algorithm presented here has a complexity within a polynomial factor of (4/3)^n. This is the fastest and also the simplest among the algorithms known to date for 3-SAT that achieve an o(2^n) time bound. The analysis is also quite simple compared with other such algorithms considered before.
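The restart scheme described above is simple enough to sketch directly. This is an illustrative toy implementation, not tuned code; the clause encoding and the `max_tries` restart bound are assumptions:

```python
import random

def schoening_3sat(clauses, n, max_tries=200):
    """Schöning-style randomized 3-SAT search: random restart every 3n flips,
    each flip repairs a random literal of a random unsatisfied clause."""
    for _ in range(max_tries):
        # fresh uniformly random assignment on each restart
        assign = {v: random.random() < 0.5 for v in range(1, n + 1)}
        for _ in range(3 * n):
            unsat = [c for c in clauses
                     if not any((l > 0) == assign[abs(l)] for l in c)]
            if not unsat:
                return assign
            lit = random.choice(random.choice(unsat))
            assign[abs(lit)] = not assign[abs(lit)]
        # restart after 3n unsuccessful local-search steps
    return None

random.seed(0)
clauses = [[1, 2, 3], [-1, 2, 3], [1, -2, 3], [1, 2, -3]]
print(schoening_3sat(clauses, 3))
```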

4.
We introduce a new framework for logic-based probabilistic modeling called constraint-based probabilistic modeling, which defines CBPMs (constraint-based probabilistic models), i.e. conditional joint distributions P(⋅∣KB) over independent propositional variables constrained by a knowledge base KB consisting of clauses. We first prove that generative models such as PCFGs and discriminative models such as CRFs have equivalent CBPMs as long as they are discrete. We then prove that there exist CBPMs in infinite domains that assign probability one to the existentially closed logical consequences of KB. Finally, we derive an EM algorithm for the parameter learning of CBPMs and apply it to statistical abduction.
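The EM scheme mentioned at the end can be illustrated on a much simpler model. The sketch below runs generic EM on a two-coin mixture; it is not the paper's CBPM derivation, and the data, starting biases, and iteration count are all illustrative assumptions:

```python
def em_two_coins(flips, p_a=0.6, p_b=0.4, iters=20):
    """Generic EM on a two-coin mixture: each row of `flips` is
    (heads, tails) drawn from one of two coins with unknown biases."""
    for _ in range(iters):
        ha = ta = hb = tb = 0.0
        for h, t in flips:
            la = (p_a ** h) * ((1 - p_a) ** t)   # likelihood under coin A
            lb = (p_b ** h) * ((1 - p_b) ** t)   # likelihood under coin B
            w = la / (la + lb)                   # E-step: responsibility of A
            ha += w * h; ta += w * t
            hb += (1 - w) * h; tb += (1 - w) * t
        p_a = ha / (ha + ta)                     # M-step: re-estimate biases
        p_b = hb / (hb + tb)
    return p_a, p_b

flips = [(9, 1), (8, 2), (2, 8), (1, 9), (9, 1)]
pa, pb = em_two_coins(flips)
print(round(pa, 3), round(pb, 3))
```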

5.
In this paper, we address the sonar-based navigation of mobile robots. The extended Kalman filtering (EKF) technique is considered, but from a deterministic rather than a stochastic point of view. For this problem, we present results on the robustness of the non-linear discrete-time observation scheme. This work builds directly on our previous paper on continuous-time EKF; here we provide the discrete-time counterpart of those results. The original feature of our approach is that the region-of-convergence question is posed in its complete non-linear framework, that is, considering the dynamics not only of the estimation error ζ but also of the covariance matrix P. This makes the treatment less conservative and improves the convergence analysis. In discrete time, new difficulties arise in proving convergence: the Jacobians of f and h, A_k and C_k respectively, are evaluated along different trajectories, and the exponential weighting factor α has a multiplicative effect on A_k instead of the additive effect that results in the continuous case. These problems make it harder to prove that the Lyapunov function V is decreasing; we solve this by adapting some ideas from Safonov. The proposed ideas were tested successfully in simulation experiments with a mobile platform.
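A single discrete-time EKF predict/update iteration, in a scalar setting, looks as follows. This is a generic textbook sketch, not the paper's sonar model; the dynamics f, the measurement h, and the noise levels Q and R are invented for illustration:

```python
import math

def ekf_step(x_est, P, z, f, h, df, dh, Q, R):
    """One discrete-time EKF iteration (scalar sketch):
    predict with f, then correct with a measurement z of h."""
    # Predict
    x_pred = f(x_est)
    A = df(x_est)                 # Jacobian of f at the current estimate
    P_pred = A * P * A + Q
    # Update
    C = dh(x_pred)                # Jacobian of h at the prediction
    K = P_pred * C / (C * P_pred * C + R)      # Kalman gain
    x_new = x_pred + K * (z - h(x_pred))
    P_new = (1 - K * C) * P_pred
    return x_new, P_new

# Toy system: x_{k+1} = x_k + 0.1 sin(x_k), measurement z = x^2 + noise.
f = lambda x: x + 0.1 * math.sin(x)
h = lambda x: x * x
df = lambda x: 1 + 0.1 * math.cos(x)
dh = lambda x: 2 * x
x, P = 1.0, 1.0
for z in [1.3, 1.5, 1.6]:
    x, P = ekf_step(x, P, z, f, h, df, dh, Q=0.01, R=0.1)
print(round(x, 3), round(P, 4))
```

Note how the covariance P shrinks as measurements arrive; the paper's point is precisely that the joint dynamics of the error and of P must be analyzed together.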

6.
We consider the problem of maintaining polynomial and exponential decay aggregates of a data stream, where the weight of values seen from the stream diminishes as time elapses. These types of aggregation were discussed by Cohen and Strauss (J. Algorithms 1(59), 2006) and are useful in many applications in which the relative value of streaming data decreases after the data is seen. Space-efficient algorithms have recently been developed for time-decaying aggregations, in particular polynomial and exponential decay; all work so far has maintained multiplicative approximations of the aggregates. In this paper we present the first O(log N) space algorithm for polynomial decay under a multiplicative approximation, matching a lower bound. In addition, we explore and develop algorithms and lower bounds for approximations that allow an additive error on top of the multiplicative one. We show that in some cases allowing an additive error can decrease the space required, while in other cases we cannot do better than a solution without additive error.
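For the exponential-decay case, the simplest of the aggregations discussed, an exact decayed sum can be maintained in O(1) space by renormalizing the running sum to the last arrival time (an illustrative sketch; the decay rate `lam` is an assumed parameter, and the harder polynomial-decay case needs the paper's O(log N) machinery):

```python
import math

class ExpDecaySum:
    """Exponentially decayed sum over a stream: at query time t, each item
    (v, t_i) contributes v * exp(-lam * (t - t_i)). Only O(1) state is kept."""
    def __init__(self, lam):
        self.lam = lam
        self.s = 0.0      # decayed sum, normalized to time self.t
        self.t = 0.0

    def add(self, value, t):
        # decay the running sum forward to time t, then add the new item
        self.s *= math.exp(-self.lam * (t - self.t))
        self.s += value
        self.t = t

    def query(self, t):
        return self.s * math.exp(-self.lam * (t - self.t))

agg = ExpDecaySum(lam=0.5)
agg.add(10.0, t=0.0)
agg.add(10.0, t=1.0)
print(round(agg.query(t=1.0), 4))
```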

7.
The satisfiability problem is a well-known NP-complete problem. In artificial intelligence, solving the satisfiability problem is called mechanical theorem proving. One such mechanical theorem proving method is the resolution principle, invented by J. A. Robinson. In this paper, we show how an algorithm based on the resolution principle can be analyzed. Let n and r_0 denote the numbers of variables and input clauses respectively, and let P_0 denote the probability that a variable appears positively, or negatively, in a clause. Our analysis shows that the expected total number of clauses processed by our algorithm is O(n + r_0) if P_0 is a constant, r_0 is polynomially related to n, and n is large. This research was supported partially by the National Science Council of the Republic of China under Grant NSC 78-0408-E007-06.
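A single resolution step, the core operation of the resolution principle, can be sketched as follows (the clause encoding as frozensets of signed integers is an assumption for illustration, not the paper's representation):

```python
def resolve(c1, c2):
    """All resolvents of two clauses (frozensets of non-zero int literals):
    for each complementary pair l, -l, merge the remaining literals.
    The empty resolvent signals a refutation."""
    resolvents = []
    for lit in c1:
        if -lit in c2:
            merged = (c1 - {lit}) | (c2 - {-lit})
            # skip tautologies (resolvents containing both v and -v)
            if not any(-m in merged for m in merged):
                resolvents.append(frozenset(merged))
    return resolvents

# (x1 ∨ x2) resolved with (¬x1 ∨ x3) gives (x2 ∨ x3)
print(resolve(frozenset({1, 2}), frozenset({-1, 3})))
```

A resolution prover saturates the clause set with such steps; the analysis above counts the expected number of clauses processed during that saturation.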

8.
We present a directed Markov random field (MRF) model that combines n‐gram models, probabilistic context‐free grammars (PCFGs), and probabilistic latent semantic analysis (PLSA) for the purpose of statistical language modeling. Even though the composite directed MRF model potentially has an exponential number of loops and becomes a context‐sensitive grammar, we are nevertheless able to estimate its parameters in cubic time using an efficient modified Expectation‐Maximization (EM) method, the generalized inside–outside algorithm, which extends the inside–outside algorithm to incorporate the effects of the n‐gram and PLSA language models. We generalize various smoothing techniques to alleviate the sparseness of n‐gram counts in cases where there are hidden variables. We also derive an analogous algorithm to find the most likely parse of a sentence and to calculate the probability of an initial subsequence of a sentence, all generated by the composite language model. Our experimental results on the Wall Street Journal corpus show that we obtain significant reductions in perplexity compared to the state‐of‐the‐art baseline trigram model with Good–Turing and Kneser–Ney smoothing techniques.
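Smoothing of sparse n-gram counts can be illustrated with a much simpler scheme than the Good–Turing or Kneser–Ney techniques mentioned above: the sketch below applies add-one smoothing to trigram counts, purely as a stand-in (the tiny corpus and all names are invented for illustration):

```python
from collections import Counter
import math

def trigram_logprob(tokens, counts3, counts2, vocab_size):
    """Add-one smoothed trigram log-probability of a token sequence:
    P(w_i | w_{i-2} w_{i-1}) = (c3 + 1) / (c2 + V)."""
    lp = 0.0
    for i in range(2, len(tokens)):
        tri = tuple(tokens[i - 2:i + 1])
        big = tuple(tokens[i - 2:i])
        p = (counts3[tri] + 1) / (counts2[big] + vocab_size)
        lp += math.log(p)
    return lp

corpus = "the cat sat on the mat".split()
c3 = Counter(tuple(corpus[i:i + 3]) for i in range(len(corpus) - 2))
c2 = Counter(tuple(corpus[i:i + 2]) for i in range(len(corpus) - 1))
V = len(set(corpus))
print(round(trigram_logprob("the cat sat".split(), c3, c2, V), 4))
```

Unseen trigrams still get non-zero probability here, which is the basic point of any smoothing scheme; Good–Turing and Kneser–Ney redistribute the mass far more carefully.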

9.
Randomised restarted search in ILP (cited by 1: 0 self-citations, 1 external)
Recent statistical performance studies of search algorithms in difficult combinatorial problems have demonstrated the benefits of randomising and restarting the search procedure. Specifically, it has been found that if the search cost distribution of the non-restarted randomised search exhibits a slower-than-exponential decay (that is, a “heavy tail”), restarts can reduce the search cost expectation. We report on an empirical study of randomised restarted search in ILP. Our experiments conducted on a high-performance distributed computing platform provide an extensive statistical performance sample of five search algorithms operating on two principally different classes of ILP problems, one represented by an artificially generated graph problem and the other by three traditional classification benchmarks (mutagenicity, carcinogenicity, finite element mesh design). The sample allows us to (1) estimate the conditional expected value of the search cost (measured by the total number of clauses explored) given the minimum clause score required and a “cutoff” value (the number of clauses examined before the search is restarted), (2) estimate the conditional expected clause score given the cutoff value and the invested search cost, and (3) compare the performance of randomised restarted search strategies to a deterministic non-restarted search. Our findings indicate striking similarities across the five search algorithms and the four domains, in terms of the basic trends of both the statistics (1) and (2). Also, we observe that the cutoff value is critical for the performance of the search algorithm, and using its optimal value in a randomised restarted search may decrease the mean search cost (by several orders of magnitude) or increase the mean achieved score significantly with respect to that obtained with a deterministic non-restarted search. Editors: Rui Camacho
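The cutoff mechanism can be sketched generically. The toy model below is not one of the ILP searches studied; the per-step success probability is an invented stand-in for a search-cost distribution:

```python
import random

def restarted_search(run_once, cutoff, max_restarts=1000):
    """Generic restart wrapper: run a randomized search for at most `cutoff`
    steps; restart until it succeeds. With heavy-tailed run-time
    distributions, a well-chosen cutoff lowers the expected total cost."""
    total_cost = 0
    for _ in range(max_restarts):
        result, steps = run_once(cutoff)
        total_cost += steps
        if result is not None:
            return result, total_cost
    return None, total_cost

# Toy randomized search: each step independently succeeds with probability 0.02.
def run_once(cutoff):
    for step in range(1, cutoff + 1):
        if random.random() < 0.02:
            return "solution", step
    return None, cutoff

random.seed(1)
print(restarted_search(run_once, cutoff=50))
```

In the study above, the cutoff plays exactly this role, with the number of clauses explored as the cost measure.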
