Similar Documents
20 similar documents found.
1.
Adaptive control is considered for a two-dimensional linear discrete-time plant with randomly drifting parameters. The certainty-equivalent minimum variance control law is used together with a projection-like identification algorithm. Stability of the parameter estimates and exponential stability of the closed-loop system are proved in the absence of any persistent excitation assumption.
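A minimal scalar sketch of the scheme this abstract describes, assuming a toy plant y[t+1] = a·y[t] + b·u[t] + w[t] rather than the paper's two-dimensional, parameter-drifting setting; the certainty-equivalence controller and the normalized projection-style update below are standard textbook forms, not the paper's exact algorithm:

```python
import numpy as np

# Illustrative scalar analogue: certainty-equivalent minimum variance
# control with a projection-like (normalized gradient) estimator.
rng = np.random.default_rng(0)
a_true, b_true = 0.7, 1.0          # drifting in the paper; fixed here
theta_hat = np.array([0.0, 0.5])   # estimates [a_hat, b_hat]
y = 0.0
for t in range(200):
    a_hat, b_hat = theta_hat
    b_safe = b_hat if abs(b_hat) > 1e-3 else 1e-3   # avoid division by ~0
    u = -a_hat * y / b_safe        # certainty equivalence: predicted y set to 0
    phi = np.array([y, u])         # regressor
    y_next = a_true * y + b_true * u + 0.1 * rng.standard_normal()
    err = y_next - phi @ theta_hat
    # projection-like update: normalized step toward reducing prediction error
    theta_hat = theta_hat + phi * err / (1.0 + phi @ phi)
    y = y_next
print("final estimates:", theta_hat)
```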

2.
Symbolic trajectory evaluation provides a means to formally verify properties of a sequential system by a modified form of symbolic simulation. The desired system properties are expressed in a notation combining Boolean expressions and the temporal logic next-time operator. In its simplest form, each property is expressed as an assertion [A ⇒ C], where the antecedent A expresses some assumed conditions on the system state over a bounded time period, and the consequent C expresses conditions that should result. A generalization allows simple invariants to be established and proven automatically. The verifier operates on system models in which the state space is ordered by information content. By suitable restrictions to the specification notation, we guarantee that for every trajectory formula, there is a unique weakest state trajectory that satisfies it. Therefore, we can verify an assertion [A ⇒ C] by simulating the system over the weakest trajectory for A and testing adherence to C. Also, establishing invariants corresponds to simple fixed-point calculations. This paper presents the general theory underlying symbolic trajectory evaluation. It also illustrates the application of the theory to the task of verifying switch-level circuits as well as more abstract implementations. This research was supported by the Defense Advanced Research Projects Agency, ARPA Order Number 4976, by the National Science Foundation, under grant number MIP-8913667, by operating grant OGPO 109688 from the Natural Sciences and Engineering Research Council of Canada, and by a fellowship from the British Columbia Advanced Systems Institute.
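A toy sketch of the flavor of trajectory evaluation, assuming a scalar three-valued domain {0, 1, X} (X carries no information, sitting below 0 and 1 in the information ordering) and a single unit-delay AND gate; real STE simulates symbolic values over lattice-structured models:

```python
X = 'X'   # the "unknown" value: weakest point of the information ordering

def and3(p, q):
    # three-valued AND: 0 dominates, X propagates otherwise
    if p == 0 or q == 0:
        return 0
    if p == X or q == X:
        return X
    return 1

def check(antecedent, consequent, steps=2):
    """Check an assertion [A => C] on a unit-delay AND gate out' = a AND b.
    antecedent/consequent: dicts {time: {node: value}}."""
    state = {'a': X, 'b': X, 'out': X}       # weakest starting point: all X
    for t in range(steps):
        state.update(antecedent.get(t, {}))  # weakest trajectory satisfying A
        for node, want in consequent.get(t, {}).items():
            if state[node] != want:
                return False
        state = {'a': X, 'b': X, 'out': and3(state['a'], state['b'])}
    return True

# assert: if a=1 and b=1 at time 0, then out=1 at time 1
print(check({0: {'a': 1, 'b': 1}}, {1: {'out': 1}}))   # True
# with b unconstrained (X), out=1 is not guaranteed
print(check({0: {'a': 1}}, {1: {'out': 1}}))           # False
```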

3.
Reconfigurable S-Box Hardware Implementation Based on the AES and DES Algorithms
Reconfigurability in a cipher chip improves not only its security but also its adaptability. The S-box is a key component of many cryptographic algorithms, so its reconfigurability strongly affects the reconfigurability of the cipher chip as a whole. Building on an analysis of hardware implementations of the S-boxes in the AES and DES algorithms, and applying hardware-reuse and reconfiguration concepts and techniques, this paper proposes a reconfigurable S-box (RC-S) structure and its implementation. Experimental results show that the RC-S can be used in hardware implementations of both AES and DES. The area of the RC-S-based AES and DES cipher modules is 0.81 and 1.13 times that of dedicated AES and DES modules, respectively, and their performance is 0.79 and 0.94 times that of the dedicated DES and AES modules.
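A software sketch of the two functions such a reconfigurable S-box has to cover; the RC-S itself is a hardware structure, so this only illustrates the shared lookup behavior, with the AES S-box derived from GF(2^8) inversion plus the affine map, and DES's S1 standing in as the representative DES S-box:

```python
def gf_mul(a, b):                       # multiply in GF(2^8) mod x^8+x^4+x^3+x+1
    r = 0
    while b:
        if b & 1:
            r ^= a
        a = (a << 1) ^ (0x11B if a & 0x80 else 0)
        b >>= 1
    return r

def gf_inv(a):                          # inverse via a^254 (Fermat); inv(0)=0
    r = 1
    for _ in range(254):
        r = gf_mul(r, a)
    return r

def aes_sbox(x):                        # GF(2^8) inversion + affine transform
    b = gf_inv(x)
    rot = lambda v, n: ((v << n) | (v >> (8 - n))) & 0xFF
    return b ^ rot(b, 1) ^ rot(b, 2) ^ rot(b, 3) ^ rot(b, 4) ^ 0x63

DES_S1 = [  # DES S-box 1: 4 rows x 16 columns, 6-bit input -> 4-bit output
    [14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7],
    [0, 15, 7, 4, 14, 2, 13, 1, 10, 6, 12, 11, 9, 5, 3, 8],
    [4, 1, 14, 8, 13, 6, 2, 11, 15, 12, 9, 7, 3, 10, 5, 0],
    [15, 12, 8, 2, 4, 9, 1, 7, 5, 11, 3, 14, 10, 0, 6, 13],
]

def des_s1(x):                          # row = outer two bits, column = middle four
    row = ((x >> 4) & 2) | (x & 1)
    col = (x >> 1) & 0xF
    return DES_S1[row][col]

def rc_sbox(x, mode):                   # one lookup unit, two configurations
    return aes_sbox(x & 0xFF) if mode == 'AES' else des_s1(x & 0x3F)

print(hex(rc_sbox(0x53, 'AES')))        # 0xed (known AES S-box value)
print(rc_sbox(0b000000, 'DES'))         # 14
```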

4.
This paper deals with the statistical efficiency of estimation methods for passage times in closed, multiclass networks of queues with priorities. Informally, a passage time is the time for a job to traverse a portion of the network. Such quantities are important in computer and communication system models, and in this context, quantities other than mean values are of interest. We consider here the efficiencies of the marked job method for passage time simulation (based on the tracking of a distinguished job) and the decomposition method, in which observed passage times for all of the jobs enter into the construction of point and interval estimates. We show that the decomposition method is superior in that, for simulations of equal length, it produces tighter confidence intervals. We also calculate theoretical values for variance constants entering into central limit theorems used to obtain confidence intervals for mean passage times. These results provide a means of quantifying the relative efficiency of the decomposition method.
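A hedged illustration of the two estimators on a deliberately simplified model: a closed cyclic network of two exponential FIFO stations (no priorities or classes), with the passage time taken as the time to traverse station 0, and naive confidence intervals that ignore autocorrelation; the paper's analysis is precisely about doing this rigorously:

```python
import heapq, random, statistics

def simulate(num_jobs=4, mu=(1.0, 1.5), completions=40000, seed=7):
    """Closed cyclic network of two exponential FIFO stations; records each
    job's passage time through station 0."""
    random.seed(seed)
    q = [list(range(num_jobs)), []]         # FIFO queues; all jobs at station 0
    busy = [False, False]
    arrive0 = [0.0] * num_jobs              # last arrival time at station 0
    passages = [[] for _ in range(num_jobs)]
    heap, done = [], 0

    def start(s, now):                      # begin service on head of queue
        if q[s] and not busy[s]:
            busy[s] = True
            heapq.heappush(heap, (now + random.expovariate(mu[s]), s))

    start(0, 0.0)
    while done < completions:
        t, s = heapq.heappop(heap)
        j = q[s].pop(0)                     # job whose service just finished
        busy[s] = False
        if s == 0:                          # passage through station 0 complete
            passages[j].append(t - arrive0[j])
            q[1].append(j)
        else:                               # job returns to station 0
            arrive0[j] = t
            q[0].append(j)
        start(0, t); start(1, t)
        done += 1
    return passages

def naive_ci_halfwidth(xs):                 # ignores autocorrelation -- crude
    return 1.96 * statistics.stdev(xs) / len(xs) ** 0.5

p = simulate()
marked = p[0]                               # track one distinguished job only
decomp = [x for js in p for x in js]        # use every job's passage times
print("marked job :", statistics.mean(marked), "+/-", naive_ci_halfwidth(marked))
print("decomposed :", statistics.mean(decomp), "+/-", naive_ci_halfwidth(decomp))
```

With the same simulated time, the decomposition estimator has roughly num_jobs times as many observations, which is the intuition behind its tighter intervals.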

5.
The AI methodology of qualitative reasoning furnishes useful tools to scientists and engineers who need to deal with incomplete system knowledge during design, analysis, or diagnosis tasks. Qualitative simulators have a theoretical soundness guarantee: they cannot overlook any concrete equation implied by their input. On the other hand, the basic qualitative simulation algorithms have been shown to suffer from the incompleteness problem: they may allow non-solutions of the input equation to appear in their output. The question of whether a simulator with purely qualitative input which never predicts spurious behaviors can ever be achieved by adding new filters to the existing algorithm has remained unanswered. In this paper, we show that, if such a sound and complete simulator exists, it will have to be able to handle numerical distinctions with such high precision that it must contain a component that would better be called a quantitative, rather than qualitative, reasoner. This is due to the ability of the pure qualitative format to allow the exact representation of the members of a rich set of numbers.
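A tiny sign-algebra example of the incompleteness the abstract refers to, assuming the usual qualitative sign domain; the ambiguous sum forces a simulator to keep branches that no real solution of the underlying equation exhibits:

```python
# Why purely qualitative reasoning admits spurious behaviors: the sign
# algebra cannot decide the sign of (+) + (-), so a simulator must branch
# on all three outcomes, including ones no concrete model realizes.
SIGNS = ('-', '0', '+')

def qadd(a, b):
    """Qualitative addition over signs; '?' means ambiguous."""
    if a == '0':
        return b
    if b == '0' or a == b:
        return a
    return '?'                      # opposite signs: magnitudes unknown

# x + y = z with x positive, y negative: all three signs of z survive
result = qadd('+', '-')
branches = SIGNS if result == '?' else (result,)
print("possible qualitative values of z:", branches)
# A concrete model, e.g. x=3, y=-1, fixes z>0; the branches z=0 and z<0
# are spurious for that model but cannot be filtered qualitatively.
```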

6.
We present a framework for an automated generation of exact search tree algorithms for NP-hard problems. The purpose of our approach is twofold: rapid development and improved upper bounds. Many search tree algorithms for various problems in the literature are based on complicated case distinctions. Our approach may lead to a much simpler process of developing and analyzing these algorithms. Moreover, using the sheer computing power of machines, it may also lead to improved upper bounds on search tree sizes (i.e., faster exact solving algorithms) in comparison with previously developed hand-made search trees. Among others, such an example is given with the NP-complete Cluster Editing problem (also known as Correlation Clustering on complete unweighted graphs), which asks for the minimum number of edge additions and deletions to create a graph which is a disjoint union of cliques. The hand-made search tree for Cluster Editing had worst-case size O(2.27^k), which now is improved to O(1.92^k) due to our new method. (Herein, k denotes the number of edge modifications allowed.)
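A sketch of the simple hand-made baseline that such automated case analysis improves upon: branching on a conflict triple yields a size-O(3^k) search tree for Cluster Editing (the paper's generated case distinctions bring this down to O(1.92^k)); the adjacency-set encoding here is an illustrative choice:

```python
from itertools import combinations

def conflict(adj):
    """Find a conflict triple (u, v, w): edges uv, uw present, vw absent."""
    for u in adj:
        for v, w in combinations(sorted(adj[u]), 2):
            if w not in adj[v]:
                return u, v, w
    return None

def toggle(adj, a, b):                      # add edge ab if absent, else delete
    adj[a].symmetric_difference_update({b})
    adj[b].symmetric_difference_update({a})

def cluster_edit(adj, k):
    """True iff adj can be made a disjoint union of cliques with <= k edits."""
    t = conflict(adj)
    if t is None:
        return True                         # already a union of cliques
    if k == 0:
        return False
    u, v, w = t
    for a, b in ((u, v), (u, w), (v, w)):   # delete uv / delete uw / add vw
        toggle(adj, a, b)
        if cluster_edit(adj, k - 1):
            toggle(adj, a, b)               # restore caller's graph
            return True
        toggle(adj, a, b)                   # undo edit before next branch
    return False

# path on 3 vertices: one edit (delete an edge or add the missing one) suffices
P3 = {1: {2}, 2: {1, 3}, 3: {2}}
print(cluster_edit(P3, 0), cluster_edit(P3, 1))   # False True
```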

7.
A new definition is given for the average growth of a function f: Σ* → ℕ with respect to a probability measure μ on Σ*. This allows us to define meaningful average distributional complexity classes for arbitrary time bounds (previously, one could not guarantee arbitrarily good precision). It is shown that, basically, only the ranking of the inputs by decreasing probabilities is of importance. To compare the average and worst case complexity of problems, we study average complexity classes defined by a time bound and a bound on the complexity of possible distributions. Here, the complexity is measured by the time to compute the rank functions of the distributions. We obtain tight and optimal separation results between these average classes. Also, the worst case classes can be embedded into this hierarchy. They are shown to be identical to average classes with respect to distributions of exponential complexity.
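A toy illustration (hypothetical bound and data, not the paper's formal definition) of the ranking observation: sort inputs by decreasing probability and compare running times against a bound applied to the rank:

```python
def rank_function(prob):
    """Map each input to its 1-based rank when sorted by decreasing probability."""
    order = sorted(prob, key=prob.get, reverse=True)
    return {x: i + 1 for i, x in enumerate(order)}

prob = {'a': 0.5, 'bb': 0.3, 'ccc': 0.15, 'dddd': 0.05}
time = {'a': 2, 'bb': 5, 'ccc': 9, 'dddd': 15}
rank = rank_function(prob)
# "T grows on average like g" style check: T(x) <= g(rank(x)) for all x,
# here with an assumed toy bound g(r) = r**2 + 1.
ok = all(time[x] <= rank[x] ** 2 + 1 for x in prob)
print(rank, ok)
```

Note that only the ordering of the probabilities matters to this check; rescaling them without changing the ranking leaves the outcome unchanged, which is the point the abstract makes.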

8.
More resources are spent on maintaining software than on its development. Maintenance costs for large scale software systems can amount to somewhere between 40 and 67% of the total system life cycle cost. It is therefore important to manage maintenance costs and to balance costs with benefits. Frequently this task is approached, at least in the literature, merely as a software cost estimation problem. Unfortunately, the creation of effort estimation models for maintenance – a primary requisite for cost calculation – has not yet been satisfactorily addressed. At the same time, project managers do not estimate costs first, but instead prioritize maintenance projects, trying to determine which projects to carry out (first) within their fixed budgets and resource capabilities. This essentially means that cost estimation is done qualitatively before formal cost estimation techniques are employed. Recognizing the problems associated with standard, regression-based estimation models, and focusing on the needs of software project managers, this research studied the process of project prioritization as an expert problem solving and decision making task, through concurrently taken (think-aloud) protocols. Analysis of these protocols revealed that experts rarely make use of formal mathematical models, such as COCOMO or FPA, to determine project priorities or resource needs, although project size is a key determinant of a project's priority. Instead, estimators qualitatively consider the cost or value, urgency, and difficulty of a maintenance task, then prioritize projects accordingly, followed by a decision concerning further treatment of the problem. The process employs case-based reasoning and the use of heuristics. While different experts may use different strategies, there exists great overlap in their overall prioritization procedure.

9.
The minimal entailment ⊨_Min has been characterized elsewhere by the criterion (1): P ⊨_Min φ iff Cn(P ∪ {¬φ}) ∩ Pos ⊆ Cn(P), where Cn is the first-order consequence operation, P is a set of clauses (an indefinite deductive data base; in short: a data base), φ is a clause (a query), and Pos is the set of positive (that is, bodiless) ground clauses. In this paper, we address the problem of the computational feasibility of criterion (1). Our objective is to find a query evaluation algorithm that decides P ⊨_Min φ by what we call indefinite modeling, without actually computing all ground positive consequences of P and P ∪ {¬φ}. For this purpose, we introduce the concept of the minimal indefinite Herbrand model M_P of P, which is defined as the set of subsumption-minimal ground positive clauses provable from P. The algorithm first computes M_P by finding the least fixed point of an indefinite consequence operator T_P^I. Next, the algorithm verifies whether every ground positive clause derivable from M_P ∪ {¬φ} by one application of the parallel positive resolution rule (in short: the PPR rule) is subsumed by an element of M_P. We prove that the PPR rule, which can derive only positive clauses, is positively complete; that is, every positive clause provable from a data base P is derivable from P by means of subsumption and finitely many applications of PPR. From this we conclude that the presented algorithm is partially correct and that it eventually halts if both P and M_P are finite. Moreover, we indicate how the algorithm can be modified to handle data bases with infinite indefinite Herbrand models. This modification leads to a concept of universal model that allows for nonground clauses in its Herbrand base and appears to be a good candidate for the representation of indefinite deductive data bases.
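A propositional sketch of the fixed-point computation, assuming rules with disjunctive heads and a naive hyperresolution-style operator standing in for T_P^I; subsumption-minimality is enforced after each step:

```python
from itertools import product

def minimize(clauses):
    """Keep only subsumption-minimal clauses (clauses are sets of atoms)."""
    return {c for c in clauses if not any(d < c for d in clauses)}

def step(rules, model):
    derived = set(model)
    for head, body in rules:
        body = list(body)
        # choose, for every body atom, a model clause containing it; the
        # leftover disjuncts of each chosen clause carry into the conclusion
        for choice in product(*[[c for c in model if b in c] for b in body]):
            derived.add(frozenset(head).union(*(c - {b} for c, b in zip(choice, body))))
    return minimize(derived)

def minimal_model(rules):
    model = minimize({frozenset(h) for h, b in rules if not b})
    while True:
        nxt = step(rules, model)
        if nxt == model:
            return model
        model = nxt

# P: { p or q.   r <- p.   r <- q. }  -- r holds in every minimal model
rules = [({'p', 'q'}, set()), ({'r'}, {'p'}), ({'r'}, {'q'})]
print(minimal_model(rules))   # {frozenset({'p','q'}), frozenset({'r'})}
```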

10.
This paper describes the redesign of a systems engineering language called χ. This is an engineering language designed to specify and analyse industrial systems. The main objective of this redesign was to enable mathematical reasoning about specifications. We discuss the original language, the requirements and design decisions, and the resulting syntax and semantics of the new version of χ, called χ_σ. In particular, we elaborate on semantical aspects of χ_σ's time model.

11.
dBus-array(k,n) is an n-dimensional processor array of k^n nodes connected via k^(n−1) dBuses. A dBus is a unidirectional bus which receives signals from a set of k nodes (the input set) and transmits signals to a different set of k nodes (the output set). Two optical implementations of the dBus-array(k,n) are discussed. One implementation uses wavelength division multiplexing as in the wavelength division multiple access channel hypercube WMCH [7]. WMCH(k,n) and dBus-array(k,n) have the same diameter and about the same average internode distance, while the dBus-array requires only one tunable transmitter/receiver per node, compared to n tunable transmitters/receivers per node for the WMCH. The other implementation uses one fixed-wavelength transmitter/receiver per node and the dilated slipped banyan switching network (DSB) [17] to combine time division and wavelength division multiplexing.

12.
Ideas from random graph theory are used to give a heuristic argument that associative memory structure depends discontinuously on pattern recognition ability. This argument suggests that there may be a certain minimal size for intelligent systems.

13.
This article is the twenty-seventh in a series of articles discussing various open research problems in automated reasoning. The problem proposed for research asks one to find criteria that an automated reasoning program can apply to determine whether to attack a given question with reasoning by analogy. The imprecise term "reasoning by analogy" refers to a type of reasoning in which the type of proof being sought is sharply influenced by the style of proof that was successfully used to prove related theorems. This work was supported by the Office of Scientific Computing, U.S. Department of Energy, under Contract W-31-109-Eng-38.

14.
This article is a report on research in progress into the structure of finite diagrams of intuitionistic propositional logic, with the aid of automated reasoning systems for the larger calculations. A fragment of a propositional logic is the set of formulae built up from a finite number of propositional variables by means of a number of connectives of the logic, possibly including non-standard ones like ¬¬, which are studied here. The diagram of that fragment is the set of equivalence classes of its formulae partially ordered by the derivability relation. N.G. de Bruijn's concept of exact model has been used to construct subdiagrams of the [p, q, ∧, →, ¬]-fragment.

15.
We describe a 3D surface-tracking algorithm that is used to detect the interior laminar surfaces of a solid shell. Each of these surfaces is called a peel. Successive peels may be generated, thus representing the solid shell by its tangential layers. This algorithm is based on voxel surface-tracking methods and solves the problems associated with transforming a surface-tracking algorithm into a brain peeler. We also discuss the properties of the voxel surfaces produced by this algorithm. Using the connectivity properties of these objects, we are able to convert voxel representations into polyhedral representations without human interaction. We illustrate this work with a high-resolution reconstruction of a monkey visual cortex. Additional application domains of this work are in areas in which there is a natural laminar structure to a 3D solid, such as in geophysics (earth strata). Supported by System Development Foundation, AFOSR 85-0341, and the Nathan S. Kline Psychiatric Research Center.
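An erosion-style sketch of successive peels, assuming a boolean voxel volume and 6-connectivity; the paper's algorithm tracks surfaces rather than eroding volumes, so this only illustrates the layer decomposition idea:

```python
import numpy as np

def peel(solid):
    """Return (surface_mask, remaining_solid) for a 3-D boolean array: the
    peel is every solid voxel with at least one empty 6-neighbor."""
    padded = np.pad(solid, 1, constant_values=False)   # out-of-bounds = empty
    interior = np.ones_like(solid)
    for axis in range(3):                    # a voxel is interior only if all
        for shift in (-1, 1):                # six face-neighbors are solid
            interior &= np.roll(padded, shift, axis)[1:-1, 1:-1, 1:-1]
    return solid & ~interior, solid & interior

shell = np.ones((5, 5, 5), dtype=bool)       # a 5x5x5 solid cube
layer = 0
while shell.any():
    surface, shell = peel(shell)
    layer += 1
    print(f"peel {layer}: {int(surface.sum())} voxels")
# peel 1: 98 voxels (outer shell), peel 2: 26, peel 3: 1
```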

16.
We consider the relation between knowledge and certainty, where a fact is known if it is true at all worlds an agent considers possible, and is certain if it holds with probability 1. We identify certainty with belief, interpreted probabilistically. We show that if we assume one fixed probability assignment, then the logic KD45, which has been identified as perhaps the most appropriate logic for belief, provides a complete axiomatization for reasoning about certainty. Just as an agent may believe a fact φ although φ is false, he may be certain that a fact φ is true although φ is false. However, it is easy to see that an agent can have such false (probabilistic) beliefs only at a set of worlds of probability 0. If we restrict attention to structures where all worlds have positive probability, then S5 provides a complete axiomatization. If we consider a more general setting, where there might be a different probability assignment at each world, then by placing appropriate conditions on the support of the probability function (the set of worlds which have non-zero probability), we can capture many other well-known modal logics, such as T and S4. Finally, we consider Miller's principle, a well-known principle relating higher-order probabilities to lower-order probabilities, and show that in a precise sense KD45 characterizes certainty in those structures satisfying Miller's principle.
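A toy model (hypothetical worlds and probabilities, not the paper's formalism) of the knowledge/certainty split under one fixed probability assignment, including the measure-zero false belief the abstract mentions:

```python
worlds = {'w1': 0.5, 'w2': 0.5, 'w3': 0.0}       # probabilities sum to 1
facts = {'w1': {'p'}, 'w2': {'p'}, 'w3': set()}  # atoms true at each world

def known(atom):
    # known: true at every world the agent considers possible
    return all(atom in facts[w] for w in worlds)

def certain(atom):
    # certain: the set of worlds where it fails has probability 0
    return sum(pr for w, pr in worlds.items() if atom not in facts[w]) == 0.0

print(known('p'), certain('p'))   # False True: p fails only at w3 (prob 0)
# At w3 the agent is certain of p even though p is false there -- a false
# probabilistic belief, possible only on a probability-0 set of worlds.
```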

17.
Our long-term goal is the development of a general framework for specifying, structuring, and interoperating provers. Our main focus is on the formalization of the architectural and implementational choices that underlie the construction of such systems. This paper has two main goals. The first is to introduce the main intuitions underlying the proposed framework. We concentrate on its use in the integration of provers. The second is the development of the notion of reasoning theory, meant as the formalization of the notion of implementation of the logic of a prover. As an example we sketch an analysis, at the reasoning theory level, of the integration of linear arithmetic into the NQTHM simplification process.

18.
N. Young, Algorithmica, 1994, 11(6): 525–541
Weighted caching is a generalization of paging in which the cost to evict an item depends on the item. We study both of these problems as restrictions of the well-known k-server problem, which involves moving servers in a graph in response to requests so as to minimize the distance traveled. We give a deterministic on-line strategy for weighted caching that, on any sequence of requests, given a cache holding k items, incurs a cost within a factor of k/(k−h+1) of the minimum cost possible given a cache holding h items. The strategy generalizes least recently used, one of the best paging strategies in practice. The analysis is a primal-dual analysis, the first for an on-line problem, exploiting the linear programming structure of the k-server problem. We introduce loose competitiveness, motivated by Sleator and Tarjan's complaint [ST] that the standard competitive ratios for paging strategies are too high. A k-server strategy is loosely c(k)-competitive if, for any sequence, for almost all k, the cost incurred by the strategy with k servers either is no more than c(k) times the minimum cost or is insignificant. We show that certain paging strategies (including least recently used, and first in first out) that are k-competitive in the standard model are loosely c(k)-competitive provided c(k)/ln k → ∞ and both k/c(k) and c(k) are nondecreasing. We show that the marking algorithm, a randomized paging strategy that is Θ(ln k)-competitive in the standard model, is loosely c(k)-competitive provided c(k) − 2 ln ln k → ∞ and both 2 ln k − c(k) and c(k) are nondecreasing. This paper is the journal version of "On-line Caching as Cache Size Varies", which appeared in the Proceedings of the 2nd Annual ACM-SIAM Symposium on Discrete Algorithms (1991). Details beyond those in this paper may be found in "Competitive Paging and Dual-Guided Algorithms for Weighted Caching and Matching", which is the author's thesis and is available as Technical Report CS-TR-348-91 from the Computer Science Department at Princeton University. This research was performed while the author was at the Computer Science Department, Princeton University, Princeton, NJ 08544, USA, and was supported by the Hertz Foundation.
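A sketch of the marking algorithm, one of the paging strategies the abstract analyzes; the phase structure follows the standard description of the algorithm, and the request sequence is an arbitrary toy choice:

```python
import random

def marking(requests, k, seed=0):
    """Randomized marking: hits mark the page; a miss evicts a uniformly
    random unmarked page; when every cached page is marked, a new phase
    begins with all marks cleared."""
    rng = random.Random(seed)
    cache, marked, faults = set(), set(), 0
    for p in requests:
        if p not in cache:
            faults += 1
            if len(cache) == k:
                if not cache - marked:        # all marked: start a new phase
                    marked.clear()
                victim = rng.choice(sorted(cache - marked))
                cache.remove(victim)
            cache.add(p)
        marked.add(p)                          # requested pages stay marked
    return faults

reqs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5] * 50
print("faults with k=4:", marking(reqs, 4))
```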

19.
For over a decade, researchers in formal methods have tried to create formalisms that permit natural specification of systems and allow mathematical reasoning about their correctness. The availability of fully automated reasoning tools enables non-experts to use formal methods effectively: their responsibility reduces to specifying the model and expressing the desired properties. Thus, it is essential that these properties be represented in a language that is easy to use, sufficiently expressive, and succinct. Linear-time temporal logic (LTL) is a formalism that has been used extensively by researchers for program specification and verification. One of the desired properties of LTL formulas is closure under stuttering. That is, we do not want the interpretation of formulas to change over traces where some states are repeated. This property is important from both practical and theoretical perspectives; all properties which are closed under stuttering can be expressed in LTL−X, a fragment of LTL without the next operator. However, it is often difficult to express properties in this fragment of LTL. Further, determining whether a given LTL property is closed under stuttering is PSPACE-complete. In this paper, we introduce a notion of edges of LTL formulas and present a formal theory of closure under stuttering. Edges allow natural modelling of systems with events. Our theory enables syntactic reasoning about whether the resulting properties are closed under stuttering. Finally, we apply the theory to the pattern-based approach of specifying temporal formulas.
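A finite-trace sketch of closure under stuttering, checking two toy properties against a trace and its destuttered form; "G p" (globally p) is unaffected by the duplicated state while "X p" (next p) changes value:

```python
from itertools import groupby

def destutter(trace):
    """Collapse consecutive repeated states: s,s,t -> s,t."""
    return [s for s, _ in groupby(trace)]

def globally_p(trace):                 # G p: p holds at every state
    return all(st['p'] for st in trace)

def next_p(trace):                     # X p: p holds at the second state
    return len(trace) > 1 and trace[1]['p']

s, t = {'p': True}, {'p': False}
stuttered = [s, s, t]                  # s repeated (a stuttering step)
plain = destutter(stuttered)           # [s, t]
print(globally_p(stuttered) == globally_p(plain))   # True  (closed)
print(next_p(stuttered) == next_p(plain))           # False (not closed)
```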

20.
General Convergence Results for Linear Discriminant Updates
The problem of learning linear-discriminant concepts can be solved by various mistake-driven update procedures, including the Winnow family of algorithms and the well-known Perceptron algorithm. In this paper we define the general class of quasi-additive algorithms, which includes Perceptron and Winnow as special cases. We give a single proof of convergence that covers a broad subset of algorithms in this class, including both Perceptron and Winnow, but also many new algorithms. Our proof hinges on analyzing a generic measure of progress construction that gives insight as to when and how such algorithms converge. Our measure of progress construction also permits us to obtain good mistake bounds for individual algorithms. We apply our unified analysis to new algorithms as well as existing algorithms. When applied to known algorithms, our method automatically produces close variants of existing proofs (recovering similar bounds), thus showing that, in a certain sense, these seemingly diverse results are fundamentally isomorphic. However, we also demonstrate that the unifying principles are more broadly applicable, and we analyze a new class of algorithms that smoothly interpolate between the additive-update behavior of Perceptron and the multiplicative-update behavior of Winnow.
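A hedged sketch of the quasi-additive idea: keep one additive tally z of mistake-driven corrections and obtain the weight vector as a componentwise function f(z); the identity recovers the Perceptron update while exp gives Winnow-style multiplicative weights (a toy framing of the class, not the paper's proofs; data and function names are illustrative):

```python
import numpy as np

def train(X, y, f, epochs=100):
    """Generic quasi-additive, mistake-driven learner: predict with f(z),
    update the shared additive tally z only on mistakes."""
    z, mistakes = np.zeros(X.shape[1]), 0
    for _ in range(epochs):
        clean = True
        for xi, yi in zip(X, y):
            if yi * (f(z) @ xi) <= 0:       # mistake-driven: update on errors only
                z += yi * xi                # the additive tally is shared
                mistakes += 1
                clean = False
        if clean:                           # separated the sample
            break
    return f(z), mistakes

# toy +/-1-featured sample, separable by a nonnegative weight vector
X = np.array([[+1, +1, -1], [-1, +1, +1], [+1, -1, +1], [-1, -1, -1]], float)
y = np.array([+1, +1, -1, -1])
for name, f in [("perceptron ", lambda z: z), ("winnow-like", np.exp)]:
    w, m = train(X, y, f)
    print(name, "mistakes:", m, "final signs:", np.sign(X @ w).astype(int))
```

Swapping only f while reusing the same mistake-driven loop is exactly the sense in which the two update families are instances of one scheme.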
