Similar Literature
20 similar documents found (search time: 31 ms)
1.
The temporal property to-always has been proposed for specifying progress properties of concurrent programs. Although the to-always properties are a subset of the leads-to properties for a given program, to-always has more convenient proof rules and in some cases more accurately describes the desired system behavior. In this paper, we give a predicate transformer wta, derive some of its properties, and use it to define to-always. Proof rules for to-always are derived from the properties of wta. We conclude by briefly describing two application areas, nondeterministic data flow networks and self-stabilizing systems, where to-always properties are useful.

2.
Dr. T. Ström 《Computing》1972,10(1-2):1-7
It is a commonly occurring problem to find good norms ‖·‖ or logarithmic norms μ(·) for a given matrix A, in the sense that they should be close to the spectral radius ρ(A) and the spectral abscissa α(A), respectively. Examples may be the certification that A is convergent, i.e. ρ(A) ≤ ‖A‖ < 1, or stable, i.e. α(A) ≤ μ(A) < 0. Often the ordinary norms do not suffice, and one would like to try simple modifications of them, such as using an ordinary norm for a diagonally transformed matrix. This paper treats this problem for some of the ordinary norms.
Minimization of Norms and Logarithmic Norms by Diagonal Transformations
Summary: A frequently occurring practical problem is the construction of good norms ‖·‖ and logarithmic norms μ(·) for a given matrix A. Here "good" means that ‖A‖ approximates the spectral radius ρ(A) = max_i |λ_i| well, and μ(A) the spectral abscissa α(A) = max_i Re λ_i. Examples are found for convergent matrices, where ρ(A) ≤ ‖A‖ < 1 is desired, and for stable matrices, where α(A) ≤ μ(A) < 0 is to be shown. We investigate here how far one can get with diagonal transformations and the most common norms.
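As a concrete illustration of the effect studied here, the following is a minimal numerical sketch (not from the paper), assuming NumPy; the matrix A and the diagonal scaling d are invented for illustration, and only the infinity norm and its logarithmic norm are considered.

import numpy as np

A = np.array([[0.5, 10.0],
              [0.0, 0.6]])

def norm_inf(M):
    # induced infinity norm: maximum absolute row sum
    return np.linalg.norm(M, np.inf)

def log_norm_inf(M):
    # logarithmic norm for the infinity norm: max_i ( Re m_ii + sum_{j != i} |m_ij| )
    n = M.shape[0]
    return max(M[i, i].real + sum(abs(M[i, j]) for j in range(n) if j != i)
               for i in range(n))

rho = max(abs(np.linalg.eigvals(A)))     # spectral radius rho(A)
alpha = max(np.linalg.eigvals(A).real)   # spectral abscissa alpha(A)

d = np.array([1.0, 100.0])               # invented diagonal scaling
D = np.diag(d)
B = D @ A @ np.linalg.inv(D)             # diagonally transformed matrix D A D^{-1}

print("rho(A)   =", rho,   "  ||A||_inf =", norm_inf(A), "  ||D A D^-1||_inf =", norm_inf(B))
print("alpha(A) =", alpha, "  mu_inf(A) =", log_norm_inf(A), "  mu_inf(D A D^-1) =", log_norm_inf(B))

For this A the plain norm and logarithmic norm (10.5) wildly overestimate ρ(A) = α(A) = 0.6, while the diagonally scaled versions equal 0.6, certifying convergence and stability.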

3.
There has been substantial research on various aspects of people's usage of physical libraries but relatively little on their interaction with individual library artefacts; that is, books, journals, and papers. We have studied people's behaviour when working in physical libraries, focusing particularly on how they interact with these artefacts, how they evaluate them, and how they interact with librarians. This study provides a better understanding of how people interact with paper information, from which we can draw implications for some requirements of the design of digital libraries, while recognising that the term "library" is a metaphor when applied to electronic document collections. In particular, improved communication with other library users and with librarians could facilitate more rapid access to relevant information and support services, and structuring information presentation so that users can make rapid assessments of its relevance would improve the efficiency of many information searches.

4.
Learning to Play Chess Using Temporal Differences   (Total citations: 4; self-citations: 0; by others: 4)
Baxter, Jonathan; Tridgell, Andrew; Weaver, Lex 《Machine Learning》2000,40(3):243-263
In this paper we present TDLEAF(λ), a variation on the TD(λ) algorithm that enables it to be used in conjunction with game-tree search. We present some experiments in which our chess program KnightCap used TDLEAF(λ) to learn its evaluation function while playing on Internet chess servers. The main success we report is that KnightCap improved from a 1650 rating to a 2150 rating in just 308 games and 3 days of play. As a reference, a rating of 1650 corresponds to about level B human play (on a scale from E (1000) to A (1800)), while 2150 is human master level. We discuss some of the reasons for this success, principal among them being the use of on-line play rather than self-play. We also investigate whether TDLEAF(λ) can yield better results in the domain of backgammon, where TD(λ) has previously yielded striking success.
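For readers unfamiliar with the method, here is a schematic sketch of a TDLeaf(λ)-style parameter update assuming a linear evaluation function; it is not the KnightCap code, and the function and variable names are invented for illustration.

import numpy as np

def tdleaf_update(weights, leaf_features, leaf_values, alpha=0.01, lam=0.7):
    """One pass of a TDLeaf(lambda)-style update for a linear evaluation.
    weights       : parameter vector of the evaluation function (numpy array)
    leaf_features : feature vectors at the principal-variation leaves of
                    successive search positions (list of numpy arrays)
    leaf_values   : the corresponding leaf evaluations (list of floats)"""
    n = len(leaf_values)
    # temporal differences between successive principal-variation leaves
    deltas = [leaf_values[t + 1] - leaf_values[t] for t in range(n - 1)]
    for t in range(n - 1):
        # lambda-weighted sum of the temporal differences from time t onward
        weighted = sum(lam ** (j - t) * deltas[j] for j in range(t, n - 1))
        # for a linear evaluation, the gradient w.r.t. the weights is the feature vector
        weights = weights + alpha * weighted * leaf_features[t]
    return weights

# invented usage: three positions, three-dimensional feature vectors
w = tdleaf_update(np.zeros(3),
                  [np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0]), np.array([1.0, 1.0, 0.0])],
                  [0.1, 0.3, -0.2])

The key difference from plain TD(λ) is that the evaluations and gradients are taken at the leaves of the principal variation returned by search, not at the root positions themselves.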

5.
This paper describes a unified variational theory for design sensitivity analysis of the nonlinear dynamic response of structural and mechanical systems, for shape, nonshape, material and mechanical property selection, as well as control problems. The concept of an adjoint system, the principle of virtual work and a Lagrangian-Eulerian formulation to describe the deformations and the design variations are used to develop a unified viewpoint. A general formula for design sensitivity analysis is derived and interpreted for usual performance functionals. Analytical examples are utilized to demonstrate the use of the theory and give insights for application to more complex problems that must be treated numerically.
Derivatives: the comma notation is used for partial derivatives, i.e. G,_u = ∂G/∂u. An upper dot represents the material time derivative, i.e. ü = ∂²u/∂t². A prime denotes a derivative with respect to the time measured in the reference time domain, i.e. u′ = du/dτ.
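As background, the adjoint idea referred to above can be seen in the following static special case (a sketch, not the paper's dynamic Lagrangian-Eulerian formula): for a state equation R(u, b) = 0 and a performance functional ψ = g(u(b), b), an adjoint variable removes the implicit derivative du/db from the design sensitivity.

% differentiate the state equation, then define the adjoint variable \lambda
R_{,u}\,\frac{du}{db} = -R_{,b},
\qquad
R_{,u}^{T}\,\lambda = g_{,u}^{T}
\;\Longrightarrow\;
\frac{d\psi}{db} = g_{,b} + g_{,u}\,\frac{du}{db} = g_{,b} - \lambda^{T} R_{,b}.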

6.
Semantics connected to some information-based metaphor are well known in the logic literature: a paradigmatic example is the Kripke semantics for Intuitionistic Logic. In this paper we start from the concrete problem of providing suitable logic-algebraic models for the calculus of attribute dependencies in Formal Contexts with information gaps, and we obtain an intuitive model based on the notion of passage of information, showing that Kleene algebras, semi-simple Nelson algebras, three-valued Łukasiewicz algebras and Post algebras of order three are, in a sense, naturally and directly connected to partially defined information systems. In this way we can provide for these logic-algebraic structures a raison d'être different from the original motivations concerning, for instance, computability theory.
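To make the three-valued setting tangible, here is a small sketch (not from the paper) of the strong Kleene connectives on the truth values 0 (false), 0.5 (information gap) and 1 (true); Kleene and three-valued Łukasiewicz algebras share these operations and differ in the implication added on top.

# Strong Kleene connectives on three truth values; 0.5 models an information gap.
VALUES = (0.0, 0.5, 1.0)

def k_not(x):
    return 1.0 - x

def k_and(x, y):
    return min(x, y)

def k_or(x, y):
    return max(x, y)

for x in VALUES:
    for y in VALUES:
        print(f"x={x} y={y}  not x={k_not(x)}  x and y={k_and(x, y)}  x or y={k_or(x, y)}")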

7.
Adam Drozdek 《AI & Society》1998,12(4):315-321
The Turing Test (TT) is criticised for various reasons, one being that it is limited to testing only human-like intelligence. We can read, for example, that TT is "testing humanity, not intelligence" (Fostel, 1993), that TT is a test for human intelligence, not intelligence in general (French, 1990), or that the perspective assumed by TT is "parochial", "arrogant" and, generally, "massively anthropocentric" (Hayes and Ford, 1996). This limitation presumably causes a basic inadequacy of TT, namely that it misses a wide range of intelligence by focusing on one possibility only, namely on human intelligence. The spirit of TT enforces making explanations of possible machine intelligence in terms of what is known about intelligence in humans; thus the possible specificity of computer intelligence is ruled out from the outset. This approach causes ire in some interpreters of the test and leads them to desire to create a theory of intelligence in general, thereby overcoming the limitations imposed by merely human intelligence. At times it is an emotion-laden discussion that does not hesitate to impute chauvinism to those limiting themselves to human-type intelligence. This discussion is, by the way, not unlike the rhetoric used by some defenders of animal rights, who insist that an expression of superiority of men over animals is a token of speciesism, and speciesism is just a moral mistake of the same sort as racism and sexism.

8.
Experiment 1 explored the impact of physically touching a virtual object on how realistic the virtual environment (VE) seemed to the user. Subjects in a "no touch" group picked up a 3D virtual image of a kitchen plate in a VE, using a traditional 3D wand. "See and touch" subjects physically picked up a virtual plate possessing solidity and weight, using a technique called tactile augmentation. Afterwards, subjects made predictions about the properties of other virtual objects they saw but did not interact with in the VE. "See and touch" subjects predicted these objects would be more solid, heavier, and more likely to obey gravity than the "no touch" group. In Experiment 2 (a pilot study), subjects physically bit a chocolate bar in one condition, and imagined biting a chocolate bar in another condition. Subjects rated the event more fun and more realistic when allowed to physically bite the chocolate bar. Results of the two experiments converge with a growing literature showing the value of adding physical qualities to virtual objects. This study is the first to empirically demonstrate the effectiveness of tactile augmentation as a simple, safe, inexpensive technique with large freedom of motion for adding physical texture, force feedback cues, smell and taste to virtual objects. Examples of practical applications are discussed. Based in part on "Physically touching virtual objects using tactile augmentation enhances the realism of virtual environments" by Hunter Hoffman, which appeared in the Proceedings of the IEEE Virtual Reality Annual International Symposium '98, Atlanta GA, pp 59–63. © 1998 IEEE.

9.
Exact upper bounds are obtained for the probability F(v) − F(u), 0 < u < v < ∞, on the set of distribution functions F(x) of nonnegative random variables with unimodal density with an arbitrary mode m ≥ 0 and one or two fixed first moments. Translated from Kibernetika i Sistemnyi Analiz, No. 5, pp. 72–83, September–October 2004.

10.
This paper addresses the use of artefacts as a powerful resource for analysis, focusing on the artefact as designed, as a means of eliciting the designer's explicit and implicit knowledge, and artefacts as used, as a means of uncovering the trail left by currently inactive processes. Artefact analysis is particularly suitable in situations where direct observation is ineffective, especially in activities that occur infrequently. We demonstrate the usefulness of our technique through the analysis of artefacts within both the office and the meeting environment. This is part of a wider study aimed at understanding the nature of decisions in meetings, with the view of producing a tool to aid decision management and hence reduce rework. We conclude by drawing out some general lessons from our analysis, which reaffirms the intricate role that artefacts play in maintaining activity dynamics.
Alan Dix

11.
Agents in a competitive interaction can greatly benefit from adapting to a particular adversary, rather than using the same general strategy against all opponents. One method of such adaptation is Opponent Modeling, in which a model of an opponent is acquired and utilized as part of the agent's decision procedure in future interactions with this opponent. However, acquiring an accurate model of a complex opponent strategy may be computationally infeasible. In addition, if the learned model is not accurate, then using it to predict the opponent's actions may potentially harm the agent's strategy rather than improve it. We thus define the concept of opponent weakness, and present a method for learning a model of this simpler concept. We analyze examples of past behavior of an opponent in a particular domain, judging its actions using a trusted judge. We then infer a weakness model based on the opponent's actions relative to the domain state, and incorporate this model into our agent's decision procedure. We also make use of a similar self-weakness model, allowing the agent to prefer states in which the opponent is weak and our agent strong, i.e. where we have a relative advantage over the opponent. Experimental results spanning two different test domains demonstrate the agent's improved performance when making use of the weakness models.
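The following is a deliberately schematic sketch of the general idea as described above, not the authors' algorithm: the state features, the judge's verdicts and the toy classifier are all invented placeholders.

from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass
class Example:
    features: Sequence[float]   # features of a past domain state
    opponent_was_weak: bool     # verdict of the trusted judge on the opponent's action there

def fit_weakness_model(examples: List[Example]) -> Callable[[Sequence[float]], float]:
    """Toy stand-in for any classifier: per-feature weighted vote based on how
    often the opponent was judged weak when the feature was active."""
    weights = [0.0] * len(examples[0].features)
    for ex in examples:
        for i, f in enumerate(ex.features):
            weights[i] += f if ex.opponent_was_weak else -f
    n = len(examples)
    return lambda feats: sum(w * f for w, f in zip(weights, feats)) / n

def choose_move(successor_states, opp_weakness, self_weakness):
    """Prefer the successor state with the largest relative advantage:
    predicted opponent weakness minus predicted self weakness."""
    return max(successor_states, key=lambda s: opp_weakness(s) - self_weakness(s))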

12.
The Π theorem of Dimensional Analysis, usually applied to the inference of physical laws, is here for the first time applied to the derivation of interpolation curves for numerical data, leading to a simplified dependence on a reduced number of arguments Π, dimensionless combinations of the variables. In particular, Monte Carlo modelling of electron beam lithography is considered and the backscattering coefficient η addressed, in the case of a general substrate layer, in the elastic regime and in the energy range 5 to 100 keV. The many variables involved (electron energy, substrate physical constants and thickness) are demonstrated to ultimately enter in determining η through a single dimensionless parameter Π₀. Thus, a scaling law is determined, an important guide in microsystem design, indicating, if any part of the configuration is modified, how the other parameters should change (or scale) without affecting the result. Finally, a simple law expressing η in terms of Π₀ alone is shown to account for all variations of the parameters over all substrates of the periodic table.
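For reference, the dimensional-analysis result invoked here is the standard Buckingham Π theorem (the specific Π₀ for backscattering is not reproduced):

% If a physically meaningful relation f(q_1, ..., q_n) = 0 involves n
% dimensional variables whose dimensions span k independent base dimensions,
% it can be rewritten as a relation among n - k dimensionless groups:
F(\Pi_1, \Pi_2, \dots, \Pi_{n-k}) = 0,
\qquad
\Pi_j = \prod_{i=1}^{n} q_i^{\,a_{ij}}.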

13.
This paper presents enhancements for robust two and three-quarter dimensional meshing, including: (1) automated interval assignment by integer programming for submapped surfaces and volumes, (2) surface submapping, and (3) volume submapping. An introduction to the simplex method, an optimization technique used in integer programming, is presented. Simplification of complex geometry is required for the formulation of the integer programming problem. A method of i-j unfolding is defined which explains how irregular geometry can be realigned into a simplified form that is suitable for submap interval assignment solutions. Also presented are the processes by which submapping eliminates the decomposition of surface geometry, through a pseudodecomposition process, producing suitable mapped meshes. The process of submapping involves the creation of interpolated virtual edges, user-defined vertex types and i-j-k space traversals. The creation of interpolated virtual edges is the method by which submapping automatically subdivides surface geometry. The interpolated virtual edge is formulated according to an interpolation scheme using the node discretization of curves on the surface. User-defined vertex types allow direct user control of surface decomposition and interval assignment by modifying i-j-k space traversals. Volume submapping takes the geometry decomposition to a higher level by using mapped virtual surfaces to eliminate decomposition of complex volumes.
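To make the interval-assignment idea concrete, here is a toy sketch (not the paper's formulation): opposite sides of a mapped surface must receive equal total interval counts, and we search small integer assignments that stay close to the counts suggested by a desired element size. The curve names and goal counts are invented, and a real implementation would use an integer-programming solver rather than enumeration.

from itertools import product

# One mapped surface: the "left" side is a single curve, the "right" side is
# split into two curves; the mapping constraint forces equal interval totals.
goals = {"left": 6, "right_a": 2, "right_b": 3}   # counts suggested by curve length / element size

best, best_cost = None, float("inf")
for left, right_a, right_b in product(range(1, 11), repeat=3):
    if left != right_a + right_b:                 # opposite sides must match
        continue
    cost = ((left - goals["left"]) ** 2
            + (right_a - goals["right_a"]) ** 2
            + (right_b - goals["right_b"]) ** 2)  # squared deviation from goals
    if cost < best_cost:
        best, best_cost = (left, right_a, right_b), cost

print("intervals (left, right_a, right_b):", best, "deviation:", best_cost)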

14.
In a model of a measure of computational complexity Φ, for a partial recursive function t, let R_t denote the class of all partial recursive functions having the same domain as t and computable within time t. Consider the family of all such classes R_t for recursive t, and the subfamily obtained when t is actually the running-time function of a computation; both are partially ordered under set-theoretic inclusion. These partial orderings have been extensively investigated by Borodin, Constable and Hopcroft in [3]. In this paper we present a simple uniform proof of some of their results. For example, we give a procedure for easily calculating a model of computational complexity for which one of these orderings is dense while the other is not. In our opinion, our technique is so transparent that it indicates that certain questions of density are not intrinsically interesting for general abstract measures of computational complexity. (This is not to say that similar questions are necessarily uninteresting for specific models.) Supported by NSF Research Grants GP6120 and GJ27127.

15.
In the paper "Robustness of Adaptive Control of Robots" by Ghorbel and Spong, the adaptive control of flexible-joint robots is investigated using a singular perturbation approach and composite Lyapunov theory. In this note, we correct a missing term in the boundary-layer system of that paper and then show that the stability analysis presented there remains valid.

16.
Within AI and the cognitively related disciplines, there exists a multiplicity of uses of "belief". On the face of it, these differing uses reflect differing views about the nature of an objective phenomenon called belief. In this paper I distinguish six distinct ways in which "belief" is used in AI. I shall argue that not all these uses reflect a difference of opinion about an objective feature of reality. Rather, in some cases, the differing uses reflect differing concerns with special AI applications. In other cases, however, genuine differences exist about the nature of what we pre-theoretically call belief. To an extent, the multiplicity of opinions about, and uses of, "belief" echoes the discrepant motivations of AI researchers. The relevance of this discussion for cognitive scientists and philosophers arises from the fact that (a) many regard theoretical research within AI as a branch of cognitive science, and (b) even if theoretical AI is not cognitive science, trends within AI influence theories developed within cognitive science. It should be beneficial, therefore, to unravel the distinct uses and motivations surrounding "belief", in order to discover which usages merely reflect differing pragmatic concerns, and which usages genuinely reflect divergent views about reality.

17.
We introduce a calculus which is a direct extension of both the λ- and the π-calculi. We give a simple type system for it that encompasses both Curry's type inference for the λ-calculus and Milner's sorting for the π-calculus as particular cases of typing. We observe that the various continuation-passing-style transformations for λ-terms, written in our calculus, actually correspond to encodings already given by Milner and others for evaluation strategies of λ-terms into the π-calculus. Furthermore, the associated sortings correspond to well-known double-negation translations on types. Finally we provide an adequate CPS transform from our calculus to the π-calculus. This shows that the latter may be regarded as an assembly language, while our calculus seems to provide a better programming notation for higher-order concurrency. We conclude by discussing some alternative design decisions.
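As a reminder of what a continuation-passing-style transformation does for λ-terms, here is a small sketch of the standard call-by-value (Plotkin-style) CPS transform, not the paper's encoding into its own calculus; the term representation and fresh-name scheme are invented for illustration.

import itertools

# Terms: ("var", x) | ("lam", x, body) | ("app", f, a)
_counter = itertools.count()
def fresh(prefix):
    return f"{prefix}{next(_counter)}"

def cps(term):
    """Call-by-value CPS transform:
       [x]      = \k. k x
       [\x. M]  = \k. k (\x. [M])
       [M N]    = \k. [M] (\m. [N] (\n. (m n) k))"""
    k = fresh("k")
    tag = term[0]
    if tag == "var":
        return ("lam", k, ("app", ("var", k), term))
    if tag == "lam":
        _, x, body = term
        return ("lam", k, ("app", ("var", k), ("lam", x, cps(body))))
    if tag == "app":
        _, f, a = term
        m, n = fresh("m"), fresh("n")
        return ("lam", k,
                ("app", cps(f),
                 ("lam", m,
                  ("app", cps(a),
                   ("lam", n,
                    ("app", ("app", ("var", m), ("var", n)), ("var", k)))))))
    raise ValueError(f"unknown term: {term!r}")

def show(t):
    tag = t[0]
    if tag == "var":
        return t[1]
    if tag == "lam":
        return f"(\\{t[1]}. {show(t[2])})"
    return f"({show(t[1])} {show(t[2])})"

# Example: CPS-transform the identity applied to itself.
identity = ("lam", "x", ("var", "x"))
print(show(cps(("app", identity, identity))))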

18.
We describe the specification, implementation and proof of correctness of a code generator for a subset of Gypsy 2.05. The code generator is specified in the Boyer-Moore logic; its proof is fully machine-checked using the Kaufmann-enhanced Boyer-Moore theorem prover. Our code generator sits atop a stack of verified system components, providing a prototype development environment for constructing highly reliable application programs.

19.
Tetsuji Iseda 《AI & Society》1999,13(1-2):156-163
This is a programmatic proposal about a better use of the notion of scientific rationality in the sociology of scientific knowledge (SSK). Strangely enough, some relativistic authors in the SSK literature allow room for scientific rationality in their analyses of scientific practice. I interpret their arguments as follows: since science is essentially a collective activity, any rationality developed and sustained in science should have some institutional basis analysable in sociological terms. I advocate that sociologists should explain such scientific rationality, especially the asymmetry between science and nonscience, in sociological terms. In some sense, my programme is even stronger than Bloor's strong programme.

20.
Given (1) Wittgenstein's externalist analysis of the distinction between following a rule and behaving in accordance with a rule, (2) prima facie connections between rule-following and psychological capacities, and (3) pragmatic issues about training, it follows that most, perhaps even all, future artificially intelligent computers and robots will not use language, possess concepts, or reason. This argument suggests that AI's traditional aim of building machines with minds, exemplified in current work on cognitive robotics, is in need of substantial revision.


