Similar Literature
A total of 20 similar documents were retrieved (search time: 23 ms).
1.
Dynamic planning approach to automated web service composition
In this paper, novel ideas are presented for solving the automated web service composition problem. Real-world difficulties such as partial observability of the environment, nondeterministic effects of web services, and service execution failures are addressed through a dynamic planning approach. The proposed approach is based on Simplanner, a novel AI planner designed to work in highly dynamic environments under time constraints. World-altering service calls are made according to the WS-Coordination and WS-BusinessActivity web service transaction specifications in order to recover physically from failure situations and to prevent the undesired side effects of aborted composition attempts.

2.
The uncertainties of planning engendered by nondeterminism and partial observability have led to a melding of model checking and artificial intelligence; the result is planning as model checking. Because planning as model checking tests sets of states and sets of transitions at once, rather than single states, the method remains robust and viable in domains with large state spaces and varying levels of uncertainty. We develop a test bench for Semantic Web agents and use model-based planning to derive strong plans, strong cyclic plans, and weak plans. Our results suggest potential robustness and efficacy in devising plans for agent actions in the Semantic Web environment.
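
The distinction between weak and strong plans can be made concrete with a small sketch. The following Python fragment is an assumed illustration, not code from the cited work: it computes a strong plan by backward reachability over sets of states, selecting an action for a state only when every nondeterministic outcome of that action is already known to lead to the goal.

    # Assumed sketch of "planning as model checking": build a strong plan by
    # backward reachability over sets of states in a nondeterministic domain.
    def strong_plan(states, actions, transition, goals, init):
        """transition(s, a) -> set of possible successor states (nondeterministic)."""
        covered = set(goals)          # states from which the goal is guaranteed
        plan = {}                     # state -> chosen action
        changed = True
        while changed:
            changed = False
            for s in states:
                if s in covered:
                    continue
                for a in actions:
                    succ = transition(s, a)
                    if succ and succ.issubset(covered):   # strong pre-image condition
                        plan[s] = a
                        covered.add(s)
                        changed = True
                        break
        # a strong plan exists iff every initial state is covered; relaxing the
        # subset test to "some outcome is covered" would instead yield a weak plan
        return plan if set(init).issubset(covered) else None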

3.
Model checking of real-time systems against Duration Calculus (DC) specifications requires the translation of DC formulae into automata-based semantics. The existing algorithms provide limited DC coverage and do not support compositional verification. We propose a translation algorithm that advances the applicability of model checking tools to realistic applications. Our algorithm significantly extends the subset of DC that can be checked automatically. The central part of the algorithm is the automatic decomposition of DC specifications into sub-properties that can be verified independently. The decomposition is based on a novel distributive law for DC. We implemented the algorithm in a tool chain for the automated verification of systems comprising data, communication, and real-time aspects. We applied the tool chain to verify safety properties in an industrial case study from the European Train Control System (ETCS).

4.
A hybrid-graph approach for automated setup planning in CAPP
In this paper, a systematic approach for automated setup planning in CAPP is introduced. The concept of a "hybrid graph", which can be transformed into a directed graph by converting each two-way edge into a one-way edge, is introduced, and this graph representation is used effectively in setup planning. Tolerance relations are used as critical constraints for setup planning, and comprehensive setup-planning principles are explored and summarized. Hybrid-graph theory, accompanied by matrix theory, is used to help computerize these principles. An example is presented to demonstrate the algorithm.
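
As an assumed illustration of the hybrid-graph idea (not the authors' algorithm), the sketch below treats a hybrid graph as a mix of directed precedence edges and undirected two-way edges, orients the two-way edges in every possible way, and keeps the orientations whose acyclic result yields a topological setup order. The networkx dependency and the feature names are assumptions made for the example.

    # Assumed sketch: enumerate orientations of the two-way edges of a hybrid
    # graph and return the setup sequences consistent with the constraints.
    from itertools import product
    import networkx as nx  # assumed dependency for graph utilities

    def setup_sequences(features, directed_edges, two_way_edges):
        sequences = []
        for orientation in product([0, 1], repeat=len(two_way_edges)):
            g = nx.DiGraph()
            g.add_nodes_from(features)
            g.add_edges_from(directed_edges)
            for (u, v), flip in zip(two_way_edges, orientation):
                g.add_edge(*((v, u) if flip else (u, v)))
            if nx.is_directed_acyclic_graph(g):
                sequences.append(list(nx.topological_sort(g)))
        return sequences  # candidate setup orders

    # Example: feature A must precede B (tolerance constraint); B and C may go either way.
    print(setup_sequences(["A", "B", "C"], [("A", "B")], [("B", "C")]))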

5.
Program checking is now a mature technology but is not yet used on a large scale. We identify one cause of this gap: checking tools are decoupled from everyday development tools. To change the situation radically, we explore the integration of simple user-defined checks into the core of every development process: the compiler. The checks we implement express constrained reachability queries over the control flow graph of the form "from x to y avoiding z", where x, y, and z are native code patterns containing a blend of syntactic, semantic, and dataflow information. Compiler integration enables continuous checking throughout development as well as pervasive propagation of checking technology. This integration poses interesting challenges, including tight bounds on acceptable overhead, but it also opens up new perspectives. Factoring analyses between checking and compiling improves both the efficiency and the expressiveness of the checks.
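
A "from x to y avoiding z" query is essentially constrained reachability over the control flow graph. The following Python sketch is an assumption made for illustration, not the paper's implementation: a breadth-first search that reports whether some path from an x-node reaches a y-node without passing through any z-node.

    # Minimal assumed sketch of a "from x to y avoiding z" reachability check.
    from collections import deque

    def path_exists_avoiding(cfg, is_x, is_y, is_z):
        """cfg: dict mapping node -> list of successor nodes;
        is_x/is_y/is_z: predicates matching the 'x', 'y' and 'z' code patterns."""
        frontier = deque(n for n in cfg if is_x(n) and not is_z(n))
        seen = set(frontier)
        while frontier:
            node = frontier.popleft()
            if is_y(node):
                return True                     # a matching path exists
            for succ in cfg.get(node, []):
                if succ not in seen and not is_z(succ):   # "avoiding z"
                    seen.add(succ)
                    frontier.append(succ)
        return False

    # Hypothetical example: flag a path from open() to return that avoids close().
    cfg = {"open": ["use"], "use": ["close", "ret"], "close": ["ret"], "ret": []}
    print(path_exists_avoiding(cfg, lambda n: n == "open",
                               lambda n: n == "ret", lambda n: n == "close"))  # True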

6.
Ji Lei. Computer Engineering and Design (计算机工程与设计), 2007, 28(11): 2658-2661, 2670
Planning based on model checking is a current focus of research on domain-independent planning, and it achieves comparatively high solving efficiency. This paper surveys the development and current state of research on planning via model checking, introduces the basic framework of model-checking-based planning, discusses the main applications of model checking techniques in the planning field, presents two typical planning tools based on model checking, and analyzes future development trends.

7.
The highly turbulent environment of dynamic job-shop operations affects the shop floor layout as well as manufacturing operations. Because of the dynamic nature of layout changes, essential requirements such as adaptability and responsiveness to change must be considered, in addition to the costs of material handling and machine relocation, when reconfiguring a shop floor's layout. Here, based on the source of uncertainty, the shop floor layout problem is split into two sub-problems handled by two modules: re-layout and find-route. A genetic algorithm (GA) is used when changes require re-laying out the entire shop, while function blocks are used to find the best sequence of robots for the new conditions within the existing layout. This paper reports the latest developments of the authors' previous work.

8.
9.
This paper proposes a practical job grouping approach that aims to enhance the time-related performance metrics of container transfers in the Patrick AutoStrad container terminal, located in Brisbane, Australia. It first formulates a mathematical model of automated container transfers in a relatively complex environment. Besides collision avoidance for a fleet of large vehicles in a confined area, it also deals with many other difficult practical challenges, such as multiple levels of container stacking and sequencing, variable container orientations, and vehicular dynamics that require finite acceleration and deceleration times. The proposed job grouping approach aims to improve the makespan of the schedule for yard jobs while reducing straddle carrier waiting time by grouping jobs using a guiding function. The performance of the current sequential job allocation method and of the proposed job grouping approach is evaluated and compared statistically using a pooled t-test over 30 randomly generated yard configurations. The experimental results show that the job grouping approach can effectively improve the schedule makespan and reduce the total straddle carrier waiting time.
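
The pooled (equal-variance) two-sample t-test mentioned above can be illustrated with a short sketch. The code and the makespan numbers below are assumptions made for the example, not data from the paper.

    # Assumed illustration of the pooled two-sample t-test used to compare methods.
    from math import sqrt
    from statistics import mean, variance

    def pooled_t_statistic(a, b):
        na, nb = len(a), len(b)
        # pooled sample variance with na + nb - 2 degrees of freedom
        sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
        return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

    sequential = [104.2, 98.7, 110.5, 101.3, 97.9]   # hypothetical makespans (minutes)
    grouped    = [96.4, 92.1, 103.8, 95.0, 91.7]
    t = pooled_t_statistic(sequential, grouped)
    print(f"t = {t:.2f} with {len(sequential) + len(grouped) - 2} degrees of freedom")
    # compare |t| against the critical value of Student's t for the chosen significance level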

10.
Model checking transactional memories (TMs) is difficult because of the unbounded number, length, and delay of concurrent transactions, as well as the unbounded size of the memory. We show that, under certain conditions satisfied by most TMs we know of, the model checking problem can be reduced to a finite-state problem, and we illustrate the use of the method by proving the correctness of several TMs, including two-phase locking, DSTM, and TL2. The safety properties we consider include strict serializability and opacity; the liveness properties include obstruction freedom, livelock freedom, and wait freedom. Our main contribution lies in the structure of the proofs, which are largely automated and not restricted to the TMs mentioned above. In the first step we show that every TM that enjoys certain structural properties either violates a requirement on some program with two threads and two shared variables, or satisfies the requirement on all programs. In the second step, we use a model checker to prove the requirement for the TM applied to a most general program with two threads and two variables. In the safety case, the model checker checks language inclusion between two finite-state transition systems: a nondeterministic transition system representing the given TM applied to a most general program, and a deterministic transition system representing a most liberal safe TM applied to the same program. The given TM transition system is nondeterministic because a TM can be used with different contention managers, which resolve conflicts differently. In the liveness case, the model checker analyzes fairness conditions on the given TM transition system.

11.
Model checkers were originally developed to support the formal verification of high-level design models of distributed systems. Over the years, they have become unmatched in precision and performance in this domain. Research in model checking has meanwhile moved towards methods that allow us to reason directly about implementation-level artifacts (e.g., software code), instead of hand-crafted representations of those artifacts. This does not mean that there is no longer a place for high-level models, but it does mean that such models are used in a different way today. In the approach that we describe here, high-level models are used to represent the environment for which the code is to be verified, but not the application itself. The code of the application is executed as is by the model checker, while powerful forms of abstraction are applied on the fly to build the abstract state space that guides the verification process. This model-driven code checking method allows us to verify implementation-level code efficiently for high-level safety and liveness properties. In this paper, we give an overview of the methodology that supports this new paradigm of code verification.

12.
Agent-oriented programming techniques seem appropriate for developing systems that operate in complex, dynamic, and unpredictable environments. We aim to address this requirement by developing model-checking techniques for the (automatic or semiautomatic) verification of rational-agent systems written in a logic-based agent-oriented programming language. Typically, developers apply model-checking techniques to abstract models of a system rather than to the system implementation. Although this is important for detecting design errors at an early stage, developers might still introduce errors during coding. In contrast, developers can apply our model-checking techniques directly to systems implemented in an agent-oriented programming language, automatically verifying agent systems without the usual gap between design and implementation. We developed our techniques for AgentSpeak, a rational-agent programming language based on the AgentSpeak(L) abstract agent-oriented programming language, which shares many features of the agent-oriented programming paradigm. We have also developed techniques for automatically translating AgentSpeak programs into the model specification language of existing model-checking systems. In this way, we reduce the problem of verifying that an AgentSpeak system has certain BDI logic properties to a conventional LTL model-checking problem.

13.
14.
Conclusions: (1) Microcomputers may render the same services to individual museums as mainframes and/or networks. When a sizable amount of data has been entered in several museums, they may be integrated into loosely coupled systems that share data as and when required. (2) Museum staff are motivated more readily by working with a small in-house system that they can control themselves than by working with a system, however extensive the help and facilities it provides, that is outside their control. (3) Data for an automated system may correspond in format to the traditional data used for manual documentation. (4) The input format should be easily manipulable, both at the programming level and from the user's point of view. This user friendliness should allow efficient use of manpower, e.g., by concentrating on a few selected fields, by copying fields with identical entries, etc. (5) If the retrieval program incorporates three essential components (AND/OR/NOT operators, full field control, and inverted-file searching), a high precision-recall ratio may be expected, even when relatively few fields are used. J. J. Paijmans studied art history at the University of Amsterdam, and is now researching the application of computers in museums at Leiden University while teaching computer science at The Reinwardt Academy, also in Leiden. A. A. Verrijn-Stuart, who studied physics at the University of Amsterdam, was involved in various operations research, computing, and planning activities. He is now professor of computer science at the University of Leiden.
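
The retrieval components named in conclusion (5) can be sketched compactly. The following Python fragment is an assumed example (the museum records are invented): an inverted file maps field values to record identifiers, and AND/OR/NOT become set operations over the posting lists.

    # Assumed sketch: inverted-file retrieval with AND/OR/NOT over posting lists.
    from collections import defaultdict

    records = {                      # hypothetical museum records
        1: {"object": "vase", "period": "ming"},
        2: {"object": "vase", "period": "qing"},
        3: {"object": "scroll", "period": "ming"},
    }

    inverted = defaultdict(set)      # (field, value) -> ids of matching records
    for rid, fields in records.items():
        for field, value in fields.items():
            inverted[(field, value)].add(rid)

    vases = inverted[("object", "vase")]
    ming = inverted[("period", "ming")]
    universe = set(records)

    print(vases & ming)              # AND: Ming vases -> {1}
    print(vases | ming)              # OR:  vases or Ming objects -> {1, 2, 3}
    print(universe - vases)          # NOT: everything that is not a vase -> {3}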

15.
In recent years, the web has kept growing rapidly and has undergone tremendous changes towards a user-centric environment. With the proliferation of services available on the Internet, millions of users are able to participate voluntarily and collaborate for their own interests and benefits by means of service composition. However, due to the ever-increasing number of services, it becomes a challenging issue to enable users to rapidly select and compose the proper services. In this paper, we pro...

16.
A method for automating the process of system decomposition is described. The method is based on a formal specification scheme, a formal definition of good decomposition, heuristic rules governing the search for good candidate decompositions, and a measure of complexity that allows ranking of the candidate decompositions. The decomposition method has been implemented as a set of experimental computerized systems analysis tools and applied to a standard problem for which other designs already exist. The results are encouraging, in that decompositions generated using other methodologies map easily into those suggested by the computerized tools. Additionally, use of the method indicates that when more than one "good" decomposition is suggested by the system, the specifications might have been incomplete; that is, the computerized tools can identify areas where more information should be sought by analysis.

17.
This paper presents an approach to automated mechanism design in the domain of double auctions. We describe a novel parameterized space of double auctions, and then introduce an evolutionary search method that searches this space of parameters. The approach evaluates auction mechanisms using the framework of the TAC Market Design Game and relates the performance of the markets in that game to their constituent parts using reinforcement learning. Experiments show that the strongest mechanisms we found using this approach not only win the Market Design Game against known, strong opponents, but also exhibit desirable economic properties when they run in isolation.
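
The evolutionary search over a parameterized mechanism space can be sketched as a simple mutate-and-select loop. The skeleton below is an assumption for illustration only: the parameter names and the placeholder fitness function are invented, and a real system would score each candidate by simulating it in the Market Design Game.

    # Assumed skeleton of evolutionary search over auction parameters.
    import random

    PARAM_BOUNDS = {"charge_fee": (0.0, 1.0), "match_slack": (0.0, 0.5), "k_pricing": (0.0, 1.0)}

    def random_mechanism():
        return {p: random.uniform(lo, hi) for p, (lo, hi) in PARAM_BOUNDS.items()}

    def mutate(mech, sigma=0.05):
        child = dict(mech)
        p = random.choice(list(child))
        lo, hi = PARAM_BOUNDS[p]
        child[p] = min(hi, max(lo, child[p] + random.gauss(0, sigma)))
        return child

    def tournament_score(mech):
        # placeholder fitness: a real system would simulate the mechanism against
        # competing markets and trading agents and return its tournament score
        return -abs(mech["k_pricing"] - 0.5) - mech["charge_fee"] * 0.1

    def evolve(generations=50, population=20, elite=5):
        pop = [random_mechanism() for _ in range(population)]
        for _ in range(generations):
            pop.sort(key=tournament_score, reverse=True)
            pop = pop[:elite] + [mutate(random.choice(pop[:elite]))
                                 for _ in range(population - elite)]
        return max(pop, key=tournament_score)

    print(evolve())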

18.
A model checker is described that supports proving logical properties of concurrent systems. The logical properties can be described in different action-based logics (variants of Hennessy-Milner logic). The tool is based on the EMC model checker for the logic CTL. It therefore employs a set of translation functions from the considered logics to CTL, as well as a model translation function from labeled transition systems (the models of the action-based logics) to Kripke structures (the models for CTL). The resulting tool performs model checking with linear time complexity, and its correctness is guaranteed by a proof that the translation functions, coupled with the model translation function, preserve satisfiability of logical formulae.
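
One common way to translate a labeled transition system into a Kripke structure is to turn each labeled transition into a state whose atomic proposition records the action, so that action-based formulae can be checked by a state-based (CTL) model checker. The sketch below is an assumed, simplified version of such an encoding, not the tool's actual translation function.

    # Assumed sketch: encode an LTS as a Kripke structure (one state per transition).
    def lts_to_kripke(states, transitions, initial):
        """transitions: iterable of (source, action, target) triples."""
        labels = {}
        for i, (_, act, _) in enumerate(transitions):
            labels[i] = {act}                 # Kripke state i is labeled with the action
        k_trans = set()
        for i, (_, _, dst) in enumerate(transitions):
            for j, (src, _, _) in enumerate(transitions):
                if src == dst:
                    k_trans.add((i, j))       # consecutive LTS transitions become successors
        k_init = [i for i, (src, _, _) in enumerate(transitions) if src == initial]
        return list(labels), k_trans, labels, k_init

    print(lts_to_kripke({"s0", "s1"}, [("s0", "a", "s1"), ("s1", "b", "s0")], "s0"))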

19.
We present a logic-based verification framework for multilevel security and transactional correctness of service oriented architectures. The framework is targeted at the analysis of data confidentiality, enforced by non-interference, and of service responsiveness, captured by a notion of compliance that implies deadlock and livelock freedom. We isolate a class of modal μ-calculus formulae, interpreted over service configurations, that characterise configurations satisfying the properties of interest. We then investigate an adaptation technique based on the use of coercion filters to block any action that might potentially break security or transactional correctness. Based on the above, we devise a model checking algorithm for adaptive service compositions which automatically synthesises the maximal (most expressive/permissive) filter enforcing the desired security and correctness properties.

20.
We introduce a new deductive approach to planning based on Horn clauses. Plans as well as situations are represented as terms and are thus first-class objects. We need neither frame axioms nor state literals. The only rule of inference is the SLDE-resolution rule, i.e., SLD-resolution in which the traditional unification algorithm has been replaced by an E-unification procedure. We illustrate the properties of our method, such as forward and backward reasoning, plan checking, and the integration of general theories. Finally, we present the calculus and show that it is sound and complete. An earlier version of this paper was presented at the German Workshop on Artificial Intelligence, 1989.
