Similar Documents
20 similar documents found.
1.
Model transformation is an approach that, among other advantages, enables the reuse of existing analysis and implementation techniques, languages and tools. The area of formal verification makes wide use of model transformation because the cost of constructing efficient model checkers is extremely high. There are various examples of translations from specification and programming languages to the input languages of prominent model checking tools, like SPIN. However, this approach provides a safe analysis method only if there is a guarantee that the transformation process preserves the semantics of the original specification/program, that is, that the transformation is correct. Depending on the source and/or target languages, this notion of correctness is not easy to achieve. In this paper, we tackle this problem in the context of Object-Based Graph Grammars (OBGG). OBGG is a formal language suitable for the specification of distributed systems, with a variety of tools and techniques centered around the transformation of OBGG models. We describe in detail the model transformation from OBGG models to PROMELA, the input language of the SPIN model checker. Amongst the contributions of this paper are: (a) the correctness proof of the transformation from OBGG models to PROMELA; (b) a generalization of this process into steps that may be used as a guide to prove the correctness of transformations from different specification/programming languages to PROMELA.
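To give a concrete feel for this kind of transformation, the following Python sketch emits a PROMELA proctype for an object-like model element that reacts to incoming messages. It is a minimal illustration under our own assumptions; the input format, the names, and the generated structure are not the paper's actual OBGG-to-PROMELA mapping.

# Hypothetical sketch: render an object-like model element as a PROMELA proctype.
# The input format and the generated structure are illustrative only.

def emit_proctype(obj_name, messages):
    """Render one object as a PROMELA process reading from its own channel."""
    options = "\n".join(
        f"    :: inbox ? {msg} -> skip  /* reaction to '{msg}' goes here */"
        for msg in messages
    )
    return f"proctype {obj_name}(chan inbox) {{\n  do\n{options}\n  od\n}}\n"

if __name__ == "__main__":
    # A toy "object" that understands two messages.
    print(emit_proctype("Producer", ["req", "ack"]))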

2.
This paper treats the modeling of an important class of databases, i.e. geographical databases, with emphasis on both structural (data definition) and behavioral (data manipulation) aspects. Geometric objects such as polygons, line segments, and points may have different relations among each other (such as order, adjacency, connectivity) and can be represented in a uniform spatial data structure (structure graph). The dynamic behavior is defined by a finite set of consistency-preserving state transitions (productions), where coincidence problems have to be solved and topological properties preserved. Moreover, the graph grammar approach can be used to study the synchronization of several concurrent productions (Church-Rosser properties) and offers a framework for implementing a geographical database.
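As a rough illustration of the general idea (not the paper's formalism), the Python sketch below keeps a toy structure graph of points and segments and applies a production-like update that is only committed when a simple consistency condition survives; the invariant and all names are our own assumptions.

# Toy structure graph: points and segments, with a consistency-preserving update.
# Illustrative invariant: every segment must reference two distinct existing points.

class StructureGraph:
    def __init__(self):
        self.points = {}         # point id -> (x, y)
        self.segments = set()    # frozenset({p, q})

    def consistent(self):
        return all(len(s) == 2 and s.issubset(self.points) for s in self.segments)

    def add_point(self, pid, xy):
        self.points[pid] = xy

    def add_segment(self, p, q):
        """Production-like step: apply, then roll back if consistency is violated."""
        seg = frozenset((p, q))
        self.segments.add(seg)
        if not self.consistent():
            self.segments.discard(seg)
            raise ValueError(f"rejected segment {p}-{q}: would break consistency")

g = StructureGraph()
g.add_point("a", (0, 0))
g.add_point("b", (1, 0))
g.add_segment("a", "b")      # accepted
try:
    g.add_segment("a", "z")  # rejected: point "z" does not exist
except ValueError as err:
    print(err)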

3.
A categorical approach is proposed for the formalization of fuzzy graph grammars, obtained as a generalization of sequential graph grammars. This approach takes into consideration the basic types of fuzziness that arise in constructing categories of fuzzy objects and in describing transformations of fuzzy graphs generated by fuzzy sets. All the undecidability problems that are well known for Chomsky grammars are proved to also hold for fuzzy graph grammars. Translated from Kibernetika i Sistemnyi Analiz, No. 4, pp. 130–144, July–August 2006.

4.
Visual languages (VLs) facilitate software development by not only supporting communication and abstraction, but also by generating various artifacts such as code and reports from the same high-level specification. VLs are thus often translated to other formalisms, in most cases with bidirectionality as a crucial requirement to, e.g., support re-engineering of software systems. Triple Graph Grammars (TGGs) are a rule-based language to specify consistency relations between two (visual) languages from which bidirectional translators are automatically derived. TGGs are formally founded but are also limited in expressiveness, i.e., not all types of consistency can be specified with TGGs. In particular, 1-to-n correspondence between elements depending on concrete input models cannot be described. In other words, a universal quantifier over certain parts of a TGG rule is missing to generalize consistency to arbitrary size. To overcome this, we transfer the well-known multi-amalgamation concept from algebraic graph transformation to TGGs, allowing us to mark certain parts of rules as repeated depending on the translation context. Our main contribution is to derive TGG-based translators that comply with this extension. Furthermore, we identify bad smells on the usage of multi-amalgamation in TGGs, prove that multi-amalgamation increases the expressiveness of TGGs, and evaluate our tool support.
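The 1-to-n idea behind multi-amalgamation can be pictured with a deliberately simplified Python sketch: a kernel step is applied once and a multi step is repeated for every matching element found in the input. The packages-to-folders example and all names are our own illustrative assumptions, not a TGG implementation.

# Illustrative sketch of the 1-to-n idea behind multi-amalgamation:
# a kernel step creates one target element, and a multi step is repeated
# once per matching child in the source model (n depends on the input).

def translate_package(source_pkg):
    # Kernel step: one source package corresponds to one target folder.
    target = {"folder": source_pkg["name"], "files": []}
    correspondence = [("package", source_pkg["name"], "folder", source_pkg["name"])]

    # Multi step: repeated for each class contained in the package.
    for cls in source_pkg["classes"]:
        target["files"].append(cls + ".java")
        correspondence.append(("class", cls, "file", cls + ".java"))
    return target, correspondence

pkg = {"name": "shop", "classes": ["Order", "Customer", "Invoice"]}
target, corr = translate_package(pkg)
print(target)
for link in corr:
    print(link)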

5.
6.
7.
8.

Model synchronization, i.e., the task of restoring consistency between two interrelated models after a model change, is a challenging task. Triple graph grammars (TGGs) specify model consistency by means of rules that describe how to create consistent pairs of models. These rules can be used to automatically derive further rules, which describe how to propagate changes from one model to the other or how to change one model in such a way that propagation is guaranteed to be possible. Restricting model synchronization to these derived rules, however, may lead to unnecessary deletion and recreation of model elements during change propagation. This is inefficient and may cause unnecessary information loss, i.e., when deleted elements contain information that is not represented in the second model, this information cannot be recovered easily. Short-cut rules have recently been developed to avoid unnecessary information loss by reusing existing model elements. In this paper, we show how to automatically derive (short-cut) repair rules from short-cut rules to propagate changes such that information loss is avoided and model synchronization is accelerated. The key ingredients of our rule-based model synchronization process are these repair rules and an incremental pattern matcher informing about suitable applications of them. We prove the termination and the correctness of this synchronization process and discuss its completeness. As a proof of concept, we have implemented this synchronization process in eMoflon, a state-of-the-art model transformation tool with inherent support of bidirectionality. Our evaluation shows that repair processes based on (short-cut) repair rules have considerably decreased information loss and improved performance compared to former model synchronization processes based on TGGs.
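The contrast drawn here can be made tangible with a much simplified Python sketch of our own (it does not reflect eMoflon's actual rules): propagating a "move" change by revoking and retranslating an element loses data that exists only in the target model, whereas a repair-style step reuses the element and keeps that data.

# Illustrative contrast: delete-and-recreate propagation loses target-only data,
# while a repair-style update reuses the existing target element.

def propagate_by_recreation(targets, moved_id, new_parent):
    # Old-style propagation: revoke the translation, translate again from scratch.
    targets.pop(moved_id)                              # target-only notes are lost here
    targets[moved_id] = {"parent": new_parent, "notes": ""}

def propagate_by_repair(targets, moved_id, new_parent):
    # Repair-style propagation: keep the element and just relocate it.
    targets[moved_id]["parent"] = new_parent           # notes survive

targets = {"doc1": {"parent": "folderA", "notes": "reviewed by QA"}}
propagate_by_repair(targets, "doc1", "folderB")
print(targets["doc1"])   # notes preserved

targets = {"doc1": {"parent": "folderA", "notes": "reviewed by QA"}}
propagate_by_recreation(targets, "doc1", "folderB")
print(targets["doc1"])   # notes lost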


9.
Graph grammar concepts are applied to the design of consistent states and of manipulations modeling actions, transactions and schedules for simultaneous executions on data base systems. In particular, we show how to use Church-Rosser theorems for graph grammars to handle synchronization problems for data base systems in a multi-user environment. In this paper these concepts are applied to a data base system for a library. For reasons of presentation the system is restricted to the basic features of a library, but it can be extended to more full-featured systems, including generalized data base systems. All manipulation rules are shown to preserve consistency, and it is analysed exactly which of these rules are collateral and for which of them further synchronization mechanisms are necessary. While consistent states are modelled as graphs and manipulation rules (resp. actions) as graph productions, transactions correspond to sequences of productions, and schedules to sequences of parallel productions. The application of a schedule to a consistent state yields a “semantical schedule” which transforms the given state into some other state of the system. Sufficient conditions are given for a schedule to be consistent, which means that consistent states are transformed into consistent states. Moreover, we are able to define a degree of non-parallelism for schedules such that optimal schedules are those with minimal degree with respect to all equivalent ones. Applying results from graph grammar theory, we are able to show that for each schedule there is a unique optimal one. A similar result is shown for semantical schedules, where the semantical degree of non-parallelism is in general smaller than the syntactical one.
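One way to picture the "collateral vs. needs synchronization" distinction is a sketch under our own simplifying assumption that each manipulation rule declares the items it reads and writes: two rules commute when neither writes something the other reads or writes.

# Sketch: decide whether two library "productions" can run in parallel.
# Assumption (ours, for illustration): each rule declares explicit read/write sets.

def independent(rule_a, rule_b):
    """Two rules commute if neither writes what the other reads or writes."""
    a_hits_b = rule_a["writes"] & (rule_b["reads"] | rule_b["writes"])
    b_hits_a = rule_b["writes"] & (rule_a["reads"] | rule_a["writes"])
    return not (a_hits_b or b_hits_a)

lend_book   = {"reads": {"reader:7"}, "writes": {"copy:42"}}
return_book = {"reads": set(),        "writes": {"copy:99"}}
reserve     = {"reads": {"copy:42"},  "writes": {"reader:7"}}

print(independent(lend_book, return_book))  # True  -> collateral, no extra sync needed
print(independent(lend_book, reserve))      # False -> further synchronization required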

10.
TopoLayout: multilevel graph layout by topological features
We describe TopoLayout, a feature-based, multilevel algorithm that draws undirected graphs based on the topological features they contain. Topological features are detected recursively inside the graph, and their subgraphs are collapsed into single nodes, forming a graph hierarchy. Each feature is drawn with an algorithm tuned for its topology. As would be expected from a feature-based approach, the runtime and visual quality of TopoLayout depend on the number and types of topological features present in the graph. We show experimental results comparing speed and visual quality for TopoLayout against four other multilevel algorithms on a variety of data sets with a range of connectivities and sizes. TopoLayout frequently improves the results in terms of speed and visual quality on these data sets.
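A stripped-down sketch of the "detect a feature, collapse it into a single node, recurse" scheme is shown below, using connected components as the only feature; that choice, and the tiny graph, are our assumptions for illustration rather than TopoLayout's actual feature catalogue.

# Minimal sketch of one collapse step in a multilevel layout scheme:
# detect connected components and replace each one by a single meta-node.

def connected_components(nodes, edges):
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    comps = {}
    for n in nodes:
        comps.setdefault(find(n), []).append(n)
    return list(comps.values())

def collapse(nodes, edges):
    """One level of the hierarchy: each detected feature becomes a meta-node."""
    comps = connected_components(nodes, edges)
    meta_nodes = [f"meta{i}" for i in range(len(comps))]
    return meta_nodes, dict(zip(meta_nodes, comps))

nodes = ["a", "b", "c", "d", "e"]
edges = [("a", "b"), ("b", "c"), ("d", "e")]
meta, hierarchy = collapse(nodes, edges)
print(hierarchy)   # {'meta0': ['a', 'b', 'c'], 'meta1': ['d', 'e']}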

11.
In this paper we provide an implementation strategy to map a functional specification of an utterance into a syntactically well-formed sentence. We do this by integrating the functional and the syntactic perspectives on language, which we take to be exemplified by systemic grammars and tree adjoining grammars (TAGs) respectively. From systemic grammars we borrow the use of networks of choices to classify the set of possible constructions. The choices expressed in an input are mapped by our generator to a syntactic structure as defined by a TAG. We argue that the TAG structures can be appropriate structural units of realization in an implementation of a generator based on systemic grammar and also that a systemic grammar provides an effective means of deciding between various syntactic possibilities expressed in a TAG grammar. We have developed a generation strategy which takes advantage of what both paradigms offer to generation, without compromising either.

12.
13.
We present a new full cube computation technique and a cube storage representation approach, called the multidimensional cyclic graph (MCG) approach. The data cube relational operator has exponential complexity and therefore its materialization involves both a huge amount of memory and a substantial amount of time. Reducing the size of data cubes, without a loss of generality, thus becomes a fundamental problem. Previous approaches, such as Dwarf, Star and MDAG, have substantially reduced the cube size using graph representations. In general, they eliminate prefix redundancy and some suffix redundancy from a data cube. The MCG differs significantly from previous approaches as it completely eliminates prefix and suffix redundancies from a data cube. A data cube can be viewed as a set of sub-graphs. In general, redundant sub-graphs are quite common in a data cube, but eliminating them is a hard problem. The Dwarf, Star and MDAG approaches only eliminate some specific common sub-graphs. The MCG approach efficiently eliminates all common sub-graphs from the entire cube, based on an exact sub-graph matching solution. We propose a matching function to guarantee one-to-one mapping between sub-graphs. The function is computed incrementally, in a top-down fashion, and its computation uses a minimal amount of information to generate unique results. In addition, it can be computed for any measurement type: distributive, algebraic or holistic. MCG performance analysis demonstrates that MCG is 20-40% faster than the Dwarf, Star and MDAG approaches when computing sparse data cubes. Dense data cubes have a small number of aggregations, so there is little room for runtime and memory consumption optimization; the MCG approach is therefore not useful for computing such dense cubes. The compact representation of sparse data cubes enables the MCG approach to reduce memory consumption by 70-90% when compared to the original Star approach, proposed in [33]. In the same scenarios, the improved Star approach, proposed in [34], reduces memory consumption by only 10-30%, Dwarf by 30-50% and MDAG by 40-60%, when compared to the original Star approach. MCG is the first approach that uses an exact sub-graph matching function to reduce cube size, avoiding unnecessary aggregation and thus improving cube computation runtime.
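The underlying compression idea, sharing identical sub-graphs, can be pictured with a much simpler stand-in than MCG's matching function: hash-consing, where a node is identified by its label plus its already-shared children, so equal sub-graphs collapse to one stored object. The signature scheme below is our own illustration; MCG computes its function incrementally and top-down and handles cube-specific measures.

# Illustration of sub-graph sharing by hash-consing (not MCG's actual function):
# nodes with the same label and the same shared children collapse to one object.

_registry = {}

def node(label, children=()):
    """Return a shared node; identical sub-graphs map to the same stored object."""
    key = (label, tuple(id(c) for c in children))
    if key not in _registry:
        _registry[key] = {"label": label, "children": list(children)}
    return _registry[key]

# Two branches of a toy aggregation graph ending in the same sub-graph:
leaf     = node("sum=10")
branch_a = node("year=2023", [leaf])
branch_b = node("year=2023", [leaf])    # reuses the existing sub-graph

print(branch_a is branch_b)   # True -> the sub-graph is stored only once
print(len(_registry))         # 2 distinct nodes instead of 3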

14.
In this work we present LSEGO, an approach to drive efficient global optimization (EGO) based on a least-squares (LS) ensemble of metamodels. By means of the LS ensemble of metamodels it is possible to estimate the uncertainty of the prediction with any kind of model (not only kriging) and to provide an estimate for the expected improvement function. For the problems studied, the proposed LSEGO algorithm has been shown to find the global optimum with fewer optimization cycles than required by the classical EGO approach. The more infill points are added per cycle, the faster the convergence to the global optimum (exploitation) and the greater the improvement of the metamodel quality in the design space (exploration), especially as the number of variables increases, where the standard single-point EGO can be quite slow to reach the optimum. LSEGO has proven to be a feasible option for driving EGO with an ensemble of metamodels, including for constrained problems, and it is restricted neither to kriging nor to a single infill point per optimization cycle.
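The expected improvement criterion that EGO-style methods maximize can be written out directly. The Python sketch below evaluates it from a predicted mean and an uncertainty estimate; in the spirit of LSEGO the uncertainty comes from the spread of an ensemble of metamodel predictions rather than a kriging variance, and the toy numbers are assumptions for illustration.

import math

def expected_improvement(y_best, mean, std):
    """EI = (y_best - mean) * Phi(z) + std * phi(z), with z = (y_best - mean) / std."""
    if std <= 0.0:
        return 0.0
    z = (y_best - mean) / std
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal cdf
    return (y_best - mean) * cdf + std * pdf

# Toy ensemble prediction at one candidate point: mean and spread of the members.
members = [3.2, 2.9, 3.4, 3.1]
mean = sum(members) / len(members)
std = (sum((p - mean) ** 2 for p in members) / len(members)) ** 0.5

print(expected_improvement(y_best=3.0, mean=mean, std=std))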

15.
16.
Formal language techniques may be used to study controlled dynamical systems and lead to a formulation in terms of context-dependent grammars. Using automata as basic units, we obtain an on-line implementation of the grammars. The implementation has a hierarchical structure, the lowest level dealing with the identification of alphabet symbols (terminal and non-terminal) and the other with the establishment of the grammar productions. The method provides an automatic construction of the automata network from the experimental data. In particular, the parallel-processing capabilities of automata networks improve the performance of grammar-based models for control and anomaly detection in electromechanical devices of industrial relevance. One such example is presented.
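A toy version of the lowest level of such a hierarchy (our own simplification, not the paper's automata network) is a finite automaton over symbolised device events that accepts sequences the grammar can generate and flags everything else as anomalous.

# Toy anomaly detector: a finite automaton over symbolised device events.
# The alphabet and transition table are illustrative assumptions.

TRANSITIONS = {
    ("idle",    "start"): "running",
    ("running", "load"):  "running",
    ("running", "stop"):  "idle",
}

def is_normal(events, state="idle"):
    """Accept an event sequence iff every transition is defined by the grammar."""
    for symbol in events:
        state = TRANSITIONS.get((state, symbol))
        if state is None:
            return False          # undefined transition -> anomaly
    return state == "idle"        # require the device to end up idle again

print(is_normal(["start", "load", "load", "stop"]))   # True
print(is_normal(["start", "stop", "stop"]))           # False -> anomaly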

17.
18.
Many problems can be cast as statistical inference on an attributed random graph. Our motivation is change detection in communication graphs. We prove that tests based on a fusion of graph-derived and content-derived metadata can be more powerful than those based on graph or content features alone. For some basic attributed random graph models, we derive fusion tests from the likelihood ratio. We describe the regions in parameter space where the fusion improves power, using both numeric results from selected small examples and analytic results on asymptotically large graphs.
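A minimal numerical sketch of a fused likelihood-ratio statistic is given below, using textbook distributions: the edge indicator contributes a Bernoulli likelihood ratio and the edge attribute a Gaussian one, and their product (a sum in log space) is the fused test statistic. All parameter values are assumptions for illustration, not taken from the paper.

import math

# Null vs. alternative parameters for a single edge (illustrative values only).
P0, P1 = 0.1, 0.3                  # edge probability under H0 / H1
MU0, MU1, SIGMA = 0.0, 1.0, 1.0    # attribute mean under H0 / H1, shared std dev

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fused_log_lr(edge_present, attribute):
    """Log-likelihood ratio combining graph (edge) and content (attribute) evidence."""
    graph_lr = P1 / P0 if edge_present else (1 - P1) / (1 - P0)
    content_lr = 1.0
    if edge_present:               # the attribute is observed only on existing edges
        content_lr = normal_pdf(attribute, MU1, SIGMA) / normal_pdf(attribute, MU0, SIGMA)
    return math.log(graph_lr) + math.log(content_lr)

print(fused_log_lr(True, 1.2))     # edge plus large attribute: evidence for H1
print(fused_log_lr(False, 0.0))    # missing edge: mild evidence for H0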

19.
A crucial issue in hypertext design is how to give the reader new capabilities without taking any existing ones away. This paper discusses the conversion of traditional, printed scholarly text to hypertext, focusing on the presentation of the explicit links between texts that are represented by direct quotations. It first examines how quotations are used by the author and the reader in traditional text. It then considers the conversion of scholarly text to hypertext, concentrating on how direct quotations should be handled. Three specific areas are examined: (i) what kinds of links are necessary, and what they should link together, (ii) how the linked texts should be divided into sections or nodes, and (iii) how the links and nodes should be displayed to the reader. The paper concludes by listing some recommendations for the conversion of scholarly text to hypertext.
Stephanie W. Haas is an Assistant Professor in the School of Information and Library Science at the University of North Carolina at Chapel Hill. She is interested in natural language processing and information retrieval. Her most recent publications concern the vocabulary and structure of sublanguages.

20.