20 similar documents were found (search time: 26 ms)
1.
Future-generation distributed multimedia applications are expected to be highly scalable to a wide variety of heterogeneous devices and highly adaptive across wide-area distributed environments. This demands multiple stages of run-time support in QoS-aware middleware architectures, in particular probing the performance of QoS parameters, instantiating the initial component configurations, and adapting to on-the-fly variations. However, few previous efforts in related work have provided comprehensive run-time support in all of these stages; they often design and build a middleware framework around only one of the run-time issues. In this paper, we argue that distributed multimedia applications need effective run-time middleware support in all of these stages in order to be highly scalable and adaptive across a wide variety of execution environments. Nevertheless, the design of such a middleware framework should be kept as streamlined and simple as possible, leading to a novel, integrated run-time middleware platform that unifies the probing, instantiation, and adaptation stages. In addition, for each stage the framework should enable the interaction of peer middleware components across host boundaries, so that the corresponding middleware function can be performed in a coordinated and coherent fashion. We present the design of such an integrated architecture, with a case study illustrating how it monitors and configures complex multimedia applications in a simple yet effective way.
2.
Fine-grain MPI (FG-MPI) extends the execution model of MPI to allow interleaved execution of multiple concurrent MPI processes inside an OS process. It provides a runtime that is integrated into the MPICH2 middleware and uses light-weight coroutines to implement an MPI-aware scheduler. In this paper we describe the FG-MPI runtime system and discuss the main design issues in its implementation. FG-MPI enables the expression of function-level parallelism, which, together with the runtime scheduler, can be used to simplify MPI programming and achieve performance without adding complexity to the program. As an example, we use FG-MPI to restructure a typical use of non-blocking communication and show that the integrated scheduler relieves the programmer from scheduling computation and communication inside the application, moving the performance-related concerns out of the program specification and into the runtime.
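For context, the following is a minimal mpi4py sketch (not FG-MPI itself) of the manual overlap of non-blocking communication and computation that, according to the abstract, FG-MPI's integrated scheduler takes over from the programmer. The buffer size, the ring neighbour topology, and the run command are illustrative assumptions.

```python
# Plain MPI sketch of manual computation/communication overlap; this is the
# pattern the FG-MPI scheduler is meant to lift out of application code.
# Run e.g. with: mpiexec -n 2 python overlap.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

send_buf = np.full(1024, rank, dtype=np.float64)
recv_buf = np.empty(1024, dtype=np.float64)

# Post non-blocking communication first ...
reqs = [comm.Isend(send_buf, dest=right, tag=0),
        comm.Irecv(recv_buf, source=left, tag=0)]

# ... then do independent computation while the messages are in flight ...
local = np.sin(send_buf).sum()

# ... and wait only when the received data is actually needed.
MPI.Request.Waitall(reqs)
print(rank, local, recv_buf[0])
```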
3.
Real-time and embedded systems are required to adapt their behavior and structure to unpredicted runtime changes in order to maintain their feasibility and usefulness. These systems are generally more difficult to specify and verify owing to their execution complexity. Hence, ensuring the high-level design and early verification of system adaptation at runtime is crucial. However, existing runtime model-based approaches for adaptive real-time and embedded systems suffer from shortcomings in efficiently and correctly managing adaptive system behavior, especially since formal verification is not supported by modeling languages such as UML and the MARTE profile. Moreover, reasoning about the correctness and precision of high-level models is a complex task without appropriate tool support. In this work, we propose an MDE-based framework for the specification and verification of runtime-adaptive real-time and embedded systems. Our approach relies on the Event-B method to formally verify resource behavior and real-time constraints. Thanks to MDE model-to-text (M2T) transformations, our proposal translates runtime models into Event-B specifications to ensure the correctness of runtime-adaptive system properties, temporal constraints, and non-functional properties using the Rodin platform. A flood prediction system case study is adopted to validate our proposal.
4.
Soft computing techniques have proved successful in many application areas. In this paper we investigate the application of two well-known soft computing techniques, fuzzy logic and genetic algorithms (GAs), in the psychopathological field. The investigation started from a practical need: the creation of a tool for quick and correct classification of the level of mental retardation, which is needed to choose the right treatment for rehabilitation and to assure a quality of life suitable for the specific patient's condition. To meet this need we developed an adaptive data mining technique that allows us to build interpretable models for automatic and reliable diagnosis. Our work concerns a genetic fuzzy system (GFS) that integrates a classical GA and the fuzzy C-means (FCM) algorithm. This GFS, called genetic fuzzy C-means (GFCM), is able to select the best subset of features to generate an efficient classifier for diagnostic purposes from a database of examples. Additionally, thanks to an extension of the FCM algorithm, the proposed technique can also handle databases with missing values. The results obtained in a practical application on a real database of patients, and comparisons with established techniques, show the efficiency of the integrated algorithm in both data mining and data completion.
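As background for the FCM component, here is a minimal sketch of the standard fuzzy C-means updates (membership-weighted centres and the usual membership formula), assuming Euclidean distance and fuzzifier m; the GA-driven feature selection and the missing-value extension described in the abstract are not reproduced.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy C-means on X (n_samples x n_features) with c clusters."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted means
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2 / (m - 1))
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
    return centres, U

# Toy example: two well-separated Gaussian blobs.
X = np.vstack([np.random.default_rng(1).normal(0, 1, (50, 2)),
               np.random.default_rng(2).normal(5, 1, (50, 2))])
centres, U = fuzzy_c_means(X, c=2)
print(centres)
```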
5.
(Context) Cognitive diseases such as Alzheimer's affect millions of people around the world. One common characteristic of such diseases is that the patient may exhibit irrational behaviors, which may result in damage to property or, much worse, in injury to family members or to the patients themselves. (Objective) These kinds of behaviors must be monitored by intelligent systems in order to guarantee the safety of the patients. (Problem) Such behaviors are characterized by their unpredictable and irrational nature, which makes the development of monitoring software complex; this is the main challenge that must be faced. (Proposal) To address this issue, our paper presents a structured approach to the modeling and development of intelligent Ambient Assisted Living systems for monitoring the behaviors of cognitively impaired people. The main impact of our contribution concerns both the automatic identification of irrational behaviors and a methodology for the design of safe AAL applications specifically targeted at users with cognitive or mental diseases. To prove the feasibility of our approach, we show a use case scenario in which we apply our solution to model a monitoring system able to recognize anomalous situations. (Results) The preliminary results show that applying the proposed process allows developers to improve the safety of the patient in a domestic environment.
6.
Effective runtime service discovery requires the identification of services based on different service characteristics such as structural, behavioural, quality, and contextual characteristics. However, current service registries only guarantee that services are described in terms of structural and sometimes quality characteristics; therefore, it is not always possible to assume that the services in them will have all the characteristics required for effective service discovery. In this paper, we describe a monitor-based runtime service discovery framework called MoRSeD. The framework supports service discovery in both push and pull modes of query execution. The push mode of query execution is performed in parallel to the execution of a service-based system, in a proactive way. Both types of queries are specified in a query language called SerDiQueL that allows the representation of structural, behavioural, quality, and contextual conditions of the services to be identified. The framework uses a monitor component to verify whether the behavioural and contextual conditions in the queries can be satisfied by services, based on translations of these conditions into properties represented in event calculus and verification of the satisfiability of these properties against the services. The monitor is also used to detect when services participating in a service-based system become unavailable, and to identify changes in the behavioural and contextual characteristics of the services. A prototype implementation of the framework has been developed, and the framework has been evaluated by comparing its performance when using and when not using the monitor component.
7.
Financial-services IT systems should feature functional extensibility, an architectural mechanism for extending a system's functional capabilities. The paper describes an architecture and toolset that provide the infrastructure to build extensible applications based on a services model.
8.
Distributing the workload across all available Processing Units (PUs) of a high-performance heterogeneous platform (e.g., PCs composed of CPUs and GPUs) is a challenging task, since the execution cost of a task on distinct PUs is non-deterministic and affected by parameters not known a priori. This paper presents Sm@rtConfig, a context-aware runtime and tuning system that balances reducing the execution time of engineering applications against the cost of task scheduling on CPU-GPU platforms. Using Model-Driven Engineering and Aspect-Oriented Software Development, a high-level specification and implementation of Sm@rtConfig has been created, aiming to improve modularization and reuse in different applications. As a case study, the simulation subsystem of a CFD application has been developed using the proposed approach. The system's tasks were designed considering only their functional concerns, whereas scheduling and other non-functional concerns are handled by Sm@rtConfig aspects, improving task modularity. Although Sm@rtConfig supports multiple PUs, in this case study the tasks have been scheduled to execute on a platform composed of one CPU and one GPU. Experimental results show an overall performance gain of 21.77% compared to the static assignment of all tasks to the GPU alone.
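As a rough illustration of cost-based task-to-PU assignment, here is a hypothetical greedy scheduler that assigns each task to the processing unit with the earliest estimated finish time. The PU names, cost table, and estimate callback are assumptions; the actual Sm@rtConfig runtime additionally exploits context information gathered at run time and its MDE/AOSD tooling, which this sketch omits.

```python
# Hypothetical sketch of cost-based task-to-PU assignment (not Sm@rtConfig).
from dataclasses import dataclass

@dataclass
class ProcessingUnit:
    name: str                      # e.g. "cpu0" or "gpu0" (illustrative names)
    busy_until: float = 0.0        # time at which this PU becomes free

def schedule(tasks, pus, estimate):
    """Greedily assign each task to the PU with the earliest estimated finish.

    `estimate(task, pu)` returns a predicted execution cost; a context-aware
    runtime would refine it from measurements instead of keeping it fixed.
    """
    plan = []
    for task in tasks:
        best = min(pus, key=lambda pu: pu.busy_until + estimate(task, pu))
        start = best.busy_until
        best.busy_until = start + estimate(task, best)
        plan.append((task, best.name, start))
    return plan

# Example: the GPU is much faster for the "solve" kernel, the CPU for "assemble".
pus = [ProcessingUnit("cpu0"), ProcessingUnit("gpu0")]
cost = {("assemble", "cpu0"): 1.0, ("assemble", "gpu0"): 2.0,
        ("solve", "cpu0"): 8.0, ("solve", "gpu0"): 2.0}
print(schedule(["assemble", "solve"], pus, lambda t, p: cost[(t, p.name)]))
```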
9.
The continuing high rate of advances in information and communication systems technology creates many new commercial opportunities but also engenders a range of new technical challenges around maximising systems' dependability, availability, adaptability, and auditability. These challenges are under active research, with notable progress made in the support for dependable software design and management. Runtime support, however, is still in its infancy and requires further research. This paper focuses on a requirements model for the runtime execution and control of an intention-oriented cloud-based application. A novel requirements modelling process referred to as Provision, Assurance and Auditing, and an associated framework, are defined and developed in which a given system's functional and non-functional requirements are modelled in terms of intentions and encoded in a standard open mark-up language. An autonomic intention-oriented programming model, using the Neptune language, then handles its deployment and execution.
10.
This paper presents a novel buffer management scheme based on evolutionary computing for shared-memory asynchronous transfer mode (ATM) switches. The philosophy behind it is to adapt the threshold for each logical output queue to actual traffic conditions by means of a system of fuzzy inferences. The optimal fuzzy system is obtained using a systematic methodology based on genetic algorithms (GAs), which allows the fuzzy system parameters to be derived for each switch size, offering a high degree of scalability to the fuzzy control system. Its performance is comparable to that of the push-out (PO) mechanism, which can be considered ideal from a performance viewpoint, and in any case much better than that of threshold schemes based on conventional logic. In addition, the fuzzy threshold (FT) scheme is simple and cost-effective when implemented in VLSI technology.
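For orientation, the following is a simplified, non-fuzzy sketch of adaptive threshold admission in a shared-memory switch: a cell is accepted only if its output queue is below a threshold proportional to the free buffer space. The function name, the alpha rule, and the example numbers are assumptions; the paper's GA-tuned fuzzy inference system is not reproduced here.

```python
# Simplified adaptive-threshold admission for a shared-memory switch.
def admit_cell(queue_lengths, port, buffer_size, alpha=1.0):
    """Accept a cell for `port` if its queue is below an adaptive threshold.

    The threshold grows with the free shared memory, so a lightly loaded
    switch admits more cells per port than a heavily loaded one.
    """
    free = buffer_size - sum(queue_lengths)
    if free <= 0:
        return False
    threshold = alpha * free          # adaptive per-queue threshold
    return queue_lengths[port] < threshold

queues = [10, 40, 5, 0]               # current per-port queue lengths
print(admit_cell(queues, port=1, buffer_size=128))
```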
11.
Although high-performance computing has always been about efficient application execution, both energy and power consumption have become critical concerns owing to their effect on operating costs and failure rates of large-scale computing platforms. Modern processors provide techniques, such as dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (called throttling), to improve energy efficiency on the fly. Without careful application, however, DVFS and throttling may cause a significant performance loss due to system overhead. This paper proposes a novel runtime system that maximizes energy savings by selecting appropriate values for DVFS and throttling in parallel applications. Specifically, the system automatically predicts communication phases in parallel applications and applies frequency scaling considering both the CPU offload provided by the network-interface card and the architectural stalls during computation. Experiments, performed on the NAS parallel benchmarks as well as on real-world applications in molecular dynamics and linear system solution, demonstrate that the proposed runtime system obtains energy savings of as much as 14% with a low performance loss of about 2%.
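To make the idea concrete, here is a toy sketch of phase-aware frequency selection: communication phases and memory-bound (high-stall) computation are run at lower frequencies. The phase labels, stall-ratio threshold, and frequency table are assumptions; the paper's runtime predicts phases automatically and also accounts for NIC offload, which this sketch omits.

```python
# Toy model of phase-aware DVFS decisions (not the paper's runtime system).
def choose_frequency(phase, stall_ratio, freqs=(1.2e9, 1.8e9, 2.4e9)):
    """Pick a CPU frequency (Hz) for the upcoming phase.

    Communication phases and memory-bound (high-stall) computation tolerate
    a lower frequency with little performance loss, saving energy.
    """
    if phase == "communication":
        return freqs[0]                  # CPU mostly waits on the NIC
    if stall_ratio > 0.5:
        return freqs[1]                  # memory-bound: modest slowdown
    return freqs[-1]                     # compute-bound: run at full speed

print(choose_frequency("communication", stall_ratio=0.1))   # 1.2 GHz
print(choose_frequency("computation", stall_ratio=0.7))     # 1.8 GHz
```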
12.
We present an approach to the literate and structured presentation of formal developments. We discuss the presentation of formal developments in a logical framework and distinguish three aspects: language-related aspects, structural aspects of proofs, and presentational aspects. We illustrate the approach by two examples: a simple mathematical proof of the Knaster-Tarski fixpoint theorem, and a formalization of the VDM development of a revision management system.
13.
This paper describes a prototype microcomputer implementation of an integrated multicriteria expert support system (MCESS). The system is an interactive, comprehensive, and easy-to-use tool to support the manager in project selection and resource allocation. The MCESS combines the capabilities of goal programming, the analytic hierarchy process, net present value analysis, and a spreadsheet. The literature on modeling with spreadsheets and on software integration is reviewed. Goal programming, a multicriteria decision-making technique, is described, and the analytic hierarchy process is shown to be able to overcome some of its limitations. The structure of the MCESS is described, and an illustration of its use in industrial planning is presented.
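As a small illustration of the analytic hierarchy process component, the sketch below derives priority weights from a pairwise-comparison matrix via the principal eigenvector and reports Saaty's consistency index. The example judgments are assumptions, and the MCESS's integration with goal programming, NPV analysis, and the spreadsheet is not shown.

```python
# Minimal AHP priority-weight computation from a pairwise-comparison matrix.
import numpy as np

def ahp_weights(pairwise):
    """Return normalized priority weights and Saaty's consistency index."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    principal = eigvecs[:, np.argmax(eigvals.real)].real
    w = principal / principal.sum()
    n = A.shape[0]
    ci = (eigvals.real.max() - n) / (n - 1)   # CI = (lambda_max - n) / (n - 1)
    return w, ci

# Example: project A moderately preferred to B, strongly preferred to C.
w, ci = ahp_weights([[1,   3,   5],
                     [1/3, 1,   3],
                     [1/5, 1/3, 1]])
print(w, ci)
```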
14.
Since efficient and relatively cheap methods were developed for determining biosequences, a large amount of biosequence data has been generated. As the main problem in molecular biology is the analysis of the data rather than its acquisition, part of computational biology is concerned with extracting all kinds of meaningful information from the sequences. Computer-assisted methods have become very important in analyzing biosequence data; however, most current computer-assisted methods are limited to finding motifs. Genes can be regulated in many ways, including through combinations of regulatory elements. This research is aimed at developing a new integrated system for genome-wide gene expression analysis. It begins with a new motif-finding method that uses a new objective function combining multiple well-defined components and an improved stochastic iterative sampling strategy. Combinatorial motif analysis is accomplished by constructive induction, which analyzes potential motif combinations. We then apply standard inductive learning algorithms to generate hypotheses for different gene behaviors. A genome-wide gene expression analysis demonstrated the value of this novel integrated system.
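As background for the motif-finding step, here is a minimal sketch of scoring candidate sites with a log-odds position weight matrix built from aligned motif instances, assuming a uniform background and a small pseudocount; the paper's multi-component objective function and stochastic iterative sampling are not reproduced.

```python
# Minimal position-weight-matrix scoring of candidate motif sites.
import math
from collections import Counter

def pwm_from_sites(sites, pseudocount=0.5):
    """Build a log-odds PWM from aligned motif instances (uniform background)."""
    k = len(sites[0])
    bg = 0.25
    pwm = []
    for i in range(k):
        counts = Counter(site[i] for site in sites)
        col = {}
        for base in "ACGT":
            p = (counts.get(base, 0) + pseudocount) / (len(sites) + 4 * pseudocount)
            col[base] = math.log2(p / bg)
        pwm.append(col)
    return pwm

def score(pwm, kmer):
    """Sum the per-position log-odds of a candidate k-mer."""
    return sum(col[base] for col, base in zip(pwm, kmer))

pwm = pwm_from_sites(["TATAAT", "TATGAT", "TACAAT"])
print(round(score(pwm, "TATAAT"), 2))
```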
15.
Real estate appraisal information systems have been studied by many researchers in the past, including systems that integrate geographic information systems, artificial neural networks, etc. This paper proposes a new integrated approach for real estate appraisal which can be used in appraisal systems to improve efficiency and accuracy. Motivated by the identified limitations of existing cost approaches, we integrate elements from the sales comparison approach and the income approach into the cost approach to improve the accuracy of real estate valuation. As a result, the new integrated cost-based approach is capable of taking into account all of the major factors, which are closely related to real estate assets in one way or another. In the implementation of the new approach: (1) the concept of replacement cost is revisited and expanded to consider dynamic, environmental, and cultural factors in real estate appraisals; (2) the conventional depreciation values and depreciation rates are replaced by adjustment values and coefficients to capture both the positive and negative impact of changes on real estate value; (3) the theory of technology economics is applied, and six forces are systematically analyzed to determine replacement costs; and finally, (4) different methods for value adjustment, including an algorithm based on artificial neural networks, are utilized to deal with the randomness and uncertainty of mass data when determining adjustment values and coefficients.
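As a schematic illustration of replacing fixed depreciation with adjustment values and coefficients, the sketch below scales a replacement cost by multiplicative coefficients (which can raise or lower the value) and adds signed adjustments. All coefficient names and numbers are invented for illustration, and the paper's neural-network-based adjustment method is not shown.

```python
# Schematic cost-approach appraisal with adjustment coefficients and values.
def appraise(replacement_cost, coefficients, adjustments):
    """Appraised value = replacement cost scaled by multiplicative coefficients
    (which may raise or lower the value) plus additive adjustments."""
    value = replacement_cost
    for c in coefficients.values():
        value *= c
    return value + sum(adjustments.values())

value = appraise(
    replacement_cost=250_000,
    coefficients={"physical_condition": 0.92, "location_trend": 1.08,
                  "environmental": 0.97},
    adjustments={"recent_renovation": 12_000, "zoning_restriction": -8_000},
)
print(round(value, 2))
```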
16.
The head pose and movements of a user are closely related to his or her intentions and thoughts, so recognizing such information could be useful for developing a natural and sensitive user-wheelchair interface. This paper presents an original integrated approach to a head-gesture-based interface (HGI) which can perform both identity verification and facial pose estimation. Identity verification is performed by two-factor face authentication, implemented through the combination of topographic independent component analysis (TICA) and multispace random projection (MRP). A modified synergetic computer with melting (Modified SC-MELT) is introduced to classify facial poses. A motion profile generator (MPG) is developed during the integration to convert each estimated facial pose sequence into motion control signals that actuate motor movements. The HGI is intended to be deployed as a user-wheelchair interface for disabled and elderly users, in which only users with a genuine face and a valid token may be granted authorized access and hence pilot an electric powered wheelchair (EPW) using their faces. The integration has been verified in a number of experiments to justify the feasibility and performance of the proposed face-based control strategy.
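To convey the two-factor idea, here is a hedged sketch of token-seeded random projection: the enrolled template is the face feature vector projected with a random matrix derived from the user's token, so neither factor alone suffices for verification. The feature dimension, threshold, and use of cosine similarity are assumptions, and neither the TICA feature extraction nor the paper's actual MRP construction is reproduced.

```python
# Sketch of token-seeded random projection for two-factor (face + token) checks.
import numpy as np

def project(features, token_seed, out_dim=64):
    """Project a face feature vector with a random matrix derived from the
    user's token, so a stolen template is useless without the token."""
    rng = np.random.default_rng(token_seed)
    R = rng.standard_normal((out_dim, features.shape[0])) / np.sqrt(out_dim)
    return R @ features

def verify(probe, enrolled, token_seed, threshold=0.8):
    """Compare the token-projected probe with the enrolled template."""
    a, b = project(probe, token_seed), enrolled
    cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return cos >= threshold

face = np.random.default_rng(1).standard_normal(256)   # stand-in face features
template = project(face, token_seed=42)                 # stored at enrolment
noisy = face + 0.05 * np.random.default_rng(2).standard_normal(256)
print(verify(noisy, template, token_seed=42))
```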
17.
Teleoperated systems for ship hull maintenance (TOS) are robotic systems for ship maintenance tasks, such as cleaning or painting a ship's hull. The product line paradigm has recently been applied to TOS, and a TOS reference architecture has thus been designed. However, TOS requirements specifications have not been developed in any rigorous way with reuse in mind. We therefore believe that an opportunity exists to raise the abstraction level at which stakeholders can reason about this product line. This paper reports on an experience in which the TOS domain was analyzed, including the lessons learned in the construction and use of the TOS domain model. The experience is based on the application of extensions of well-known domain analysis techniques, together with the use of quality attribute templates traced to a feature model to deal with non-functional issues. A qualitative research method (action research) was used to carry out the study.
18.
Although Decision Support Systems (DSS) have become widespread in recent years for operational control, their use in strategic decision-making has rarely been seen. This study investigates how DSS technology can be applied in the process of strategic planning. The requirements of Strategic Decision Support Systems (SDSS) are discussed and a conceptual framework for the construction of SDSS is developed. The authors emphasize the integration of both the planning instruments and the corresponding data flows. They present the StratConsult system, a PC-based prototype for supporting strategic sessions. The benefits and drawbacks of SDSS are explored and relevant trends for integrated computer-aided strategic DSS are outlined.
19.
Business processes are a key aspect of modern organizations. In recent years, business process management and optimization have been applied to different cross-cutting concerns such as security, compliance, and Green IT. Based on the ecological characteristics of a business process, appropriate environmentally sustainable adaptation strategies can be chosen to improve the total environmental impact of the business process. We use ecologically sustainable adaptation strategies that are described as green business process patterns. The application of such a green business process pattern, however, affects the business process layer, the application component layer, and the infrastructure layer. This implies that changes in the application infrastructure also need to be considered. Hence, we use best practices of cloud application architectures, which are described as Cloud patterns. To guide developers through the adaptation process, we propose a pattern-based approach in this work. We correlate Cloud patterns relevant for sustainable business processes with green business process patterns and organize them within a classification. To provide concrete implementation support, we further annotate these Cloud patterns to application component models described with the Topology and Orchestration Specification for Cloud Applications (TOSCA). Using these annotations, we describe a method that provides the means to optimize business processes based on green business process patterns by adapting the implementation of application components with concrete TOSCA implementation models.
20.
Multi-level modeling is currently regaining attention in the database and software engineering communities, with different emerging proposals and implementations. One driver behind this trend is the need to reduce model complexity, a crucial concern in an era of Big Data analytics dealing with complex, heterogeneous data structures. So far, no standard exists for multi-level modeling. Therefore, different formalization approaches have been proposed to address multi-level modeling and verification in different frameworks and tools. In this article, we present an approach that integrates the formalization, implementation, querying, and verification of multi-level models. The approach has been evaluated in an open-source F-Logic implementation and applied in a large-scale data interoperability project in the oil and gas industry. The outcomes show that the framework is adaptable to industry standards, reduces the complexity of specifications, and supports the verification of standards from a software engineering point of view.