Similar Documents
20 similar documents found (search time: 31 ms)
1.
The trade‐offs approach is an advanced tool for improving the discrimination of data envelopment analysis (DEA) models; it also refines the traditional meaning of efficiency as a radial improvement factor for inputs or outputs. The Malmquist index – the prominent index for measuring the productivity change of decision-making units (DMUs) across multiple time periods using DEA models with variable and constant returns-to-scale technologies – can therefore be improved by means of the trade‐offs technology. Accordingly, an expanded Malmquist index can be defined as an improvement on the traditional Malmquist index: it uses the production possibility set generated in the presence of the trade‐offs technology, which discriminates more sharply among DMUs. Like the traditional Malmquist index, it decomposes into separate components. An illustrative example demonstrates, from a computational point of view, the capability of the suggested method.
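For reference, the conventional Malmquist index that the expanded index builds on (shown here in its standard form; the trade-offs variant swaps in a different underlying technology, and the paper's exact notation may differ) is

\[
M(x^{t},y^{t},x^{t+1},y^{t+1}) \;=\;
\left[
\frac{D^{t}(x^{t+1},y^{t+1})}{D^{t}(x^{t},y^{t})}\cdot
\frac{D^{t+1}(x^{t+1},y^{t+1})}{D^{t+1}(x^{t},y^{t})}
\right]^{1/2},
\]

with the usual decomposition into efficiency change times technical change:

\[
M \;=\;
\frac{D^{t+1}(x^{t+1},y^{t+1})}{D^{t}(x^{t},y^{t})}
\cdot
\left[
\frac{D^{t}(x^{t+1},y^{t+1})}{D^{t+1}(x^{t+1},y^{t+1})}\cdot
\frac{D^{t}(x^{t},y^{t})}{D^{t+1}(x^{t},y^{t})}
\right]^{1/2},
\]

where D^{s}(x,y) is the distance function evaluated against the period-s technology and M > 1 indicates productivity growth.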

2.
Software tools can improve the quality and maintainability of software, but are expensive to acquire, deploy, and maintain, especially in large organizations. We explore how to quantify the effects of a software tool once it has been deployed in a development environment. We present an effort-analysis method that derives tool usage statistics and developer actions from a project's change history (version control system) and uses a novel effort estimation algorithm to quantify the effort savings attributable to tool usage. We apply this method to assess the impact of a software tool called VE, a version-sensitive editor used in Bell Labs. VE aids software developers in coping with the rampant use of certain preprocessor directives (similar to #if/#endif in C source files). Our analysis found that developers were approximately 40 percent more productive when using VE than when using standard text editors.
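A minimal sketch of this style of change-history effort analysis — not the authors' algorithm; the per-change effort proxy (inter-change interval, capped at a working day), the cap, and the field names are all assumptions made for illustration:

from dataclasses import dataclass
from statistics import mean

@dataclass
class Change:
    developer: str
    timestamp: float   # seconds since epoch, taken from the version-control log
    used_tool: bool    # whether the change history shows the tool (e.g. VE) was used

def per_change_efforts(changes, cap=8 * 3600):
    """Proxy for effort: time since the same developer's previous change,
    capped at one working day so overnight gaps do not dominate."""
    efforts = []
    last = {}
    for c in sorted(changes, key=lambda c: c.timestamp):
        if c.developer in last:
            efforts.append((min(c.timestamp - last[c.developer], cap), c.used_tool))
        last[c.developer] = c.timestamp
    return efforts

def estimated_savings(changes):
    """Fraction of per-change effort saved when the tool was in use."""
    efforts = per_change_efforts(changes)
    with_tool = [e for e, used in efforts if used]
    without_tool = [e for e, used in efforts if not used]
    return 1.0 - mean(with_tool) / mean(without_tool)

The paper's estimator is considerably more careful about confounds (developer skill, change size); this sketch only shows the shape of the computation: attribute an effort proxy to each change, partition by tool usage, compare.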

3.
Nowadays, clustered environments are commonly used in high‐performance computing and enterprise‐level applications to achieve faster response time and higher throughput than single machine environments. Nevertheless, how to effectively manage the workloads in these clusters has become a new challenge. As a load balancer is typically used to distribute the workload among the cluster's nodes, multiple research efforts have concentrated on enhancing the capabilities of load balancers. Our previous work presented a novel adaptive load balancing strategy (TRINI) that improves the performance of a clustered Java system by avoiding the performance impacts of major garbage collection, which is an important cause of performance degradation in Java. The aim of this paper is to strengthen the validation of TRINI by extending its experimental evaluation in terms of generality, scalability and reliability. Our results have shown that TRINI can achieve significant performance improvements, as well as a consistent behaviour, when it is applied to a set of commonly used load balancing algorithms, demonstrating its generality. TRINI also proved to be scalable across different cluster sizes, as its performance improvements did not noticeably degrade when increasing the cluster size. Finally, TRINI exhibited reliable behaviour over extended time periods, introducing only a small overhead to the cluster in such conditions. These results offer practitioners a valuable reference regarding the benefits that a load balancing strategy, based on garbage collection, can bring to a clustered Java system. Copyright © 2016 John Wiley & Sons, Ltd.
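To illustrate the core idea only — TRINI's actual forecasting and balancing algorithms are more sophisticated, and the node attributes and threshold below are invented — a garbage-collection-aware balancer might exclude nodes that look close to a major collection:

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    old_gen_occupancy: float   # fraction of old-generation heap in use (0..1)
    active_requests: int

GC_RISK_THRESHOLD = 0.9        # illustrative cutoff; TRINI forecasts GC rather than using a fixed bound

def pick_node(nodes):
    """Least-loaded node selection that avoids nodes likely to trigger a major GC."""
    safe = [n for n in nodes if n.old_gen_occupancy < GC_RISK_THRESHOLD]
    candidates = safe or nodes  # if every node is at risk, fall back to all of them
    return min(candidates, key=lambda n: n.active_requests)

print(pick_node([Node("a", 0.95, 2), Node("b", 0.40, 5), Node("c", 0.30, 7)]).name)  # -> b

Node "a" has the fewest active requests but is skipped because its old generation is nearly full; the request goes to "b" instead.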

4.
Clustered architecture processors are preferred for embedded systems because centralized register file architectures scale poorly in terms of clock rate, chip area, and power consumption. Scheduling for clustered architectures involves spatial concerns (where to schedule) as well as temporal concerns (when to schedule). Various clustered VLIW configurations, connectivity types, and inter‐cluster communication models present different performance trade‐offs to a scheduler. The scheduler is responsible for resolving the conflicting requirements of exploiting the parallelism offered by the hardware and limiting the communication among clusters to achieve better performance. In this paper, we describe our experience with developing a pragmatic scheme and also a generic graph‐matching‐based framework for cluster scheduling based on a generic and realistic clustered machine model. The proposed scheme effectively utilizes the exact knowledge of available communication slots, functional units, and load on different clusters as well as future resource and communication requirements known only at schedule time. The proposed graph‐matching‐based framework for cluster scheduling resolves the phase‐ordering and fixed‐ordering problems associated with earlier schemes for scheduling clustered VLIW architectures. The experimental evaluation in the context of a state‐of‐the‐art commercial clustered architecture (using real‐world benchmark programs) reveals a significant performance improvement over earlier proposals, which were mostly evaluated using compiled simulation of hypothetical clustered architectures. Our results clearly highlight the importance of considering the peculiarities of commercial clustered architectures and of hard‐nosed performance measurement. Copyright © 2007 John Wiley & Sons, Ltd.
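The cluster-assignment step can be viewed as a minimum-cost bipartite matching between the instructions ready in a cycle and the available cluster slots. A toy illustration (the cost numbers are invented, and the paper's framework additionally models communication slots and future resource needs):

import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i][k]: penalty of placing ready instruction i on cluster k this cycle,
# e.g. inter-cluster copies its operands would need plus current cluster load.
cost = np.array([
    [0, 2, 9],
    [0, 3, 9],
    [5, 1, 2],
])

rows, cols = linear_sum_assignment(cost)   # minimum-cost bipartite matching
for i, k in zip(rows, cols):
    print(f"instruction {i} -> cluster {k} (cost {cost[i, k]})")

Here the matching assigns instruction 0 to cluster 1 and instruction 1 to cluster 0 for a total cost of 4, whereas a greedy per-instruction choice (instruction 0 grabbing cluster 0 first) would cost 5 — the fixed-ordering pitfall that a matching-based formulation avoids.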

5.
Kenneth P. Birman. Software, 1999, 29(9): 741–774
By understanding how real users have employed reliable multicast in real distributed systems, we can develop insight concerning the degree to which this technology has matched expectations. This paper reviews a number of applications with that goal in mind. Our findings point to trade‐offs between the form of reliability used by a system and its scalability and performance. We also find that to reach a broad user community (and a commercially interesting market) the technology must be better integrated with component and object‐oriented systems architectures. Looking closely at these architectures, however, we identify some assumptions about failure handling which make reliable multicast difficult to exploit. Indeed, the major failures of reliable multicast are associated with attempts to position it within object‐oriented systems in ways that focus on transparent recovery from server failures. The broader opportunity appears to involve relatively visible embeddings of these tools into object‐oriented architectures enabling knowledgeable users to make trade‐offs. Fault‐tolerance through transparent server replication may be better viewed as an unachievable holy grail. Copyright © 1999 John Wiley & Sons, Ltd.

6.
If production trade‐offs—which represent simultaneously feasible exchanges in the inputs and outputs of decision‐making units (DMUs)—are added to an integer production possibility set (IPPS), a new IPPS is produced; conventional axioms of production, however, do not generate a new IPPS. This paper develops the axiomatic foundation of data envelopment analysis (DEA) for integer‐valued inputs and outputs in the presence of production trade‐offs by introducing a new axiom of "natural trade‐offs." First, a mixed‐integer linear programming formulation called integer DEA trade‐off (IDEA‐TO) is presented for computing efficiency scores and reference points. The previously presented numeration algorithm (NA) method is then improved into an improved numeration algorithm (INA) method for solving integer DEA (IDEA) models. Finally, a comparison of the two methods and a generalized INA method for solving the IDEA‐TO model are presented.
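Schematically, an input-oriented integer DEA model with trade-offs takes a form like the following (a sketch for orientation only, not the paper's exact IDEA-TO formulation; P_t and Q_t denote the input- and output-side trade-off vectors, and the reference targets are restricted to integers):

\[
\begin{aligned}
\min\;& \theta \\
\text{s.t.}\;& \sum_{j} \lambda_{j} x_{ij} + \sum_{t} \pi_{t} P_{it} \;\le\; \tilde{x}_{i} \;\le\; \theta\, x_{io} && \forall i, \\
& \sum_{j} \lambda_{j} y_{rj} + \sum_{t} \pi_{t} Q_{rt} \;\ge\; \tilde{y}_{r} \;\ge\; y_{ro} && \forall r, \\
& \lambda_{j},\ \pi_{t} \ge 0, \qquad \tilde{x}_{i},\ \tilde{y}_{r} \in \mathbb{Z}_{+},
\end{aligned}
\]

where DMU o is evaluated against integer reference points (\tilde{x}, \tilde{y}) that must remain producible under the trade-off-augmented technology.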

7.
Several large real‐world applications have been developed for distributed and parallel architectures. We examine two different program development approaches. The first is the use of a high‐level programming paradigm, which reduces the time to create a parallel program dramatically, though sometimes at the cost of reduced performance; a source‐to‐source compiler has been employed to automatically compile programs written in such a paradigm into message-passing code. The second is manual program development using a low‐level programming paradigm—such as message passing—which enables the programmer to fully exploit a given architecture at the cost of a time‐consuming and error‐prone effort. Performance tools play a central role in supporting the performance‐oriented development of applications for distributed and parallel architectures. SCALA—a portable instrumentation, measurement, and post‐execution performance analysis system for distributed and parallel programs—has been used to analyze and guide application development: selectively instrumenting and measuring code versions, comparing performance information across program executions, computing a variety of important performance metrics, detecting performance bottlenecks, and relating performance information back to the input program. We present several experiments applying SCALA to real‐world applications, conducted on a NEC Cenju‐4 distributed‐memory machine and a cluster of heterogeneous workstations and networks. Copyright © 2001 John Wiley & Sons, Ltd.

8.
The increasing use of parallel/distributed applications demands continuous support for taking full advantage of parallel computing power. This includes the evolution of performance analysis and tuning tools that automatically obtain better application behavior. Different approaches and tools have been proposed, and they are continuously evolving to cover the requirements and expectations of users. One such tool is MATE (Monitoring, Analysis and Tuning Environment), which provides automatic and dynamic tuning for parallel/distributed applications. The knowledge used by MATE to analyze and take decisions is based on performance models that include a set of performance parameters and a set of mathematical expressions modeling the solution of the performance problem. These elements are used by the tuning environment to conduct the monitoring and analysis steps, respectively. The tuning phase depends on the results of the performance analysis. This paper presents a methodology for specifying performance models. Each performance model specification can be automatically and transparently translated into a piece of software code encapsulating the knowledge, ready to be included straightforwardly in MATE. Applying this methodology, the user does not have to be involved in the implementation details of MATE, which makes the usage of the tool more transparent. Copyright © 2011 John Wiley & Sons, Ltd.

9.
Contractors in the construction sector face several trade‐offs between time and cost. In the time–cost trade‐off (TCT), a contractor can shorten a project's completion time by assigning more resources to activities—that is, by spending more money—to reduce the execution times of project activities. Contractors also often finance their projects through bank credit lines; if a credit limit is reached, the start times of some activities must be delayed until cash becomes available again, which can lengthen the project, so contractors face a time–credit trade‐off as well. In this work, we consider these two trade‐offs simultaneously and use mixed integer linear programming (MILP) to model the contractor time–cost–credit trade‐off (TCCT) problem. The MILP model minimizes the project execution time given the contractor's budgetary and financial constraints. In addition to the MILP model, we develop a heuristic solution algorithm for the problem. Through a set of benchmark instances, we study the effectiveness of the heuristic algorithm and the computation time of the exact model, finding that a good upper bound for the MILP reduces computation time. We also examine practical aspects of the problem, highlighting the importance of expediting contractor payments and of selecting a financially stable contractor. Finally, we use our MILP model to help a contractor bid for a project.
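A schematic skeleton of such a model (illustrative only — the paper's MILP adds the credit-line cash-flow constraints and further detail): each activity a runs in exactly one mode m with duration d_{am} and cost c_{am},

\[
\begin{aligned}
\min\;& C_{\max} \\
\text{s.t.}\;& \sum_{m} x_{am} = 1 && \forall a, \\
& S_{b} \;\ge\; S_{a} + \sum_{m} d_{am}\, x_{am} && \forall (a,b) \in E, \\
& S_{a} + \sum_{m} d_{am}\, x_{am} \;\le\; C_{\max} && \forall a, \\
& \sum_{a}\sum_{m} c_{am}\, x_{am} \;\le\; B, \qquad x_{am}\in\{0,1\},\; S_{a}\ge 0,
\end{aligned}
\]

where E is the precedence relation and B the budget. The time–credit trade-off enters through additional constraints (omitted here) that keep cumulative expenditure in every period within the credit limit until progress payments arrive, possibly forcing activity start times S_a to be postponed.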

10.
With the widespread use of distributed application systems, the requirements on performance testing keep rising, and performance testing can usually only be carried out with the help of automated performance testing tools. However, assessing the capability maturity of such tools, and thereby providing a basis for choosing the appropriate one, remains a major difficulty for performance testers. To address this problem, this paper proposes a framework that evaluates the capability of performance testing tools from three aspects: reliability, testing capability, and resource management capability. Each aspect consists of a set of evaluation characteristics. Finally, three of the most popular products currently on the market are analyzed and compared as examples.

11.
As the most widely deployed network security product, a firewall's own performance has a decisive influence on the actual bandwidth available to end users; consequently, the performance requirements on firewalls keep rising, and performance testing has become the most important part of firewall testing. This paper discusses performance testing metrics and commonly used testing tools in light of firewall performance indicators, and proposes a method for evaluating measured indicator values using measurement uncertainty.
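For the measurement-uncertainty method mentioned, the standard (GUM) Type A evaluation applies: for n repeated measurements x_1, ..., x_n of, say, firewall throughput,

\[
\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i,
\qquad
u(\bar{x}) = \sqrt{\frac{1}{n(n-1)}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^{2}},
\qquad
U = k\,u(\bar{x}),
\]

and the indicator is reported as \bar{x} \pm U, typically with coverage factor k = 2 for roughly 95% confidence. (This is the standard GUM procedure; the paper's specific evaluation may differ in detail.)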

12.
This paper investigates whether AspectJ can be used for efficient profiling of Java programs. Profiling differs from other applications of AOP (e.g. tracing), since it necessitates efficient and often complex interactions with the target program. As such, it was uncertain whether AspectJ could achieve this goal. Therefore, we investigate four common profiling problems (heap usage, object lifetime, wasted time and time‐spent) and report on how well AspectJ handles them. For each, we provide an efficient implementation, discuss any trade‐offs or limitations and present the results of an experimental evaluation into the costs of using it. Our conclusions are mixed. On the one hand, we find that AspectJ is sufficiently expressive to describe the four profiling problems and reasonably efficient in most cases. On the other hand, we find several limitations with the current AspectJ implementation that severely hamper its suitability for profiling. Copyright © 2006 John Wiley & Sons, Ltd.

13.
David R. Crowe. Software, 1991, 21(5): 465–477
This paper describes a set of CPU usage analysis tools, named CPU Probes, offering the benefits of both hardware and software tools with few of their disadvantages. CPU probes use a serial line unit or a programmable interval timer as a clock to sample the CPU usage of a system, yielding fast and accurate CPU usage estimates. CPU usage can be monitored for all processes simultaneously or for as little as a single machine instruction. CPU probes are well suited to monitoring real-time system performance. These tools have been implemented on PDP-11/RSX and UNIX System V/386 systems.
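The interval-timer sampling idea generalizes readily. A minimal sketch of timer-driven CPU sampling in Python on a Unix system (illustrative only — the original tools sample at the operating-system level, not inside an interpreter):

import signal
from collections import Counter

samples = Counter()

def record(signum, frame):
    # `frame` is the stack frame the timer interrupt caught executing.
    samples[frame.f_code.co_name] += 1

signal.signal(signal.SIGPROF, record)
# Fire every 10 ms of consumed CPU time (user + system), like a PIT-based probe.
signal.setitimer(signal.ITIMER_PROF, 0.01, 0.01)

def busy():
    return sum(i * i for i in range(500_000))

for _ in range(40):
    busy()

signal.setitimer(signal.ITIMER_PROF, 0)   # stop sampling
total = sum(samples.values())
for name, n in samples.most_common():
    print(f"{name}: {100 * n / total:.1f}% of samples")

Because the timer counts CPU time rather than wall-clock time, the sample counts estimate where the CPU is actually spent — the same statistical principle behind the CPU probes.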

14.
Database systems play an important role in nearly every modern organization, yet relatively little research effort has focused on how to test them. This paper discusses issues arising in testing database systems, presents an approach to testing database applications, and describes AGENDA, a set of tools to facilitate the use of this approach. In testing such applications, the state of the database before and after the user's operation plays an important role, along with the user's input and the system output. A framework for testing database applications is introduced. A complete tool set, based on this framework, has been prototyped. The components of this system are a parsing tool that gathers relevant information from the database schema and application, a tool that populates the database with meaningful data that satisfy database constraints, a tool that generates test cases for the application, a tool that checks the resulting database state after operations are performed by a database application, and a tool that assists the tester in checking the database application's output. The design and implementation of each component of the system are discussed. The prototype described here is limited to applications consisting of a single SQL query. Copyright © 2004 John Wiley & Sons, Ltd.
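The pre/post-state checking idea can be shown with a self-contained sketch (this is not AGENDA — AGENDA derives the population and the checks automatically from the schema; here both are written by hand against an invented schema):

import sqlite3

# Throwaway database whose schema stands in for the application's.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY,"
             " balance INTEGER NOT NULL CHECK (balance >= 0))")
# Populate with data that satisfy the schema constraints.
conn.executemany("INSERT INTO account VALUES (?, ?)", [(1, 100), (2, 50)])

def transfer(conn, src, dst, amount):
    """The 'application' under test."""
    conn.execute("UPDATE account SET balance = balance - ? WHERE id = ?", (amount, src))
    conn.execute("UPDATE account SET balance = balance + ? WHERE id = ?", (amount, dst))

before = dict(conn.execute("SELECT id, balance FROM account"))
transfer(conn, 1, 2, 30)
after = dict(conn.execute("SELECT id, balance FROM account"))

# Check the post-state against the expected effect of the operation.
assert after[1] == before[1] - 30 and after[2] == before[2] + 30
assert sum(after.values()) == sum(before.values())   # invariant: money is conserved

The test inspects the database state before and after the operation — exactly the role the framework assigns to its state-checking tool — rather than only the application's visible output.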

15.
This paper details a new bias‐dependent small‐signal modeling methodology for monolithic PIN diodes. The frequency‐dependent responses of intrinsic p‐i‐n structures are de‐embedded from monolithic microwave integrated circuit PIN diodes of varying size and layout configuration and fit, from 6 to 45 GHz, to a classical linear model at each of 15 different bias levels. This methodology results in a bias‐dependent intrinsic diode data set that shows excellent agreement with large samples of small‐signal measurements. The models are useful for comparing trade‐offs in electrical performance among PIN diodes of varying size and layout style. © 2001 John Wiley & Sons, Inc. Int J RF and Microwave CAE 11: 396–403, 2001.

16.
In this paper, we consider the design of an H∞ trade‐off dependent controller, that is, a controller such that, for a given linear time‐invariant plant, a set of performance trade‐offs parameterized by a scalar θ is satisfied. The controller state-space matrices are explicit functions of θ. This new problem is a special case of the design of a parameter-dependent controller for a parameter-dependent plant, which has many applications in automatic control. That design problem can be naturally formulated as a convex but infinite-dimensional optimization problem involving parameter-dependent Linear Matrix Inequality (LMI) constraints. In this paper, we propose finite-dimensional (parameter-independent) LMI constraints that are equivalent to the parameter-dependent ones. The parameter-dependent controller design is then formulated as a convex finite-dimensional LMI optimization problem. The result is applied to the trade‐off dependent controller design, and a numerical example demonstrates the practical value of the finite-dimensional optimization problem for the trade‐off dependent control application. Copyright © 2005 John Wiley & Sons, Ltd.
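For orientation, the textbook LMI characterization of an H∞ performance level γ that underlies such designs (the bounded real lemma — standard background, not the paper's parameter-dependent conditions): for the system ẋ = Ax + Bw, z = Cx + Dw, the closed loop satisfies ‖T_{zw}‖_∞ < γ if and only if there exists P = Pᵀ ≻ 0 with

\[
\begin{bmatrix}
A^{\top}P + PA & PB & C^{\top}\\
B^{\top}P & -\gamma I & D^{\top}\\
C & D & -\gamma I
\end{bmatrix} \prec 0 .
\]

Making γ (and hence the matrices) depend on the trade-off parameter θ is what turns this finite LMI into the infinite-dimensional, parameter-dependent family that the paper reduces back to finitely many constraints.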

17.
Dataflow specifications are suitable for describing both signal processing applications and the corresponding specialized hardware architectures, helping to close the hardware–software development gap. They can be exploited in automatic tools aimed at integrating multiple applications on the same coarse-grained computational substrate. In this paper, the multi-dataflow composer (MDC) tool, a novel automatic platform builder that exploits dataflow specifications to create run-time reconfigurable multi-application systems, is presented and evaluated. To prove the effectiveness of the adopted approach, a coprocessor for still image and video processing acceleration has been assembled and implemented on both FPGA and 90 nm ASIC technology. Savings of 60% in both area occupancy and power consumption are achieved with the MDC-generated coprocessor compared with an equivalent non-reconfigurable design, without performance losses. Thanks to the generality of the high-level dataflow specification approach, the tool can be applied successfully in different application domains.

18.
Program understanding can be assisted by tools that match patterns in the program source. Lexical pattern matchers provide excellent performance and ease of use, but have a limited vocabulary. Syntactic matchers provide more precision, but may sacrifice performance, robustness, or power. To achieve more of the benefits of both models, we extend the pattern syntax of AWK to support matching of abstract syntax trees, as demonstrated in a tool called TAWK. Its pattern syntax is language‐independent, based on abstract tree patterns. As in AWK, patterns can have associated actions, which in TAWK are written in C for generality, familiarity, and performance. The use of C is simplified by high‐level libraries and dynamic linking. To allow processing of program files containing non‐syntactic constructs such as textual macros, mechanisms have been designed that allow matching of 'language‐like' macros in a syntactic fashion. We survey and apply prototypical approaches to concretely demonstrate the trade‐offs in program processing. Our results indicate that TAWK can be used to quickly and easily perform a variety of common software engineering tasks, and the extensions to accommodate non‐syntactic features significantly extend the generality of syntactic matchers. Copyright © 2005 John Wiley & Sons, Ltd.
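The lexical-versus-syntactic trade-off is easy to demonstrate. A small sketch — in Python rather than TAWK's AWK-like pattern syntax, purely for illustration: a grep-style match counts every textual occurrence of a name, while an AST walk matches only genuine call sites:

import ast, re

src = '''
def f(x):
    return open(x).read()   # a real call to open
s = "do not open this file"  # 'open' in a string fools a lexical match
'''

# Lexical matching (grep-style): counts the word wherever it appears.
lexical_hits = len(re.findall(r"\bopen\b", src))

# Syntactic matching: walk the abstract syntax tree, match only call nodes.
tree = ast.parse(src)
syntactic_hits = sum(
    1
    for node in ast.walk(tree)
    if isinstance(node, ast.Call)
    and isinstance(node.func, ast.Name)
    and node.func.id == "open"
)

print(lexical_hits, syntactic_hits)   # 3 vs 1: the AST matcher ignores comments and strings

The extra precision is what the abstract calls the syntactic matcher's advantage; the cost is that the matcher needs a parser, which is exactly where non-syntactic constructs such as textual macros become a problem.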

19.
Antireflection (AR) coatings can improve the viewing experience of a display, including mobile electronic devices such as smartphones, tablets, and laptops that are typically operated on battery power and protected with chemically strengthened glass. In this work, we discuss the trade‐offs between optimal user viewing experience in brightly lit environments and battery lifetime for an AR versus a non‐AR mobile display. We show that under 400–1,000 lux ambient illumination, an AR‐based display can be operated at >30% lower luminance than a non‐AR display with similar human perception of contrast based on a perceptual contrast length model, resulting in a potential improvement of >15% in device battery lifetime and a similar proportion of energy savings.
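The connection between the two figures is simple arithmetic under an assumption the abstract implies but does not state: if the display accounts for a fraction f of total device power (f ≈ 0.5 is a common ballpark for smartphones at high brightness) and panel power scales roughly linearly with luminance, then a 30% luminance reduction saves about 0.30 × f ≈ 0.30 × 0.5 = 0.15, i.e. roughly 15% of total device energy — consistent with the reported battery-lifetime improvement.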

20.
Research on SIP Protocol Testing Methods and Testing Tools   (total citations: 2; self-citations: 0; citations by others: 2)
This paper introduces the technical background of SIP performance testing, including the performance evaluation criteria and test scenarios used in testing. Based on a comprehensive study of SIP performance testing methods and tools, a new working mode for a testing tool and a new media-stream transport mechanism are designed, enabling the tool to support tests with a large number of concurrent media streams; a new solution is also proposed that overcomes the difficulty of encoding and decoding the media source. This SIP performance testing tool can provide comprehensive and accurate performance results for the system under test, improving the credibility of the test results.
