Similar literature (20 results)
1.
Statistical Ranking and Selection (R&S) is a collection of experiment design and analysis techniques for selecting the “population” with the largest or smallest mean performance from among a finite set of alternatives. R&S procedures have received considerable research attention in the stochastic simulation community, and they have been incorporated in commercial simulation software. One of the ways that R&S procedures are evaluated and compared is via the expected number of samples (often replications) that must be generated to reach a decision. In this paper we argue that sampling cost alone does not adequately characterize the efficiency of ranking-and-selection procedures, and that the cost of switching among the simulations of the alternative systems should also be considered. We introduce two new, adaptive procedures, the minimum switching sequential procedure and the multi-stage sequential procedure with tradeoff, that provide the same statistical guarantees as existing procedures and significantly reduce the expected total computational cost of application, especially when applied to favorable configurations of the competing means.
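The abstract does not give the procedures themselves, but the accounting it argues for is easy to illustrate: charge a schedule of simulation runs for both the observations it takes and the switches it makes between systems. The sketch below uses hypothetical cost parameters and schedules; it is not the minimum switching or tradeoff procedure from the paper.

```python
# Minimal sketch (not the paper's procedures): the total cost of a sampling
# schedule when both sampling and switching between simulated systems are
# costly. The cost parameters and example schedules are illustrative.

def total_cost(schedule, c_sample=1.0, c_switch=50.0):
    """schedule: sequence of system indices in the order they are sampled."""
    n_samples = len(schedule)
    n_switches = sum(1 for a, b in zip(schedule, schedule[1:]) if a != b)
    return c_sample * n_samples + c_switch * n_switches

# Round-robin sampling (one observation per visit) switches constantly ...
round_robin = [k for _ in range(100) for k in range(3)]
# ... while batching observations per system needs only a few switches.
batched = [k for k in range(3) for _ in range(100)]

print(total_cost(round_robin))  # 300 samples, 299 switches
print(total_cost(batched))      # 300 samples, 2 switches
```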

2.
We present two-stage experiment designs for use in simulation experiments that compare systems in terms of their expected (long-run average) performance. These procedures simultaneously achieve the following with a prespecified probability of being correct: (i) find the best system or a near-best system; (ii) identify a subset of systems that differ from the best by more than a practically insignificant amount; and (iii) provide a lower confidence bound on the probability that the best or near-best system will be selected. All of the procedures assume normally distributed data, but versions allow for unequal variances and common random numbers.
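The abstract does not reproduce the procedures, but a Rinott-type two-stage sample-size rule conveys the basic mechanics: a first stage estimates each system's variance, and the second-stage sample size grows with that variance relative to the indifference-zone parameter delta. The critical constant h below is assumed to be supplied (it normally comes from tables or numerical routines for the chosen confidence level), and the data are illustrative.

```python
import math
import random

def second_stage_sizes(first_stage, delta, h):
    """Rinott-style rule: N_i = max(n0, ceil((h * S_i / delta) ** 2)).
    first_stage maps each system to its first-stage observations (assumed
    normal); h is the procedure's critical constant, taken as given here."""
    sizes = {}
    for system, obs in first_stage.items():
        n0 = len(obs)
        mean = sum(obs) / n0
        s2 = sum((x - mean) ** 2 for x in obs) / (n0 - 1)   # sample variance
        sizes[system] = max(n0, math.ceil((h * math.sqrt(s2) / delta) ** 2))
    return sizes

random.seed(1)
stage1 = {i: [random.gauss(i, 2.0) for _ in range(10)] for i in range(4)}
print(second_stage_sizes(stage1, delta=0.5, h=2.9))   # h = 2.9 is illustrative
```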

3.
Technometrics, 2013, 55(3): 208-219
Several forecast-based monitoring methods have been developed for autocorrelated data. One effective method is to use the forecasts based on the exponentially weighted moving average (EWMA). However, during the transition period of dynamic systems, the forecast-based monitoring procedure becomes inadequate due to its use of constant time series model parameters. In this article we present an adaptive forecast-based monitoring approach that performs well on dynamic systems. We examine two competing procedures: the adaptive time series model and the adaptive EWMA. We use a plastic extrusion process with first-order dynamics to illustrate the application of these two procedures, and we also evaluate the performance of the two procedures via simulation.
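As a rough sketch of the non-adaptive baseline the article starts from, the EWMA can serve as a one-step-ahead forecaster for autocorrelated data, with control charting then applied to the forecast errors; the adaptive variants studied in the article are not reproduced here, and the smoothing constant and test data are illustrative.

```python
import random

def ewma_forecast_errors(x, lam=0.2):
    """One-step-ahead EWMA forecasts of x and the resulting forecast errors.
    Monitoring is then applied to the errors rather than to the raw data."""
    z = x[0]                           # initialize the EWMA at the first point
    errors = []
    for xt in x[1:]:
        errors.append(xt - z)          # forecast error e_t = x_t - z_{t-1}
        z = lam * xt + (1 - lam) * z   # update the EWMA (the next forecast)
    return errors

# Illustrative autocorrelated data from an AR(1)-like recursion.
random.seed(0)
x, prev = [], 0.0
for _ in range(200):
    prev = 0.8 * prev + random.gauss(0, 1)
    x.append(prev)

e = ewma_forecast_errors(x)
print(sum(e) / len(e))   # in-control errors should be roughly centered at zero
```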

4.
The relative performance of color constancy algorithms is evaluated. We highlight some problems with previous algorithm evaluation and define more appropriate testing procedures. We discuss how best to measure algorithm accuracy on a single image as well as suitable methods for summarizing errors over a set of images. We also discuss how the relative performance of two or more algorithms should best be compared, and we define an experimental framework for testing algorithms. We reevaluate the performance of six color constancy algorithms using the procedures that we set out and show that this leads to a significant change in the conclusions that we draw about relative algorithm performance as compared with those from previous work.
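The abstract does not name the error measures; a common convention in this literature is the angular error between the estimated and true illuminant for a single image, summarized over an image set by the median rather than the mean. The sketch below assumes that convention and uses made-up illuminant estimates.

```python
import math

def angular_error(est, true):
    """Angle (degrees) between an estimated and a ground-truth illuminant (RGB)."""
    dot = sum(e * t for e, t in zip(est, true))
    ne = math.sqrt(sum(e * e for e in est))
    nt = math.sqrt(sum(t * t for t in true))
    c = max(-1.0, min(1.0, dot / (ne * nt)))
    return math.degrees(math.acos(c))

def median(values):
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

errors = [angular_error(e, t) for e, t in [
    ((1.0, 0.9, 0.8), (1.0, 1.0, 1.0)),
    ((0.7, 1.0, 0.9), (0.8, 1.0, 1.0)),
    ((1.0, 0.5, 0.4), (1.0, 0.6, 0.5)),
]]
print(median(errors))   # the median is more robust than the mean to outlier images
```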

5.
This paper gives an overview of test procedures developed to assess the performance of full field digital mammography systems. We make a distinction between tests of the individual components of the imaging chain and global system tests. Most tests are not yet fully standardised. Where possible, we illustrate the test methodologies on a selenium flat-panel system.

6.
Kanban control systems have been around for decades and have been used to control the work-in-process of manufacturing systems. Lately, many variations of the basic control system have been developed; however, much of the work in the development and comparison of control systems has focused on a single-stage manufacturing system producing a single product type. In this research, we present procedures for optimising multiple product kanban control systems, namely Base Stock, Traditional Kanban Control System and Extended Kanban Control System (both dedicated and shared type). We then conduct a detailed simulation study to compare the performance of the systems using a common total cost measure. Numerical results show that the dedicated and shared-extended kanban control systems outperform the other two systems. The study also shows that, in spite of their different schematics and contrary to conventional wisdom, the performance of the dedicated and shared-extended kanban control systems does not differ much.

7.
In this study, we propose a class of distribution-free progressive censoring test procedures for the multi-sample location problem using a stage-wise partially sequential sampling technique. For this, we draw a fixed number of sample observations from one of the populations (say, control) and a random number of observations from the other populations (say, treatments) using suitable stopping rules. At each stage, two types of control groups, termed the fixed control group (FCG) and the updated control group (UCG), are considered. Suitable stopping rules are constructed based on quantiles of the FCG and UCG observations separately. At each stage, the FCG consists of the initial control observations only, while the UCG consists of the stage-wise updated combined control observations and previous treatment observations. We examine several large-sample results for the proposed tests and perform numerical studies to compare the performance of the tests based on the FCG and UCG procedures. In addition, we numerically compare the performance of the progressive censoring test procedures with the corresponding competitive terminal test procedure.

8.
I. J. Good, Technometrics, 2013, 55(1): 125-132
This paper describes from first principles the direct calculation of the operating characteristic (O.C.) function, the probability of accepting the hypothesis θ = θ0, and the average sample number (A.S.N.) required to terminate the test, for any truncated sequential test once the acceptance, rejection, and continuation regions are specified at each stage. What is needed is to regard a sequential test as a step-by-step random walk, which is a Markov chain. The method is contrasted with Wald's, and two examples are included.
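The random-walk view the paper describes amounts to propagating state probabilities forward through a finite Markov chain and absorbing probability mass at the stopping boundaries. The sketch below does this for a Bernoulli sequential test whose state is the number of successes after n observations; the acceptance/rejection boundaries and truncation point are illustrative, not taken from the paper.

```python
# Sketch of the Markov-chain calculation: push the probability of each
# (n, successes) state forward one observation at a time, absorbing mass
# whenever the illustrative acceptance/rejection boundaries (or truncation)
# are hit. Returns the O.C. (probability of acceptance) and the A.S.N.

def oc_and_asn(theta, n_max=20,
               accept_if=lambda n, s: s <= n - 15,    # >= 15 failures observed
               reject_if=lambda n, s: s >= 15):       # >= 15 successes observed
    states = {0: 1.0}           # successes -> probability among continuing paths
    p_accept = 0.0
    asn = 0.0
    for n in range(1, n_max + 1):
        nxt = {}
        for s, p in states.items():
            for ds, q in ((1, theta), (0, 1 - theta)):   # one more observation
                nxt[s + ds] = nxt.get(s + ds, 0.0) + p * q
        states = {}
        for s, p in nxt.items():
            if accept_if(n, s) or reject_if(n, s) or n == n_max:
                asn += n * p                            # test terminates here
                if accept_if(n, s) or (n == n_max and not reject_if(n, s)):
                    p_accept += p                       # accept (also at truncation)
                # otherwise the mass is absorbed into rejection
            else:
                states[s] = p                           # continue sampling
    return p_accept, asn

print(oc_and_asn(0.5))   # O.C. and A.S.N. at theta = 0.5
```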

9.
Instead of using expensive multiprocessor supercomputers, parallel computing can be implemented on a cluster of inexpensive personal computers. Commercial access to high-performance parallel computing is also available on a pay-per-use basis. However, literature on the use of parallel computing in production research is limited. In this paper, we present a dynamic cell formation problem in manufacturing systems solved by a parallel genetic algorithm approach. This method improves on our previous work using a sequential genetic algorithm (GA). Six parallel GAs for the dynamic cell formation problem were developed and tested. The parallel GAs are all based on the island model using migration of individuals but differ in their connection topologies. The performance of the parallel GA approach was evaluated against a sequential GA as well as off-the-shelf optimization software. The results are very encouraging. The considered dynamic manufacturing cell formation problem incorporates several design factors, including dynamic cell configuration, alternative routings, sequence of operations, multiple units of identical machines, machine capacity, workload balancing, production cost and other practical constraints.
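The island model itself is simple to sketch: independent sub-populations evolve in parallel and periodically exchange (migrate) good individuals along some connection topology. The toy below uses a ring topology and a OneMax objective purely for illustration; the paper's cell-formation encoding, operators, and the other topologies are not reproduced.

```python
import random

random.seed(42)

def fitness(bits):                      # toy objective: count of ones (OneMax)
    return sum(bits)

def evolve(pop, n_keep=10, p_mut=0.02):
    """One generation of a simple GA on a single island."""
    pop = sorted(pop, key=fitness, reverse=True)[:n_keep]   # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = random.sample(pop, 2)
        cut = random.randrange(1, len(a))
        child = a[:cut] + b[cut:]                           # one-point crossover
        child = [bit ^ (random.random() < p_mut) for bit in child]  # bit-flip mutation
        children.append(child)
    return pop + children

n_islands, pop_size, n_bits = 4, 20, 40
islands = [[[random.randint(0, 1) for _ in range(n_bits)]
            for _ in range(pop_size)] for _ in range(n_islands)]

for gen in range(50):
    islands = [evolve(pop) for pop in islands]   # islands evolve independently
    if gen % 10 == 9:                            # ring migration every 10 generations
        best = [max(pop, key=fitness) for pop in islands]
        for i, pop in enumerate(islands):
            pop[-1] = best[i - 1][:]             # best of island i-1 replaces one
                                                 # member of island i

print(max(fitness(ind) for pop in islands for ind in pop))
```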

10.
Drum–Buffer–Rope (DBR) is an alternative approach to manufacturing planning and control that has not been as formally tested as Material Requirements Planning (MRP) systems, which have been around for years. Yet, some reports indicate very good performance for DBR and the associated use of synchronous manufacturing principles. But how do these systems compare and relate to one another? Based on our experience of studying a Bearing Manufacturing Company that actually made the transition from an MRP system to a DBR system, we conduct simulation-based experiments in this paper with the objective of providing a more formal comparison between these two systems than what has been offered in prior literature. To our knowledge, this is the only study of its kind that uses a real-world setting to evaluate key differences and convergence points between comprehensive MRP and DBR systems. Our results show that even though the MRP and DBR systems position inventory differently and provide different dynamic responses to customer demand, there are several operating policies that can be implemented in either system. While the DBR performance in our simulation model was clearly superior to a nominal MRP implementation, we show that even within the constraints of the structural design of an MRP system, policy modification based on DBR principles can significantly reduce these performance differences. This finding has an important implication for practising managers, who need not necessarily switch from an MRP system to a DBR-type system (as was done by our case-study firm) in order to take advantage of attractive features of the DBR system. Future researchers can use our study to understand more fully how these Structural Design and Operating Policy differences can be further exploited to implement unique systems that combine the best features of both DBR and MRP systems.

11.
Hu Yu, International Journal of Production Research, 2013, 51(21): 6615-6633
Automated storage and retrieval systems (AS/RSs) are widely used for storing and retrieving products in all types of warehouses. Dwell point policy is a vital control policy that can greatly affect the performance of AS/RSs. In this paper, we study dwell point policies in AS/RSs with input and output stations at opposite ends of the aisle. We first propose two dwell point policies. We find that five existing dwell point policies in the literature are special cases of exactly one of our policies. We then develop expected travel time models for the proposed policies, solve these models with the objective of minimising expected travel time, and obtain closed-form solutions for the optimal dwell location(s). We prove that one proposed policy dominates the other in terms of expected travel time. Numerical experiments are performed to quantify the percentage gap of expected travel time between the proposed policies and policies in the literature. We find that, in some situations, the better proposed policy can achieve up to 8%–10% reduction in expected travel time in comparison with the best literature policy. A real-data case study validates that these situations arise with high probability in typical daily warehouse operations.
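The closed-form travel-time models are not given in the abstract, but the quantity being optimized can be illustrated with a Monte Carlo estimate: expected Chebyshev (simultaneous horizontal and vertical) travel from a candidate dwell point to the next requested location, assumed uniform over a normalized rack. This is a simplification of the paper's setting (it ignores the input/output stations at the two ends of the aisle), and the candidate dwell points are illustrative.

```python
import random

random.seed(3)

def expected_travel(dwell, n=200_000):
    """Monte Carlo estimate of expected Chebyshev travel time from a dwell point
    to the next requested location, uniform over a normalized rack [0,1] x [0,1]."""
    dx, dy = dwell
    total = 0.0
    for _ in range(n):
        x, y = random.random(), random.random()
        total += max(abs(x - dx), abs(y - dy))   # crane moves in both axes at once
    return total / n

for dwell in [(0.0, 0.0), (0.5, 0.5), (1.0, 0.0)]:
    print(dwell, round(expected_travel(dwell), 3))
# A mid-rack dwell point minimizes travel to a uniform next request; with input
# and output stations at opposite ends of the aisle, the optimal location shifts,
# which is what the paper's policies and travel-time models capture.
```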

12.
Sequential Monte Carlo techniques are evaluated for the nonlinear Bayesian filtering problem applied to systems exhibiting rapid state transitions. When systems show a large disparity between states (long periods of random diffusion about states interspersed with relatively rapid transitions), sequential Monte Carlo methods suffer from the problem known as sample impoverishment. In this paper, we introduce the maximum entropy particle filter, a new technique for avoiding this problem. We demonstrate the effectiveness of the proposed technique by applying it to highly nonlinear dynamical systems in geosciences and econometrics and comparing its performance with that of standard particle-based filters such as the sequential importance resampling method and the ensemble Kalman filter.
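For reference, the standard sequential importance resampling (bootstrap) filter that the paper compares against can be sketched in a few lines for a simple scalar model; the maximum entropy particle filter itself is not reproduced, and the state-space model and noise levels below are assumptions for illustration. The resampling step is where sample impoverishment shows up when the weights collapse onto a few particles.

```python
import math
import random

random.seed(0)

def sir_particle_filter(observations, n_particles=500, proc_std=0.5, obs_std=1.0):
    """Bootstrap/SIR filter for x_t = 0.9 * x_{t-1} + noise, y_t = x_t + noise."""
    particles = [random.gauss(0, 1) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # Propagate particles through the (assumed) state-transition model.
        particles = [0.9 * x + random.gauss(0, proc_std) for x in particles]
        # Weight by the observation likelihood.
        weights = [math.exp(-0.5 * ((y - x) / obs_std) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # Multinomial resampling: repeated draws of a few heavy particles are
        # what produces sample impoverishment.
        particles = random.choices(particles, weights=weights, k=n_particles)
    return estimates

# Simulate a short trajectory and filter the noisy observations of it.
x, xs, ys = 0.0, [], []
for _ in range(50):
    x = 0.9 * x + random.gauss(0, 0.5)
    xs.append(x)
    ys.append(x + random.gauss(0, 1.0))

est = sir_particle_filter(ys)
print(sum(abs(a - b) for a, b in zip(est, xs)) / len(xs))   # mean absolute error
```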

13.
A number of multi-criteria inventory classification (MCIC) methods have been proposed in the academic literature. However, most of this literature focuses on the development and comparison of methods for ranking stock keeping units (SKUs) in an inventory system, with little attention to the original and most important goal of the exercise, which is the combined service-cost inventory performance. Moreover, to the best of our knowledge these MCIC methods have never been compared in an empirical study. Such an investigation constitutes the objective of this paper. We first present the inventory performance evaluation method, which we illustrate on an example commonly used in the relevant literature consisting of 47 SKUs. Then, we present the empirical investigation, conducted by means of a large data-set of more than 9086 SKUs coming from a retailer in the Netherlands that sells do-it-yourself products. The results of the empirical investigation show that the MCIC methods that impose a descending ranking of the criteria, with a dominance of the annual dollar usage and unit cost criteria, have the lowest combined cost-service performance efficiency.
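The single-criterion baseline that MCIC methods generalize is classic ABC classification by annual dollar usage (unit cost times annual demand). A minimal sketch is below; the 80%/95% cumulative-usage cut-offs and the SKU data are conventional illustrations, not values from the paper or the retailer's data-set.

```python
def abc_classify(skus, a_cut=0.80, b_cut=0.95):
    """Classic ABC classification by annual dollar usage (unit cost x annual demand).
    The cumulative-usage cut-offs are conventional choices, not from the paper."""
    usage = {s: cost * demand for s, (cost, demand) in skus.items()}
    total = sum(usage.values())
    classes, cum = {}, 0.0
    for s in sorted(usage, key=usage.get, reverse=True):
        cum += usage[s] / total
        classes[s] = 'A' if cum <= a_cut else ('B' if cum <= b_cut else 'C')
    return classes

skus = {                       # SKU: (unit cost, annual demand) -- illustrative
    'S1': (50.0, 1200), 'S2': (5.0, 300), 'S3': (120.0, 40),
    'S4': (2.0, 5000),  'S5': (15.0, 800),
}
print(abc_classify(skus))
```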

14.
This paper studies the performance of static and dynamic scheduling approaches in vehicle-based internal transport (VBIT) systems and is one of the first to systematically investigate under which circumstances which scheduling method improves performance. In practice, myopic dispatching heuristics are usually used, often with look-ahead information. We argue that more advanced scheduling methods can help, depending on the circumstances. We introduce three basic scheduling approaches (insertion, combined, and column generation) for the static problem. We then extend these to a dynamic, real-time setting with rolling horizons. We propose two further real-time scheduling approaches: dynamic assignment with and without look-ahead. The performance of these five scheduling approaches is compared with two of the best-performing look-ahead dispatching rules known from the literature. The performance of the various approaches depends on the facility layout and work distribution. However, column generation, the combined heuristic, and the assignment approach with look-ahead consistently outperform the dispatching rules. Column generation can require substantial calculation time but delivers very good performance if sufficient look-ahead information is available. For large-scale systems, the combined heuristic and the dynamic assignment approach with look-ahead are recommended and have acceptable calculation times.

15.
We review the third-order nonlinear performance of pseudo-stilbene type azobenzenes with an eye to application in ultrafast optical signal processing. We discuss mechanisms responsible for the nonlinear response of the azobenzenes. By aggregating experimental data and theoretical trends reported in the literature, we identify five characteristic regions of optical nonlinear response. Analyzed with respect to Stegeman figures of merit, pseudo-stilbene type azobenzenes show promise for ultrafast optical signal processing in two spectral regions, one lying between the main and two-photon absorption resonances, and the other for wavelengths longer than the two-photon absorption resonance.

16.
In this article, we present an algorithm to construct high-order fully symmetric cubature rules for tetrahedral and pyramidal elements, with positive weights and integration points that lie in the interior of the domain. Cubature rules are fully symmetric if they are invariant to affine transformations of the domain. We divide the integration points into symmetry orbits, where each orbit contains all the points generated by the permutation stars; these relations are represented by equality constraints. The construction of symmetric cubature rules requires the solution of nonlinear polynomial equations with both inequality and equality constraints. For higher orders, we use an algorithm that consists of five sequential phases to produce the cubature rules. In the literature, symmetric numerical integration rules are available for the tetrahedron for orders p = 1–10 and 14, and for the pyramid up to p = 10. We have obtained fully symmetric cubature rules for both of these elements up to order p = 20. Numerical tests are presented that verify the polynomial precision of the cubature rules. Convergence studies are performed for the integration of exponential, weakly singular, and trigonometric test functions over both elements with flat and curved faces. As p increases, accuracy improves, though nonmonotonic convergence is observed.
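The polynomial-precision tests mentioned at the end can be reproduced for the tetrahedron using the exact monomial integrals over the unit reference element, where the integral of x^a y^b z^c equals a! b! c! / (a + b + c + 3)!. The sketch below checks the one-point centroid rule (degree 1) against this formula; the paper's high-order rules themselves are not reproduced here.

```python
import math

def exact_tet_monomial(a, b, c):
    """Exact integral of x^a * y^b * z^c over the reference tetrahedron
    {x, y, z >= 0, x + y + z <= 1}: a! b! c! / (a + b + c + 3)!."""
    return (math.factorial(a) * math.factorial(b) * math.factorial(c)
            / math.factorial(a + b + c + 3))

def check_rule(points, weights, degree):
    """Verify that a cubature rule integrates every monomial up to `degree`."""
    ok = True
    for a in range(degree + 1):
        for b in range(degree + 1 - a):
            for c in range(degree + 1 - a - b):
                approx = sum(w * (x ** a) * (y ** b) * (z ** c)
                             for (x, y, z), w in zip(points, weights))
                ok &= abs(approx - exact_tet_monomial(a, b, c)) < 1e-12
    return bool(ok)

# One-point rule at the centroid with weight equal to the element volume (1/6).
centroid_rule = ([(0.25, 0.25, 0.25)], [1.0 / 6.0])
print(check_rule(*centroid_rule, degree=1))   # True: exact for degree 1
print(check_rule(*centroid_rule, degree=2))   # False: a higher-order rule is needed
```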

17.
The two-point maximum entropy method (TPMEM) is a useful method for signal-to-noise ratio enhancement and deconvolution of spectra, but its efficacy is limited under conditions of high background offsets. This means that spectra with high average background levels, regions with high background in spectra with varying background levels, and regions of high signal-to-noise ratios are smoothed less effectively than spectra or spectral regions without these conditions. We report here on the cause of this TPMEM limitation and on appropriate baseline estimation and removal procedures that effectively minimize the effects on regularization. We also present a comparative analysis of TPMEM and Savitzky-Golay filtering to facilitate selection of the best technique under a given range of conditions.
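TPMEM is not available as a stock library routine, but the Savitzky-Golay half of the comparison, and the idea of removing the baseline before smoothing, can be sketched with standard tools. The synthetic spectrum, the linear baseline estimate, and the filter settings below are illustrative assumptions, not the article's procedures.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)

# Synthetic spectrum: two Gaussian peaks on a large, slowly varying background.
x = np.linspace(0, 100, 1000)
signal = (50 * np.exp(-0.5 * ((x - 30) / 2) ** 2)
          + 30 * np.exp(-0.5 * ((x - 70) / 3) ** 2))
background = 200 + 1.5 * x                  # the high offset that hampers regularization
noisy = signal + background + rng.normal(0, 3, x.size)

# Crude baseline estimate and removal (a generic illustration; the article's
# baseline estimation/removal procedure is more refined).
baseline = np.polyval(np.polyfit(x, noisy, 1), x)

# Savitzky-Golay smoothing of the baseline-corrected spectrum.
smoothed = savgol_filter(noisy - baseline, window_length=21, polyorder=3)

print(float(np.std(noisy - background - signal)))   # noise level in the raw data
print(float(np.std(smoothed - signal)))             # residual after correction and smoothing
```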

18.
The ideal observer sets an upper limit on the performance of an observer on a detection or classification task. The performance of the ideal observer can be used to optimize hardware components of imaging systems and also to determine another observer's relative performance in comparison with the best possible observer. The ideal observer employs complete knowledge of the statistics of the imaging system, including the noise and object variability. Thus computing the ideal observer for images (large-dimensional vectors) is burdensome without severely restricting the randomness in the imaging system, e.g., assuming a flat object. We present a method for computing the ideal-observer test statistic and performance by using Markov-chain Monte Carlo techniques when we have a well-characterized imaging system, knowledge of the noise statistics, and a stochastic object model. We demonstrate the method by comparing three different parallel-hole collimator imaging systems in simulation.

19.
We study the problem of sequencing n jobs on a three-stage flowshop with multiple identical machines at each stage (a flexible flowshop). The objective is to minimize the makespan. Since the problem is strongly NP-complete, we develop and compare several heuristic procedures of time complexity O(n log n). We derive the worst-case performance bound of one procedure. We have also developed several lower bounds that serve as a datum for comparison; the lower bound used in the evaluation is always the best among them. Extensive experiments were conducted to evaluate the performance of the proposed procedures, and preferences are drawn based on their average performance.
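The abstract does not describe the heuristics; for illustration only, a greedy list-scheduling rule shows how an O(n log n) heuristic for a three-stage flexible flowshop can be organized (sort the jobs once by a priority rule, then push each job through the stages, always taking the machine that frees up first). The priority rule, machine counts, and processing times below are assumptions, not the paper's procedures.

```python
import heapq

def flexible_flowshop_makespan(jobs, machines=(2, 3, 2)):
    """Greedy list-scheduling heuristic (not the paper's procedures).
    jobs: list of (p1, p2, p3) processing times for the three stages;
    machines: number of identical parallel machines at each stage."""
    order = sorted(jobs, key=lambda p: -sum(p))   # longest total work first (O(n log n))
    # One min-heap of machine-available times per stage.
    avail = [[0.0] * m for m in machines]
    for h in avail:
        heapq.heapify(h)
    makespan = 0.0
    for p in order:
        ready = 0.0                               # when the job enters the next stage
        for stage, duration in enumerate(p):
            machine_free = heapq.heappop(avail[stage])
            finish = max(machine_free, ready) + duration
            heapq.heappush(avail[stage], finish)
            ready = finish
        makespan = max(makespan, ready)
    return makespan

jobs = [(3, 5, 2), (6, 1, 4), (2, 4, 4), (5, 3, 1), (4, 2, 6)]
print(flexible_flowshop_makespan(jobs))
```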

20.
Supply chain networks are formed from complex interactions among several companies whose aim is to produce and deliver goods to customers at specified times and places. Computing the total lead time for customer orders entering such a complex network of companies is an important exercise. In this paper we present analytical models for evaluating the average lead times of make-to-order supply chains. In particular, we illustrate the use of generalized queueing networks to compute the mean and variance of the lead time. We present four interesting examples and develop queueing network models for them. The first two examples consider pipeline supply chains and compute the variance of lead time using queueing network approximations available in the literature. This analysis indicates that, for the same percentage increase in variance, an increase at a downstream facility has a far more disastrous effect than the same increase at an upstream facility. Through another example, we illustrate the point that coordinated improvements at all the facilities are important and that improvements at individual facilities may not always lead to improvements in supply chain performance. The existing literature on approximate methods of analysis of fork-join queueing systems assumes heavy traffic and requires tedious computations. We present two tractable approximate analytical methods for lead time computation in a class of fork-join queueing systems. Our method is based on the results presented by Clarke in 1961. For the case where the 'joining' servers of the queueing system are of the type D/N/1, we present an easy-to-use approximate method and illustrate its use in evaluating decisions regarding logistics (for instance, who should own the logistics fleet: the manufacturer or the vendor?) and in computing simple upper bounds for delivery reliability, that is, the probability that customer-desired due dates are met.
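The pipeline (serial) part of the analysis can be illustrated by approximating each facility as an M/M/1 queue, whose sojourn time is exponential with rate mu - lambda, and summing means and variances along the chain under an independence assumption; the fork-join approximations built on Clarke's 1961 results are not reproduced here, and the rates, due date, and normal approximation below are illustrative.

```python
import math

def pipeline_lead_time(stages):
    """Mean and variance of total lead time for a serial (pipeline) supply chain,
    with each facility approximated as an M/M/1 queue and stages assumed
    independent. For M/M/1 the sojourn time is exponential with rate mu - lam,
    so its mean is 1/(mu - lam) and its variance is 1/(mu - lam)**2."""
    mean = sum(1.0 / (mu - lam) for lam, mu in stages)
    var = sum(1.0 / (mu - lam) ** 2 for lam, mu in stages)
    return mean, var

# Three facilities as (arrival rate, service rate); the numbers are illustrative.
stages = [(0.8, 1.0), (0.8, 1.2), (0.8, 1.5)]
mean, var = pipeline_lead_time(stages)
print(mean, var)

# Rough delivery-reliability figure P(lead time <= due date) from a normal
# approximation of total lead time (a simplification, not the paper's bound).
due_date = 12.0
z = (due_date - mean) / math.sqrt(var)
print(0.5 * (1 + math.erf(z / math.sqrt(2))))
```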
