Similar Literature
Found 10 similar records (search time: 156 ms)
1.
Karabuk, Suleyman; Wu, S. David. IIE Transactions, 2002, 34(9): 743-759
Semiconductor capacity planning is a cross-functional decision that requires coordination between the marketing and manufacturing divisions. We examine the main issues of a decentralized coordination scheme in a setting observed at a major US semiconductor manufacturer: marketing managers reserve capacity from manufacturing based on product demands, while attempting to maximize profit; manufacturing managers allocate capacity to competing marketing managers so as to minimize operating costs while ensuring efficient resource utilization. This cross-functional planning problem has two important characteristics: (i) both demands and capacity are subject to uncertainty; and (ii) all decision entities own private information while being self-interested. To study the issues of coordination we first formulate the local marketing and manufacturing decision problems as separate stochastic programs. We then formulate a centralized stochastic programming model (JCA), which maximizes the firm's overall profit. (JCA) establishes a theoretical benchmark for performance, but is only achievable when all planning information is public. If local decision entities are to keep their planning information private, we submit that the best achievable coordination corresponds to an alternative stochastic model (DCA). We analyze the relationship and the theoretical gap between (JCA) and (DCA), thereby establishing the price of decentralization. Next, we examine two mechanisms that coordinate the marketing and manufacturing decisions to achieve (DCA) using different degrees of information exchange. Using insights from the Auxiliary Problem Principle (APP), we show that under both coordination mechanisms the divisional proposals converge to the global optimal solution of (DCA). We illustrate the theoretical insights using numerical examples as well as a real-world case.
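A capacity-reservation model of this kind can be sketched as a scenario-based two-stage linear program. The instance below is a toy two-product, two-scenario example with hypothetical numbers, not the paper's (JCA)/(DCA) formulations: first-stage reservations x_i are chosen before demand is known, and second-stage sales y_si are capped by both the reservation and the realised demand.

```python
# Toy scenario-based capacity-reservation LP (all data hypothetical).
from scipy.optimize import linprog

prob     = [0.5, 0.5]             # scenario probabilities
demand   = [[60, 50], [30, 80]]   # demand[s][i]: scenario s, product i
price    = [10, 12]
cost     = [2, 3]
capacity = 100

# Variables: x1, x2, y11, y12, y21, y22; linprog minimises, so negate revenue.
c = cost + [-prob[s] * price[i] for s in range(2) for i in range(2)]
A_ub, b_ub = [[1, 1, 0, 0, 0, 0]], [capacity]        # x1 + x2 <= capacity
for s in range(2):
    for i in range(2):
        row = [0] * 6
        row[i], row[2 + 2 * s + i] = -1, 1           # y_si - x_i <= 0
        A_ub.append(row)
        b_ub.append(0)
bounds = [(0, None)] * 2 + [(0, demand[s][i]) for s in range(2) for i in range(2)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
expected_profit = -res.fun                           # optimal expected profit
```

A centralized model like (JCA) would solve one such program with all divisional data visible; the decentralized schemes the paper studies reach the same solution by exchanging proposals instead of private data.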

2.
Published studies and audits have documented that a significant number of U.S. Army systems are failing to demonstrate established reliability requirements. To address this issue, the Army developed a new reliability policy in December 2007 which encourages the use of cost-effective reliability best practices. The intent of this policy is to improve the reliability of Army systems and materiel, which in turn will have a significant positive impact on mission effectiveness, logistics effectiveness and life-cycle costs. Under this policy, the Army strongly encourages the use of Physics of Failure (PoF) analysis on mechanical and electronic systems. At the U.S. Army Materiel Systems Analysis Activity, PoF analyses are conducted to support contractors, program managers and engineers on systems in all stages of acquisition, from design through test and evaluation (T&E) to fielded systems. This article discusses using the PoF approach to improve the reliability of military products. PoF is a science-based approach to reliability that uses modeling and simulation to eliminate failures early in the design process by addressing root-cause failure mechanisms in a computer-aided engineering environment. The PoF approach involves modeling the root causes of failure such as fatigue, fracture, wear, and corrosion. Computer-aided design tools have been developed to address various loads, stresses, failure mechanisms, and failure sites. This paper focuses on understanding the cause and effect of the physical processes and mechanisms that cause degradation and failure of materials and components. A reliability assessment case study of circuit cards with dense circuitry is discussed. System-level dynamics models, component finite element models and fatigue-life models were used to reveal the underlying physics of the hardware in its mission environment.
Outputs of these analyses included forces acting on the system, displacements of components, accelerations, stress levels, weak points in the design and probable component life. This information may be used during the design process to make design changes early in the acquisition process, when changes are easier to make and much more cost-effective. Design decisions and corrective actions made early in the acquisition phase lead to improved efficiency and effectiveness of the T&E process. The intent is to make fixes prior to T&E, which will reduce test time and cost, allow more information to be obtained from testing and improve test focus. PoF analyses may be conducted for failures occurring during test to better understand the underlying physics of the problem and identify the root cause of failures, which may lead to better fixes for discovered problems, reduced test-fix-test iterations and reduced decision risk. The same analyses and benefits may be applied to systems exhibiting failures in the field.
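The fatigue-life models mentioned above are typically built from standard stress-life relations. The sketch below uses the generic Basquin high-cycle fatigue equation with Miner's linear damage rule; the material coefficients and the vibration load spectrum are hypothetical, not values from the Army case study.

```python
# Generic Basquin stress-life model with Miner's cumulative damage rule.
sigma_f = 900.0   # hypothetical fatigue strength coefficient (MPa)
b = -0.1          # hypothetical fatigue strength exponent

def cycles_to_failure(stress_amplitude_mpa):
    """Basquin relation: predicted cycles to failure at a given stress amplitude."""
    return (stress_amplitude_mpa / sigma_f) ** (1.0 / b)

def miner_damage(load_spectrum):
    """Miner's rule: damage = sum(n_i / N_i); failure is predicted at damage >= 1."""
    return sum(n / cycles_to_failure(s) for s, n in load_spectrum)

# Hypothetical mission load spectrum: (stress amplitude in MPa, applied cycles).
spectrum = [(300.0, 1e5), (200.0, 1e6), (100.0, 1e7)]
damage = miner_damage(spectrum)   # > 1 here, so failure would be predicted
```

In a full PoF analysis the stress amplitudes would come from the system dynamics and finite element models rather than being assumed, and the damage sum would flag which components need design changes before T&E.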

3.
Over the last 30 years, enterprise modelling has been recognised as an efficient tool for externalising the knowledge of companies in order to understand their operations, analyse how they run and design new systems from several points of view: functions, processes, decisions, resources and information technology. This paper describes the long evolution of enterprise modelling techniques as well as one of their future challenges: the transformation of enterprise models. The first part of the paper describes the evolution of enterprise modelling techniques from the divergence era to the convergence period. The second part focuses on recent advances in the use of enterprise models through model-driven approaches, interoperability problem-solving and simulation, all of which share the same characteristic: they rely on the transformation of enterprise models.

4.
A. Barreiros. Engineering Optimization, 2013, 45(5): 475-488
A new numerical approach to the solution of two-stage stochastic linear programming problems is described and evaluated. The approach avoids the solution of the first-stage problem and uses the underlying deterministic problem to generate a sequence of values of the first-stage variables which lead to successive improvements of the objective function towards the optimal policy. The model is evaluated using an example in which randomness is described by two correlated factors. The dynamics of these factors are described by stochastic processes simulated using lattice techniques. In this way, discrete distributions of the random parameters are assembled. The solutions obtained with the new iterative procedure are compared with solutions obtained with a deterministic equivalent linear programming problem. It is concluded that they are almost identical. However, the computational effort required for the new approach is negligible compared with that needed for the deterministic equivalent problem.
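A recombining binomial lattice is one standard way to assemble the discrete distributions mentioned above. The sketch below uses hypothetical parameters (u, d, p, s0), not the article's correlated factor models: after a few up/down steps, the factor takes a small set of values with binomial probability weights.

```python
# Discretising a random factor with a recombining binomial lattice
# (all parameters hypothetical).
from math import comb

u, d, p, steps, s0 = 1.1, 0.9, 0.5, 4, 100.0

# After `steps` periods the factor takes steps + 1 distinct values,
# each weighted by a binomial probability.
outcomes = [(s0 * u**k * d**(steps - k),
             comb(steps, k) * p**k * (1 - p)**(steps - k))
            for k in range(steps + 1)]

mean = sum(value * weight for value, weight in outcomes)
```

Each (value, weight) pair becomes one scenario of the deterministic equivalent program; correlating two factors, as in the article, would require a joint lattice over both.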

5.
In many real-world optimization problems, the underlying objective and constraint function(s) are evaluated using computationally expensive iterative simulations such as the solvers for computational electro-magnetics, computational fluid dynamics, the finite element method, etc. The default practice is to run such simulations until convergence using termination criteria, such as maximum number of iterations, residual error thresholds or limits on computational time, to estimate the performance of a given design. This information is used to build computationally cheap approximations/surrogates which are subsequently used during the course of optimization in lieu of the actual simulations. However, it is possible to exploit information on pre-converged solutions if one has the control to abort simulations at various stages of convergence. This would mean access to various performance estimates in lower fidelities. Surrogate assisted optimization methods have rarely been used to deal with such classes of problem, where estimates at various levels of fidelity are available. In this article, a multiple surrogate assisted optimization approach is presented, where solutions are evaluated at various levels of fidelity during the course of the search. For any solution under consideration, the choice to evaluate it at an appropriate fidelity level is derived from neighbourhood information, i.e. rank correlations between performance at different fidelity levels and the highest fidelity level of the neighbouring solutions. Moreover, multiple types of surrogates are used to gain a competitive edge. The performance of the approach is illustrated using a simple 1D unconstrained analytical test function. Thereafter, the performance is further assessed using three 10D and three 20D test problems, and finally a practical design problem involving drag minimization of an unmanned underwater vehicle. 
The numerical experiments clearly demonstrate the benefits of the proposed approach for such classes of problem.
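The neighbourhood rank-correlation idea can be illustrated in a few lines. The toy quadratic test function and noise level below are hypothetical, not the article's benchmarks: when low-fidelity ranks agree closely with high-fidelity ranks over neighbouring solutions, a cheaper pre-converged evaluation may be trusted for ranking candidates.

```python
# Rank correlation between low- and high-fidelity objective estimates
# (toy function and threshold are hypothetical).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, size=30)            # candidate designs
high = x**2                                    # fully converged objective
low = x**2 + rng.normal(0.0, 0.05, size=30)    # aborted-early estimate

rho, _ = spearmanr(low, high)                  # agreement of the two rankings
use_low_fidelity = bool(rho > 0.9)             # hypothetical acceptance threshold
```

In the article's setting the correlation is computed over a solution's neighbours, and the chosen fidelity level feeds the surrogate models rather than a direct comparison like this one.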

6.
This work considers an NP-hard scheduling problem that is fundamental to the production planning of flexible machines, to cutting-pattern industries and to the design of VLSI circuits. A new asynchronous collective search model is proposed, exploring the search space in a manner that concentrates effort on those areas of higher perceived potential. This is done with a coordination policy which enables the processes with the greatest performance to act as ‘attractors’ for those processes trapped in areas of worse perceived potential. Numerical results are obtained for problems of realistic industrial size, and those results are compared with previously published optimal solutions. This comparison demonstrates the effectiveness of the method: in 276 problems out of a set of 280 we are able to match previously reported optimal results.

7.
One of the most important decisions in hybrid make-to-stock/make-to-order (MTS/MTO) production systems is capacity coordination. This paper addresses capacity coordination of hybrid MTS/MTO production systems that deal with MTS, MTO and MTS/MTO products. The proposed model is developed to cope with order acceptance/rejection policy, order due-date setting, lot-sizing of MTS products and determination of the required capacity over the planning horizon. Additionally, a backward lot-sizing algorithm is developed to tackle the lot-sizing problem. The proposed model presents a general framework for deciding on capacity coordination without too many limiting mathematical assumptions. The model combines qualitative and quantitative modules to cope with the aforementioned problems. Finally, a real industrial case study is reported to demonstrate the validity and applicability of the proposed model. When the model was applied in the case study, considerable improvement was achieved.
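A backward lot-sizing pass can be sketched as follows. This is a minimal illustration with hypothetical demand and capacity data, not the paper's algorithm (which also handles due-date setting and order acceptance): demand is scheduled as late as possible, and any overflow beyond a period's capacity is pushed back to earlier periods, where it would be held as inventory.

```python
# Minimal backward lot-sizing sketch (hypothetical data).
def backward_lot_size(demand, capacity):
    """Schedule demand as late as capacity allows, pushing overflow earlier."""
    T = len(demand)
    plan = [0.0] * T
    carry = 0.0                      # work pushed back from later periods
    for t in range(T - 1, -1, -1):   # walk the horizon backwards
        need = demand[t] + carry
        plan[t] = min(need, capacity[t])
        carry = need - plan[t]
    if carry > 0:
        raise ValueError("infeasible: early demand exceeds available capacity")
    return plan

plan = backward_lot_size([30, 10, 50, 40], [35, 35, 35, 35])
```

Scheduling late keeps MTS inventory low while leaving early-period capacity free for incoming MTO orders, which is the trade-off such hybrid models balance.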

8.
In this work we present a mixed-integer model for the optimal design of production/transportation systems. In contrast to standard design problems, our model is originally based on a coupled system of differential equations capturing the dynamics of manufacturing processes and stocks. The problem is to select an optimal parameter configuration from a predefined set such that respective constraints are fulfilled. We focus on single commodity flows over large time scales as well as highly interconnected networks and propose a suitable start heuristic to ensure feasibility and to speed up the solution procedure.
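The stock dynamics behind such models follow a simple conservation law. The sketch below discretises dI/dt = production - demand with an explicit Euler step; the rates and initial stock are hypothetical, not the paper's network instances.

```python
# Explicit Euler simulation of a single stock level (hypothetical rates).
def simulate_stock(i0, production, demand, dt=1.0):
    """Integrate dI/dt = production - demand, clipping stock at zero."""
    levels, i = [i0], i0
    for u, d in zip(production, demand):
        i = max(0.0, i + dt * (u - d))   # stock cannot go negative
        levels.append(i)
    return levels

levels = simulate_stock(10.0, production=[5, 5, 5, 5], demand=[3, 8, 8, 3])
```

In the mixed-integer model, parameter choices (e.g. which processing rate to install) would enter as discrete decisions constraining trajectories like this one across a whole network of coupled stocks.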

9.
We study the integrated logistics network design and inventory stocking problem as characterized by the interdependency of the design and stocking decisions in service parts logistics. These two sets of decisions are usually considered sequentially in practice, and the associated problems are tackled separately in the research literature. The overall problem is typically further complicated due to time-based service constraints that provide lower limits on the percentage of demand satisfied within specified time windows. We introduce an optimization model that explicitly captures the interdependency between network design (location of facilities, and allocation of demands to facilities) and inventory stocking decisions (stock levels and their corresponding stochastic fill rates), and present computational results from our extensive experiments that investigate the effects of several factors including demand levels, time-based service levels and costs. We show that the integrated approach can provide significant cost savings over the decoupled approach (solving the network design first and inventory stocking next), shifting the whole efficient frontier curve between cost and service level to superior regions. We also show that the decoupled and integrated approaches may generate totally different solutions, even in the number of located facilities and in their locations, magnifying the importance of considering inventory as part of the network design models.
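The stochastic fill-rate term that couples the stocking and design decisions can be illustrated with the standard base-stock relation under Poisson lead-time demand; the demand rate and base-stock level below are hypothetical, not the paper's data.

```python
# Item fill rate under a base-stock policy with Poisson lead-time demand
# (standard textbook relation; parameters hypothetical).
from math import exp, factorial

def poisson_fill_rate(base_stock, lam):
    """P(lead-time demand < base_stock) for Poisson(lam) demand."""
    return sum(exp(-lam) * lam**k / factorial(k) for k in range(base_stock))

fr = poisson_fill_rate(5, lam=2.0)   # fraction of demand served from stock
```

In the integrated model, each candidate facility's demand allocation determines its lam, which in turn determines the stock level needed to meet the time-based service constraint; this is why location and stocking decisions cannot be cleanly separated.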

10.
This study presents an efficient methodology that derives design alternatives and performance criteria for safety functions/systems in commercial nuclear power plants. Determination of the design alternatives and intermediate-level performance criteria is posed as a reliability allocation problem. The reliability allocation is performed in a single step by means of the concept of two-tier noninferior solutions in the objective and risk spaces within the top-level probabilistic safety criteria (PSC). Two kinds of two-tier noninferior solutions are obtained: desirable design alternatives and intolerable intermediate-level PSC of safety functions/systems. The weighted Chebyshev norm (WCN) approach with an improved Metropolis algorithm in simulated annealing is used to find the two-tier noninferior solutions. This is very efficient in searching for the global minimum of the difficult multiobjective optimization problem (MOP), which results from the strong nonlinearity of a probabilistic safety assessment (PSA) model and the nonconvexity of the problem. The methodology developed in this study can be used as an efficient design tool for identifying desirable safety function/system alternatives and for determining intermediate-level performance criteria. The methodology is applied to a realistic streamlined PSA model developed from the PSA results of the Surry Unit 1 nuclear power plant, and proves very efficient in providing the intolerable intermediate-level PSC and desirable design alternatives of safety functions/systems.
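The weighted Chebyshev norm scalarisation at the core of this approach is easy to state. The sketch below uses the generic definition with hypothetical weights, objectives and reference point, not the PSA model: minimising max_i w_i * |f_i(x) - z_i*| can reach noninferior points even on nonconvex fronts, which weighted-sum scalarisation cannot.

```python
# Weighted Chebyshev norm scalarisation (generic definition; data hypothetical).
def wcn(objectives, weights, ideal):
    """Weighted Chebyshev norm: max_i w_i * |f_i - z_i*|."""
    return max(w * abs(f - z) for f, w, z in zip(objectives, weights, ideal))

# Two candidate designs evaluated on hypothetical (cost, risk) objectives,
# with the ideal (utopia) point at the origin:
a = wcn((4.0, 1.0), weights=(0.5, 0.5), ideal=(0.0, 0.0))
b = wcn((2.0, 3.0), weights=(0.5, 0.5), ideal=(0.0, 0.0))
preferred = "B" if b < a else "A"     # design B is closer in the WCN sense
```

In the study, this scalar value is what the simulated-annealing search minimises; sweeping the weight vector traces out the set of noninferior solutions.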

