Similar Literature
20 similar documents found
1.
We study the dynamic assignment of cross-trained servers to stations in understaffed lines with finite buffers. Our objective is to maximize the production rate. We identify optimal server assignment policies for systems with three stations, two servers, different flexibility structures, and either deterministic service times and arbitrary buffers or exponential service times and small buffers. We use these policies to develop server assignment heuristics for Markovian systems with larger buffer sizes that appear to yield near-optimal throughput. In the deterministic setting, we prove that the best possible production rate with full server flexibility and infinite buffers can be attained with partial flexibility and zero buffers, and we identify the critical skills required to achieve this goal. We then present numerical results showing that these critical skills, employed with an effective server assignment policy, also yield near-optimal throughput in the Markovian setting, even for small buffer sizes. Thus, our results suggest that partial flexibility is sufficient for near-optimal performance, and that flexibility structures that are effective for deterministic and infinite-buffered systems are also likely to perform well for finite-buffered stochastic systems.

2.
We consider a service system with two types of customers. In such an environment, the servers can either be specialists (or dedicated) who serve a specific customer type, or generalists (or flexible) who serve either type of customer. Cross-trained workers are more flexible and help reduce system delay, but also contribute to increased service costs and reduced service efficiency. Our objective is to provide insights into the choice of an optimal workforce mix of flexible and dedicated servers. We assume Poisson arrivals and exponential service times, and use matrix-analytic methods to investigate the impact of various system parameters such as the number of servers, server utilization, and server efficiency on the choice of server mix. We develop guidelines that would help managers decide whether they should be at one of the extremes, i.e., total flexibility or total specialization, or at some combination. If it is the latter, we offer an analytical tool to optimize the server mix.
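The trade-off described here can be illustrated with textbook queueing formulas. The sketch below is a minimal closed-form comparison, not the paper's matrix-analytic model: two dedicated M/M/1 servers versus a pooled M/M/2 team of cross-trained generalists whose service rate carries an assumed efficiency discount. All parameter values are illustrative.

```python
# Minimal sketch (assumed parameters): dedicated M/M/1 pair vs. pooled M/M/2 team.

def wq_mm1(lam: float, mu: float) -> float:
    """Mean time in queue for an M/M/1 system (requires lam < mu)."""
    rho = lam / mu
    return rho / (mu - lam)

def wq_mm2(lam: float, mu: float) -> float:
    """Mean time in queue for an M/M/2 system (requires lam < 2 * mu)."""
    rho = lam / (2 * mu)
    p_wait = 2 * rho ** 2 / (1 + rho)       # probability an arrival must wait
    return p_wait / (2 * mu - lam)

lam_total, mu, efficiency = 1.6, 1.0, 0.9   # assumed total demand, service rate, cross-training penalty

print("dedicated specialists:", round(wq_mm1(lam_total / 2, mu), 3))           # each type has its own server
print("pooled generalists   :", round(wq_mm2(lam_total, efficiency * mu), 3))  # both types share two servers
```

With these numbers the pooled team is actually slightly slower (about 4.2 vs. 4.0 time units in queue) because the assumed 10% cross-training slowdown offsets the pooling benefit at this high utilization; setting efficiency to 1.0 makes the pooled design more than twice as fast, which is the kind of mix trade-off the study formalizes.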

3.
Motivated by the technology division of a financial services firm, we study the problem of capacity planning and allocation for Web-based applications. The steady growth in Web traffic has affected the quality of service (QoS), as measured by response time (RT), for numerous e-businesses. In addition, limited understanding of system interactions and the lack of proper planning tools have impeded effective capacity management. Managers typically make decisions to add server capacity on an ad hoc basis when systems reach critical response levels. Very often this turns out to be too late, resulting in extremely long response times and system crashes. We present an analytical model to understand system interactions, with the goal of making better server capacity decisions based on the results. The model studies the relationships and important interactions between the various components of a Web-based application, using a continuous time Markov chain embedded in a queuing network as the basic framework. We use several structured aggregation schemes to appropriately represent a complex system, and demonstrate how the model can be used to quickly predict system performance, which facilitates effective capacity allocation decision making. Using simulation as a benchmark, we show that our model produces results within 5% accuracy at a fraction of the time of simulation, even at high traffic intensities. This knowledge helps managers quickly analyze the performance of the system and better plan server capacity to maintain desirable levels of QoS. We also demonstrate how to utilize a combination of dedicated and shared resources to achieve QoS using fewer servers.
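As a rough illustration of how a queueing model can turn per-tier measurements into a response-time prediction, the sketch below uses an open Jackson-style network of M/M/1 tiers rather than the embedded continuous-time Markov chain of the study; the tier names, visit counts, service rates, and arrival rate are all illustrative assumptions.

```python
# Rough sketch (assumed parameters): end-to-end response time from per-tier M/M/1 models.
arrival_rate = 40.0                        # requests per second entering the system

tiers = {                                  # tier -> (visits per request, service rate per visit)
    "web": (1.0, 100.0),
    "app": (2.0, 120.0),
    "db":  (3.5, 200.0),
}

response_time = 0.0
for name, (visits, mu) in tiers.items():
    lam = arrival_rate * visits            # tier throughput (forced-flow law)
    assert lam < mu, f"{name} tier is saturated"
    response_time += visits / (mu - lam)   # visits x M/M/1 mean residence time per visit
print(f"predicted end-to-end response time: {response_time * 1000:.1f} ms")
```

With these assumed rates the prediction is about 125 ms, dominated by the database tier; pushing the arrival rate toward that tier's saturation point (around 57 requests per second here) makes the predicted response time blow up, which is the kind of capacity cliff the abstract describes managers reacting to too late.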

4.
We develop stochastic models to help manage the pace of play on a conventional 18-hole golf course. These models are for group play on each of the standard hole types: par-3, par-4, and par-5. These models include the realistic feature that k−2 groups can be playing at the same time on a par-k hole, but with precedence constraints. We also consider par-3 holes with a "wave-up" rule, which allows two groups to be playing simultaneously. We mathematically determine the maximum possible throughput on each hole under natural conditions. To do so, we analyze the associated fully loaded holes, in which new groups are always available to start when the opportunity arises. We characterize the stationary interval between the times successive groups clear the green on a fully loaded hole, showing how it depends on the stage playing times. The structure of that stationary interval evidently can be exploited to help manage the pace of play. The mean of that stationary interval is the reciprocal of the capacity. The bottleneck holes are the holes with the least capacity. The bottleneck capacity is then the capacity of the golf course as a whole.

5.
For nonstationary queuing systems where demand varies over time, an important practical issue is scheduling the number of servers to be available at various times of the day. Widely used scheduling procedures typically involve adding servers at natural time points (e.g., on the hour or at half past the hour) during peak demand periods. Scheduling is often complicated by restrictions on the minimum amount of time (human) servers must work, the earliest (or latest) time a server is available, and limits on the maximum number of servers that can be used at any one time. This paper was motivated by experience with actual queuing systems that embodied such complications. For these systems common scheduling methods that used "natural" starting times for servers resulted in needlessly long customer waits. This research demonstrates that changing the starting times of servers by only a few minutes can have dramatic impacts on customer waiting times for extended periods. In addition, the results highlight the importance of server punctuality.

6.
Discretionary commonality is a form of operational flexibility used in multi-product manufacturing environments. Consider a case where a firm produces and sells two products. Without discretionary commonality, each product is made through a unique combination of input and production capacity. With discretionary commonality, one of the inputs could be used for producing both products, and one of the production capacities could be used to process different inputs for producing one of the products. In the latter case, the manager can decide, upon the realization of uncertainty, not only the quantities for different products (outputs) but also the means of transforming inputs into outputs. The objective of this study is to understand how the firm's value, its inventory levels for inputs, and its capacity levels for resources are affected by the demand characteristics and market conditions. In pursuing this research, we extend Van Mieghem and Rudi's (2002) newsvendor network model to allow for the modeling of product interdependence, demand functions, random shocks, and the firm's ex post pricing decision. Applying the general framework to the network with discretionary commonality, we discover that inventory and capacity management can be quite different compared to a network where commonality is non-discretionary. Among other results, we find that as the degree of product substitution increases, the relative need for discretionary commonality increases; as the market correlation increases, while the firm's value may increase for complementary products, the discretionary common input decreases but the dedicated input increases. A numerical study shows that discretionary flexibility and responsive pricing are strategic substitutes.

7.
Based on a benchmark job-lot manufacturing system, a simulation study was carried out to compare the performance of the just-in-time (JIT) shop control system Kanban with conventional job-shop control procedures. The shop control policies were tested under a good manufacturing environment, and the effects of job mix and load-capacity bottlenecks on the various shop control policies were examined. From the simulation results, it is inferred that there are shop control procedures that perform better than Kanban in a job shop. It was observed that, even with adequate capacity, bottleneck areas surface due to fluctuations in the shop load. Kanban is not appropriate in such a situation because capacity bottlenecks can significantly reduce the effectiveness of a pull system. The disparateness in the processing requirements for jobs can seriously undermine the performance of the shop. This is the type of shop environment in which the conventional shop control procedures are most effective. Although Kanban came out best when the load-capacity bottlenecks and the disparateness of the job mix were removed, the selected shop control variable combinations closely approximated the Kanban result. Although many features of JIT can be implemented in any system, companies trying to adopt JIT should remember that Kanban requires a rigid system intolerant of any deviation.

8.
Wiegand and Geller propose that the salient role of positive reinforcement in behavior analysis should enable a melding of behavior analysis with developments and concepts that have appeared under the banner of "positive psychology." However, as is true of many words, the term positive has more than one meaning, and the positive of positive reinforcement is not the same as the positive of "positive psychology." The latter is parasitic upon the vernacular, as "nice" or "desirable," whereas the former is analogous to the algebraic "add" as when an action produces the appearance (as contrasted with the removal) of some event. The distinct meanings become clear with recognition that addictive and criminal behavior often are maintained through positive reinforcement, and that negative reinforcement of behavior often is benign and beneficial to the persons involved. In addition, most of the phenomena identified with positive psychology that Wiegand and Geller propose to embrace entail more subtle and complex combinations of behavioral principles than these authors acknowledge. Wiegand and Geller also propose to accommodate vernacular assumptions in ways that separate their approach from its conceptual base; this risks impairing the effectiveness of their work whether or not its marketability would be improved.

9.
In this article, I argued that in contexts in which tipping is customary, there is a moral duty to tip or to explicitly tell the server that you will not be tipping. The evidence for this rests on anecdotes about people's mental states, and on customers' and servers' intuitions about the duties that would arise were a customer unable to tip his server. The promise is a speech act that is implicit in ordering food. The speech act must be matched by the server's uptake, which is implicit in her taking the order. The promise argument rests on an actual promise and not a merely hypothetical promise. If there is such a duty, then in the absence of explicit content, its content is likely set by convention. The convention is that customers tip 15–20%. Thus, customers have a duty to tip servers 15–20%. Other purported moral considerations do not ground this duty. These include custom, desirable incentives, role-relative obligation, and gratitude.

10.
Risk Analysis, 2018, 38(8): 1672-1684
A disease burden (DB) evaluation for environmental pathogens is generally performed using disability-adjusted life years, with the aim of providing a quantitative assessment of the health hazard caused by pathogens. A critical step in the preparation for this evaluation is the estimation of morbidity between exposure and disease occurrence. In this study, the method of traditional dose–response analysis was first reviewed, and then a combination of the theoretical basis of a "single-hit" model and an "infection-illness" model was performed by incorporating two critical factors: the "infective coefficient" and "infection duration." This allowed a dose–morbidity model to be built for direct use in DB calculations. In addition, human experimental data for typical intestinal pathogens were obtained for model validation, and the results indicated that the model was well fitted and could be further used for morbidity estimation. On this basis, a real case of a water reuse project was selected for model application, and the morbidity as well as the DB caused by intestinal pathogens during water reuse was evaluated. The results show that the DB attributed to Enteroviruses was significant, while that for enteric bacteria was negligible. Therefore, water treatment technology should be further improved to reduce the exposure risk of Enteroviruses. Since road flushing was identified as the major exposure route, human contact with reclaimed water through this pathway should be limited. The methodology proposed for model construction not only makes up for missing morbidity data during risk evaluation, but is also needed to quantify the maximum possible DB.
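For readers unfamiliar with the modeling ingredients, the sketch below shows the standard exponential ("single-hit") dose-response form and a back-of-the-envelope annual risk and disease-burden tally. It is not the fitted dose-morbidity model of the study; the infectivity parameter r, dose, illness probability, exposure frequency, population, and severity weight are all illustrative assumptions.

```python
# Back-of-the-envelope sketch (assumed values): single-hit dose-response and a DALY tally.
import math

def p_infection(dose: float, r: float) -> float:
    """Exponential single-hit model: each ingested organism independently infects with probability r."""
    return 1.0 - math.exp(-r * dose)

r = 1e-4                     # assumed per-organism infectivity
dose = 10.0                  # assumed organisms per exposure event (e.g., road-flushing spray)
p_ill_given_inf = 0.3        # assumed conditional probability of illness given infection
exposures_per_year = 52      # assumed weekly exposure
population = 100_000
daly_per_case = 0.002        # assumed severity x duration per illness case (DALYs)

p_ill_event = p_infection(dose, r) * p_ill_given_inf
annual_risk = 1.0 - (1.0 - p_ill_event) ** exposures_per_year
cases = annual_risk * population
print(round(annual_risk, 4), round(cases), "cases,", round(cases * daly_per_case, 1), "DALYs")
```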

11.
We show how a simple normal approximation to Erlang's delay formula can be used to analyze capacity and staffing problems in service systems that can be modeled as M/M/s queues. The number of servers, s, needed in an M/M/s queueing system to assure a probability of delay of at most p can be well approximated by ρ + z1-p√ρ, where z1-p is the (1 - p)th percentile of the standard normal distribution and ρ, the presented load on the system, is the ratio of λ, the customer arrival rate, to μ, the service rate. We examine the accuracy of this approximation over a set of parameters typical of service operations ranging from police patrol, through telemarketing, to automatic teller machines, and we demonstrate that it tends to slightly underestimate the number of servers actually needed to hit the delay probability target—adding one server to the number suggested by the above formula typically gives the exact result. More importantly, the structure of the approximation promotes operational insight by explicitly linking the number of servers with server utilization and the customer service level. Using a scenario based on an actual teleservicing operation, we show how operations managers and designers can quickly obtain insights about the trade-offs between system size, system utilization, and customer service. We argue that this little-used approach deserves a prominent role in the operations analyst's and operations manager's toolbags.
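The approximation is easy to put to work. The sketch below (not the authors' code) computes the square-root-staffing estimate and checks it against the exact Erlang-C delay probability; the arrival rate, service rate, and delay target are illustrative.

```python
# Minimal sketch (assumed parameters): normal approximation vs. exact Erlang-C for M/M/s staffing.
import math
from statistics import NormalDist

def erlang_c(s: int, rho: float) -> float:
    """Probability that an arrival must wait in an M/M/s queue with offered load rho = lam/mu (rho < s)."""
    inv_b = 1.0                          # 1 / Erlang-B with zero servers
    for k in range(1, s + 1):
        inv_b = 1.0 + inv_b * k / rho    # numerically stable recursion for 1 / Erlang-B
    b = 1.0 / inv_b
    return b / (1.0 - (rho / s) * (1.0 - b))

def approx_servers(rho: float, p: float) -> int:
    """Smallest integer staffing level suggested by the normal approximation rho + z_{1-p} * sqrt(rho)."""
    z = NormalDist().inv_cdf(1.0 - p)
    return math.ceil(rho + z * math.sqrt(rho))

lam, mu, p = 100.0, 5.0, 0.10            # illustrative arrival rate, service rate, delay target
rho = lam / mu                           # presented load = 20 Erlangs
s_hat = approx_servers(rho, p)
for s in (s_hat, s_hat + 1):             # the abstract notes one extra server usually suffices
    print(s, "servers -> delay probability", round(erlang_c(s, rho), 3))
```

For this illustrative load the suggested 26 servers just miss the 10% target, while 27 servers meet it, consistent with the "add one server" rule of thumb described above.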

12.
We study network games in which users choose routes in computerized networks susceptible to congestion. In the "unsplittable" condition, route choices are completely unregulated, players are symmetric, each player controls a single unit of flow and chooses a single origin–destination (OD) path. In the "splittable" condition, which is the main focus of this study, route choices are partly regulated, players are asymmetric, each player controls multiple units of flow and chooses multiple OD paths to distribute her fleet. In each condition, users choose routes in two types of network: a basic network with three parallel routes and an augmented network with five routes sharing joint links. We construct and subsequently test equilibrium solutions for each combination of condition and network type, and then propose a Markov revision protocol to account for the dynamics of play. In both conditions, route choice behavior approaches equilibrium and the Braess Paradox is clearly manifested.
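The Braess Paradox mentioned at the end is easy to reproduce numerically. The worked example below uses the classic textbook two-terminal network rather than the three- and five-route networks of the experiment; the link cost functions and total flow are the standard illustrative values.

```python
# Worked textbook instance of the Braess Paradox (illustrative, not the experiment's networks).
N = 4000.0                                  # total flow of drivers from origin to destination

# Basic network: route Up = congestible link (x/100 min) then a fixed 45-min link;
#                route Down = fixed 45-min link then a congestible link (x/100 min).
# By symmetry, equilibrium splits the flow evenly and both routes cost the same.
flow_per_route = N / 2
cost_basic = flow_per_route / 100 + 45      # 20 + 45 = 65 minutes
print("basic network equilibrium cost:", cost_basic)

# Augmented network: a zero-cost shortcut joins the two congestible links.
# The route congestible -> shortcut -> congestible now dominates (x/100 <= 40 < 45),
# so in equilibrium every driver uses it.
cost_augmented = N / 100 + 0 + N / 100      # 40 + 0 + 40 = 80 minutes: worse for everyone
print("augmented network equilibrium cost:", cost_augmented)
```

Adding the free shortcut raises every driver's equilibrium travel time from 65 to 80 minutes, which is the pattern the experiment looks for in subjects' route choices.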

13.
Conference organizers often face the following problem: Given a collection of submitted papers, select a subset to be presented at the conference. The bulk of the work often amounts to assembling a pool of reviewers and then sending each submitted paper to several reviewers. We present in this paper a technique for finding a good assignment of papers to reviewers. An important feature of the solution we find is that each paper is sent to at least one reviewer who is "as expert as possible" for that paper. A major component of the problem is modeled as a bottleneck version of a capacitated transshipment problem.
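A heavily simplified sketch of the bottleneck idea (not the authors' capacitated transshipment model) is shown below: each paper needs one reviewer, each reviewer has a fixed capacity, and we look for the largest expertise threshold at which every paper can still be assigned, testing feasibility with a bipartite matching. The scores and capacities are made-up data.

```python
# Simplified sketch (made-up data): maximize the minimum expertise of each paper's reviewer.

def feasible(scores, caps, threshold):
    """Can every paper get a reviewer scoring >= threshold without exceeding capacities?"""
    # Expand each reviewer into `cap` identical slots and run Kuhn's augmenting-path matching.
    slots = [j for j, c in enumerate(caps) for _ in range(c)]
    match = [-1] * len(slots)                      # slot index -> assigned paper (or -1)

    def try_assign(paper, seen):
        for s, reviewer in enumerate(slots):
            if scores[paper][reviewer] >= threshold and s not in seen:
                seen.add(s)
                if match[s] == -1 or try_assign(match[s], seen):
                    match[s] = paper
                    return True
        return False

    return all(try_assign(p, set()) for p in range(len(scores)))

def best_bottleneck(scores, caps):
    """Largest expertise level t such that a feasible assignment exists at level >= t."""
    for t in sorted({v for row in scores for v in row}, reverse=True):
        if feasible(scores, caps, t):
            return t
    return None

scores = [[3, 7, 2],      # scores[p][r]: expertise of reviewer r for paper p (illustrative)
          [8, 1, 5],
          [4, 6, 6],
          [2, 9, 3]]
caps = [2, 2, 1]          # maximum number of papers per reviewer
print("best guaranteed expertise level:", best_bottleneck(scores, caps))
```

For this toy data the answer is 6: every paper can be covered by a reviewer scoring at least 6 without exceeding any capacity, while a threshold of 7 is infeasible.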

14.
Samuelson stated in 1967 that “the beauty about social insurance is that ... everyone who reaches retirement age is given benefit privileges that far exceed anything he has paid in”. Such an optimistic belief seems to have been widely shared in Italy, where until the beginning of the reform process in 1992 social security could be described as a continuous succession of highly generous and diversified promises of payment made by the state to the different categories of workers on the basis of salary earned in the final stage of working life. The pension reform introduced by the Dini government in 1995 led to the adoption of a contribution-based method of calculation, which meant a return to the forgotten “golden rule” that the financial equilibrium of pay-as-you-go systems is ensured only if the implicit yield is equal to the rate of growth of the taxable basis of social security contributions. Equilibrium would thus be safeguarded, restoring itself automatically after any accidental disruption caused by demographic or economic upheaval, and operating regardless of the capacity and will of governments and of the majorities supporting them. The great efforts made to build up sufficient consensus with respect to such radical modifications of principle were, however, accompanied by a marked caution in bringing the system into full effect. This has left the country with the problem of accelerating transition to the new mechanism of calculating contributions, applied initially only to the newly employed and pro rata to workers with less than 18 years of contributions paid in, thereby making for a very long period of transition. In such a connection, a recent proposal has suggested that the state should try to induce workers to agree freely to a reduction in their accrued pension entitlements through the public system in return for a share in the process of privatization. If government were to repay the pension debt “below par”, this would allow for greater savings on future expenditure by using part of the revenues of privatization to pay off the pension debt in advance rather than by using these sums to pay off the national debt. More radical approaches aiming at cutting back social spending, would fail to take into account the risks involved in the collapse of public trust and of the structures that have hitherto guaranteed the cohesion of Italian society and the conditions for entrepreneurial commitment. On the other hand, an unbridled bottom-up proliferation of networks of social cohesion, supplementary voluntary bodies and non-profit initiatives may involve the risk of further arbitrary action being taken in the name of income redistribution. The social market requires bottom-up action on the part of associations, but also the guarantee of state-imposed rules that are equal for all parties and of a market that is free from the distortions of competition regulated from the top. A welfare state that has too often disguised the redistribution of resources in non-transparent forms must be replaced by a transparent welfare system effecting an explicit redistribution of resources and allowing a suitably regulated market to operate without indulging continually in further forms of “correction”. This calls for the introduction of a microchip “citizen card”, able to offer characteristics both of uniformity and of fine-tuning in terms of specific conditions of age, income, assets, education, etc., so as to permit forms of selection and/or cost sharing where desirable. 
Some of the rights to welfare services incorporated in the "citizen card" could in fact be assigned in monetary form but restricted to specific uses. Such "social money", conveniently based on modern technological transaction structures, could become the money of the state sector, the private sector, and the third sector of non-profit organizations and associations, enabling all parties to respond to the objective demand expressed by citizens in conditions of competition that are free of supply-side distortion.

15.
Xia Yu, Guo Fengjun, Wei Mingxia, Fang Lei. 《管理学报》 (Chinese Journal of Management), 2022, 19(1): 119-128
Taking online comments on the "Ant Financial" (蚂蚁金服) incident as a sample, this study draws on life-cycle theory to divide the comments into developmental stages, uses word clouds and semantic networks for text-feature visualization and association analysis, and applies an LDA topic model together with semantic sentiment analysis to examine the evolution and characteristics of comment topics at each stage, thereby uncovering the incident's implications for Internet finance regulation. The results show that the comment topics differ in emphasis across stages but become progressively more detailed and in-depth overall; the topics of "platform monopoly" and "data protection" are the most prominent; the widening range of regulatory participants becomes a major concern; and the sentiment of comments changes considerably during the outbreak and subsiding stages. Accordingly, Internet finance regulation should emphasize anti-monopoly measures for platforms, protection of user data, and multi-party collaborative supervision, and should focus on guiding Internet finance public opinion during the outbreak and subsiding stages.

16.
We consider a group of strategic agents who must each repeatedly take one of two possible actions. They learn which of the two actions is preferable from initial private signals and by observing the actions of their neighbors in a social network. We show that the question of whether or not the agents learn efficiently depends on the topology of the social network. In particular, we identify a geometric "egalitarianism" condition on the social network that guarantees learning in infinite networks, or learning with high probability in large finite networks, in any equilibrium. We also give examples of nonegalitarian networks with equilibria in which learning fails.

17.
Once real estate investors' "chasing rises and selling on falls" behavior spreads on a large scale, it can trigger severe price deviations and even turbulence in the real estate market. From the interdisciplinary perspective of communication studies and behavioral finance, this paper builds a network diffusion model of real estate investors' "chasing rises and selling on falls" behavior based on complex network theory, and then uses simulation to analyze the evolutionary characteristics of the diffusion of this behavior under the interaction of online public opinion and investors' behavioral preferences. The results show that the more investors tend to form connections at random, the more rapidly the behavior is likely to spread; online public opinion plays the dominant role in the diffusion process, while also exerting a strong "reinforcing effect" or "suppressing effect" on investors' behavioral preferences; and when media authority is sufficiently high or netizen sentiment is sufficiently optimistic, jointly adjusting online public opinion and investors' behavioral preferences can effectively prevent and limit the spread of "chasing rises and selling on falls" behavior.

18.
We consider the problem of managing demand risk in tactical supply chain planning for a particular global consumer electronics company. The company follows a deterministic replenishment-and-planning process despite considerable demand uncertainty. As a possible way to formally address uncertainty, we provide two risk measures, "demand-at-risk" (DaR) and "inventory-at-risk" (IaR), and two linear programming models to help manage demand uncertainty. The first model is deterministic and can be used to allocate the replenishment schedule from the plants among the customers as per the existing process. The other model is stochastic and can be used to determine the "ideal" replenishment request from the plants under demand uncertainty. The gap between the output of the two models as regards requested replenishment and the values of the risk measures can be used by the company to reallocate capacity among different products and to thus manage demand/inventory risk.
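As a loose illustration of what "at-risk" style measures can look like (the paper's exact DaR and IaR definitions and its LP models are not reproduced here), the sketch below simulates demand scenarios and reports an upper demand quantile alongside a quantile of leftover inventory for a fixed, hypothetical replenishment quantity Q; all numbers are made up.

```python
# Loose illustration (made-up data): quantile-style demand and inventory risk measures.
import random

random.seed(0)
scenarios = [max(0.0, random.gauss(100, 20)) for _ in range(10_000)]  # assumed weekly demand draws
Q = 110.0                                                             # hypothetical planned replenishment

def quantile(values, q):
    ordered = sorted(values)
    return ordered[int(q * (len(ordered) - 1))]

demand_at_risk = quantile(scenarios, 0.95)              # demand level exceeded only 5% of the time
leftover = [max(Q - d, 0.0) for d in scenarios]         # end-of-period inventory per scenario
inventory_at_risk = quantile(leftover, 0.95)            # inventory tied up in the worst 5% of cases
print(round(demand_at_risk, 1), round(inventory_at_risk, 1))
```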

19.
Observing that patients with longer appointment delays tend to have higher no-show rates, many providers place a limit on how far into the future an appointment can be scheduled. This article studies how the choice of appointment scheduling window affects a provider's operational efficiency. We use a single server queue to model the registered appointments in a provider's work schedule, and the capacity of the queue serves as a proxy for the size of the appointment window. The provider chooses a common appointment window for all patients to maximize her long-run average net reward, which depends on the rewards collected from patients served and the "penalty" paid for those who cannot be scheduled. Using a stylized M/M/1/K queueing model, we provide an analytical characterization of the optimal appointment queue capacity K, and study how it should be adjusted in response to changes in other model parameters. In particular, we find that simply increasing the appointment window could be counterproductive when patients become more likely to show up. Patient sensitivity to incremental delays, rather than the magnitudes of no-show probabilities, plays a more important role in determining the optimal appointment window. Via extensive numerical experiments, we confirm that our analytical results obtained under the M/M/1/K model continue to hold in more realistic settings. Our numerical study also reveals substantial efficiency gains resulting from adopting an optimal appointment scheduling window when the provider has no other operational levers available to deal with patient no-shows. However, when the provider can adjust panel size and overbooking level, limiting the appointment window serves more as a substitute strategy than a complement.
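A stylized version of the trade-off can be computed directly from the M/M/1/K stationary distribution. In the sketch below, the show-up probability decays geometrically with the backlog a patient books behind, a no-show wastes a slot at cost w, and a rejected patient costs c; these functional forms and all parameter values are illustrative assumptions, not the paper's calibrated model.

```python
# Stylized sketch (assumed model and parameters): picking the appointment window K for an M/M/1/K queue.

def stationary_probs(lam, mu, K):
    """Stationary distribution pi_0..pi_K of an M/M/1/K queue."""
    rho = lam / mu
    weights = [rho ** n for n in range(K + 1)]
    total = sum(weights)
    return [x / total for x in weights]

def net_reward(lam, mu, K, r, w, c, q):
    """Long-run reward rate: served show-ups earn r, no-shows waste a slot (cost w), rejections cost c."""
    pi = stationary_probs(lam, mu, K)
    value = 0.0
    for n in range(K):                    # a patient admitted behind n others
        show = q ** n                     # longer backlog -> lower show-up probability (assumed)
        value += lam * pi[n] * (r * show - w * (1.0 - show))
    value -= c * lam * pi[K]              # patients who cannot be scheduled at all
    return value

lam, mu = 9.5, 10.0                       # illustrative request and service rates
r, w, c, q = 1.0, 1.0, 0.25, 0.9          # illustrative economics and delay sensitivity
best_K = max(range(1, 61), key=lambda K: net_reward(lam, mu, K, r, w, c, q))
print("optimal appointment window:", best_K)
```

Because patients booked behind a long backlog mostly become costly no-shows, the net reward eventually declines in K, so the search returns a small interior optimal window rather than "the larger the better," echoing the abstract's warning that simply widening the window can be counterproductive.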

20.
This work considers the value of the flexibility offered by production facilities that can easily be configured to produce new products. We focus on technical uncertainty as the driver of this value, while prior works focused only on demand uncertainty. Specifically, we evaluate the use of process flexibility in the context of risky new product development in the pharmaceutical industry. Flexibility has value in this setting due to the time required to build dedicated capacity, the finite duration of patent protection, and the probability that the new product will not reach the market due to technical or regulatory reasons. Having flexible capacity generates real options, which enables firms to delay the decision about constructing product-specific capacity until the technical uncertainty is resolved. In addition, initiating production in a flexible facility can enable the firm to optimize production processes in dedicated facilities. The stochastic dynamic optimization problem is formulated to analyze the optimal capacity and allocation decisions for a flexible facility, using data from existing literature. A solution to this problem is obtained using linear programming. The result of this analysis shows both the value of flexible capacity and the optimal capacity allocation. Due to the substantial costs involved with flexibility in this context, the optimal level of flexible capacity is relatively small, suggesting products be produced for only short periods before initiating construction of dedicated facilities.
