Similar Documents
20 similar documents found (search time: 15 ms)
1.
In procurement auctions, the object for sale is a contract, bidders are suppliers, and the bid taker is a buyer. The suppliers bidding for the contract are usually the current supplier (the incumbent) and a group of potential new suppliers (the entrants). As the buyer has an ongoing relationship with the incumbent, he needs to adjust the bids of the entrants to include non‐price attributes, such as switching costs. The buyer can run a scoring auction, in which suppliers compete on the adjusted bids or scores, or he can run a buyer‐determined auction, in which suppliers compete on price, and the buyer adjusts a certain number of the bids with the non‐price attributes after the auction to determine the winner. I find that, unless the incumbent has a significant cost advantage over the entrants, the scoring auction yields a lower average cost for the buyer when the non‐price attributes are available. If the non‐price attributes are difficult or expensive to obtain, the buyer could run a buyer‐determined auction adjusting only the lowest price bid.

2.
The paper presents a survey of current industry practices in designing and running auctions as part of e‐sourcing events. We report our findings from numerous interviews with auction makers in leading e‐sourcing application vendors. The differences between auction theory and auction practice pose a number of interesting and important research questions for the Operations Management community; we conclude with a discussion of lessons learned and open research questions.

3.
Motivated by the enormous growth of keyword advertising, this paper explores the design of performance‐based unit‐price contract auctions, in which bidders bid their unit prices and the winner is chosen based on both their bids and performance levels. The previous literature on unit‐price contract auctions usually considers a static case where bidders' performance levels are fixed. This paper studies a dynamic setting in which bidders with a low performance level can improve their performance at a certain cost. We examine the effect of the performance‐based allocation on overall bidder performance, auction efficiency, and the auctioneer's revenue, and derive the revenue‐maximizing and efficient policies accordingly. Moreover, the possible upgrade in bidders' performance level gives the auctioneer an incentive to modify the auction rules over time, as is confirmed by the practice of Yahoo! and Google. We thus compare the auctioneer's revenue‐maximizing policies when she is fully committed to the auction rule and when she is not, and show that the auctioneer should give less preferential treatment to low‐performance bidders when she is fully committed.
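A performance‐based allocation of the kind this abstract describes can be illustrated with a minimal sketch. The scoring rule below (rank bidders by bid × performance, in the spirit of keyword‐ad rankings) and the function name are illustrative assumptions, not the paper's own mechanism:

```python
def performance_weighted_winner(bids, performance):
    """Pick the winner of a performance-based auction.

    Illustrative allocation rule (an assumption, not the paper's):
    each bidder's score is bid * performance, and the contract is
    awarded to the bidder with the highest score.
    """
    scores = [b * q for b, q in zip(bids, performance)]
    winner = max(range(len(bids)), key=scores.__getitem__)
    return winner, scores[winner]
```

Under this rule a low bid can still win if the bidder's performance level is high enough, which is the incentive for performance upgrades that the paper studies.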

4.
We experimentally investigate the sensitivity of bidders demanding multiple units of a homogeneous commodity to the demand reduction incentives inherent in uniform price auctions. There is substantial demand reduction in both sealed bid and ascending price clock auctions with feedback regarding rivals' drop‐out prices. Although both auctions have the same normal form representation, bidding is much closer to equilibrium in the ascending price auctions. We explore the behavioral process underlying these differences along with dynamic Vickrey auctions designed to eliminate the inefficiencies resulting from demand reduction in the uniform price auctions.

5.
We study the monotonicity of the equilibrium bid with respect to the number of bidders n in affiliated private‐value models of first‐price sealed‐bid auctions and prove the existence of a large class of such models in which the equilibrium bid function is not increasing in n. We moreover decompose the effect of a change in n on the bid level into a competition effect and an affiliation effect. The latter suggests to the winner of the auction that competition is less intense than she had thought before the auction. Since the affiliation effect can occur in both private‐ and common‐value models, a negative relationship between the bid level and n does not allow one to distinguish between the two models and is also not necessarily (only) due to bidders taking account of the winner's curse.

6.
This study is the first to propose allocatively efficient multi‐attribute auctions for the procurement of multiple items. In the B2B e‐commerce logistics problem (ELP), the e‐commerce platform is the shipper generating a large number of online orders between product sellers and buyers, and third‐party logistics (3PL) providers are carriers that can deliver these online orders. This study focuses on the ELP with multiple attributes (ELP‐MA), the problem of matching the shipper's online orders and 3PL providers given that price and other attributes are jointly evaluated. We develop a one‐sided Vickrey–Clarke–Groves (O‐VCG) auction for the ELP‐MA. The O‐VCG auction leads to incentive compatibility (on the sell side), allocative efficiency, budget balance, and individual rationality. We next introduce the concept of universally unsatisfied set to construct a primal‐dual algorithm, also called the primal‐dual Vickrey (PDV) auction. We prove that the O‐VCG auction can be viewed as a single‐attribute multi‐unit forward Vickrey (SA‐MFV) auction. Both PDV and SA‐MFV auctions realize VCG payments and truthful bidding for general valuations. This result reveals the underlying link not only between single‐attribute and multi‐attribute auctions, but also between static and dynamic auctions in a multi‐attribute setting.
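The VCG payments at the heart of this abstract follow a general recipe: each winner pays the externality it imposes on the others, i.e., (best welfare achievable without the winner) minus (the others' welfare in the chosen allocation). A minimal brute‐force sketch for a simple assignment setting (distinct items, one item per bidder; the function and the setting are illustrative, not the paper's O‐VCG mechanism):

```python
from itertools import permutations

import numpy as np

def vcg_assignment(values):
    """Brute-force VCG for assigning distinct items to bidders, one each.

    values[i][j] = bidder i's reported value for item j.  Returns the
    welfare-maximizing assignment {bidder: item} and each winner's VCG
    payment p_i = (best welfare without i) - (others' welfare with i).
    """
    values = np.asarray(values, dtype=float)
    n, m = values.shape

    def best_welfare(excluded=None):
        bidders = [i for i in range(n) if i != excluded]
        best, best_assign = -np.inf, None
        for perm in permutations(range(m), len(bidders)):
            w = sum(values[i, j] for i, j in zip(bidders, perm))
            if w > best:
                best, best_assign = w, dict(zip(bidders, perm))
        return best, best_assign

    total, assign = best_welfare()
    payments = {}
    for i in assign:
        welfare_without_i, _ = best_welfare(excluded=i)
        others_welfare = total - values[i, assign[i]]
        payments[i] = welfare_without_i - others_welfare
    return assign, payments
```

Truthful bidding is a dominant strategy under this payment rule, which is the property the O‐VCG and PDV auctions extend to the multi‐attribute setting.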

7.
This paper proposes a general approach and a computationally convenient estimation procedure for the structural analysis of auction data. Considering first‐price sealed‐bid auction models within the independent private value paradigm, we show that the underlying distribution of bidders' private values is identified from observed bids and the number of actual bidders without any parametric assumptions. Using the theory of minimax, we establish the best rate of uniform convergence at which the latent density of private values can be estimated nonparametrically from available data. We then propose a two‐step kernel‐based estimator that converges at the optimal rate.
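The two‐step idea can be sketched as follows: first estimate the bid distribution and density nonparametrically, then invert the symmetric first‐price first‐order condition to recover a pseudo private value for each bid. This is a minimal illustration of that logic (the function name, the empirical‐CDF/Gaussian‐kernel choices, and the bandwidth are assumptions, not the paper's exact estimator):

```python
import numpy as np

def gpv_pseudo_values(bids, n_bidders, bandwidth):
    """Two-step nonparametric recovery of private values from first-price bids.

    Step 1: estimate the bid CDF G (empirical CDF) and density g
    (Gaussian kernel).  Step 2: invert the first-order condition of the
    symmetric independent-private-value equilibrium,
        v = b + G(b) / ((n - 1) * g(b)),
    to obtain a pseudo private value for each observed bid.
    """
    bids = np.asarray(bids, dtype=float)
    m = len(bids)
    # Empirical CDF of the bids, evaluated at each bid
    G = np.searchsorted(np.sort(bids), bids, side="right") / m
    # Gaussian kernel density estimate of the bid density
    u = (bids[:, None] - bids[None, :]) / bandwidth
    g = np.exp(-0.5 * u**2).sum(axis=1) / (m * bandwidth * np.sqrt(2 * np.pi))
    return bids + G / ((n_bidders - 1) * g)
```

In practice the second step re‐estimates a density from the pseudo values, and observations near the support boundary are trimmed because kernel estimates of g are biased there.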

8.
In many financial markets, dealers have the advantage of observing the orders of their customers. To quantify the economic benefit that dealers derive from this advantage, we study detailed data from Canadian Treasury auctions, where dealers observe customer bids while preparing their own bids. In this setting, dealers can use information on customer bids to learn about (i) competition, that is, the distribution of competing bids in the auction, and (ii) fundamentals, that is, the ex post value of the security being auctioned. We devise formal hypothesis tests for both sources of informational advantage. In our data, we do not find evidence that dealers are learning about fundamentals. We find that the “information about competition” contained in customer bids accounts for 13–27% of dealers' expected profits.

9.
Customer service is a key component of a firm's value proposition and a fundamental driver of differentiation and competitive advantage in nearly every industry. Moreover, the relentless coevolution of service opportunities with novel and more powerful information technologies has made this area exciting for academic researchers who can contribute to shaping the design and management of future customer service systems. We engage in interdisciplinary research—across information systems, marketing, and computer science—in order to contribute to the service design and service management literature. Grounded in the design‐science perspective, our study leverages marketing theory on the service‐dominant logic and recent findings pertaining to the evolution of customer service systems. Our theorizing culminates with the articulation of four design principles. These design principles underlie the emerging class of customer service systems that, we believe, will enable firms to better compete in an environment characterized by an increase in customer centricity and in customers' ability to self‐serve and dynamically assemble the components of solutions that fit their needs. In this environment, customers retain control over their transactional data, as well as the timing and mode of their interactions with firms, as they increasingly gravitate toward integrated complete customer solutions rather than single products or services. Guided by these design principles, we iterated through, and evaluated, two instantiations of the class of systems we propose, before outlining implications and directions for further cross‐disciplinary scholarly research.

10.
Risk Analysis, 2018, 38(2): 410–424
This article proposes a rigorous mathematical approach, named a reliability‐based capability approach (RCA), to quantify the societal impact of a hazard. The starting point of the RCA is a capability approach in which capabilities refer to the genuine opportunities open to individuals to achieve valuable doings and beings (such as being mobile and being sheltered) called functionings. Capabilities depend on what individuals have and what they can do with what they have. The article develops probabilistic predictive models that relate the value of each functioning to a set of easily predictable or measurable quantities (regressors) in the aftermath of a hazard. The predicted values of selected functionings for an individual collectively determine the impact of a hazard on his/her state of well‐being. The proposed RCA integrates the predictive models of functionings into a system reliability problem to determine the probability that the state of well‐being is acceptable, tolerable, or intolerable. Importance measures are defined to quantify the contribution of each functioning to the state of well‐being. The information from the importance measures can inform decisions on optimal allocation of limited resources for risk mitigation and management.
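The system‐reliability step described above can be illustrated with a minimal Monte Carlo sketch: given samples of predicted functioning values, classify each sample into one of the three well‐being states using per‐functioning thresholds. The classification rule and thresholds below are assumptions for illustration, not the article's own models:

```python
import numpy as np

def wellbeing_probabilities(functionings, acceptable, tolerable):
    """Classify samples of functioning values into well-being states.

    functionings: (n_samples, n_functionings) array of predicted values.
    acceptable / tolerable: per-functioning thresholds (acceptable >= tolerable).
    A sample is 'acceptable' if every functioning meets its acceptable
    threshold, 'intolerable' if any functioning falls below its tolerable
    threshold, and 'tolerable' otherwise.  Returns the estimated
    probability of each state.
    """
    f = np.asarray(functionings, dtype=float)
    acc = (f >= np.asarray(acceptable)).all(axis=1)     # series-system view
    intol = (f < np.asarray(tolerable)).any(axis=1)     # weakest-link failure
    tol = ~acc & ~intol
    return {"acceptable": acc.mean(),
            "tolerable": tol.mean(),
            "intolerable": intol.mean()}
```

In a full RCA the samples would come from the fitted predictive models of functionings, and importance measures would be computed by perturbing one functioning at a time.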

11.
We study European banks' demand for short‐term funds (liquidity) during the summer 2007 subprime market crisis. We use bidding data from the European Central Bank's auctions for one‐week loans, their main channel of monetary policy implementation. Our analysis provides a high‐frequency, disaggregated perspective on the 2007 crisis, which was previously studied through comparisons of collateralized and uncollateralized interbank money market rates which do not capture the heterogeneous impact of the crisis on individual banks. Through a model of bidding, we show that banks' bids reflect their cost of obtaining short‐term funds elsewhere (e.g., in the interbank market) as well as a strategic response to other bidders. The strategic response is empirically important: while a naïve interpretation of the raw bidding data may suggest that virtually all banks suffered an increase in the cost of short‐term funding, we find that, for about one third of the banks, the change in bidding behavior was simply a strategic response. We also find considerable heterogeneity in the short‐term funding costs among banks: for over one third of the bidders, funding costs increased by more than 20 basis points, and funding costs vary widely with respect to the country‐of‐origin. The funding costs we estimate using bidding data are also predictive of market‐ and accounting‐based measures of bank performance, reinforcing the usefulness of “revealed preference” information contained in bids.

12.
Quantitative microbial risk assessment (QMRA) is widely accepted for characterizing the microbial risks associated with food, water, and wastewater. Single‐hit dose‐response models are the most commonly used dose‐response models in QMRA. Denoting by PI(d) the probability of infection at a given mean dose d, a three‐parameter generalized QMRA beta‐Poisson dose‐response model, PI(d | α, β, ρ), is proposed in which the minimum number of organisms required for causing infection, Kmin, is not fixed, but a random variable following a geometric distribution with parameter ρ. The single‐hit beta‐Poisson model, PI(d | α, β), is a special case of the generalized model with Kmin = 1 (which implies ρ = 1). The generalized beta‐Poisson model is based on a conceptual model with greater detail in the dose‐response mechanism. Since a maximum likelihood solution is not easily available, a likelihood‐free approximate Bayesian computation (ABC) algorithm is employed for parameter estimation. By fitting the generalized model to four experimental data sets from the literature, this study reveals that the posterior median estimates produced fall short of meeting the required condition of ρ = 1 for the single‐hit assumption. However, for three of the four data sets, the generalized model did not improve goodness of fit. These combined results imply that, at least in some cases, a single‐hit assumption for characterizing the dose‐response process may not be appropriate, but that the more complex models may be difficult to support, especially if the sample size is small. The three‐parameter generalized model makes it possible to investigate the mechanism of a dose‐response process in greater detail than is possible under a single‐hit model.
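For reference, the standard single‐hit beta‐Poisson approximation that the generalized model nests takes the familiar closed form below (this is the textbook approximation, not the article's three‐parameter model, whose exact formula is given in the paper):

```python
import numpy as np

def beta_poisson(dose, alpha, beta):
    """Single-hit beta-Poisson approximation of infection probability.

    P(infection | mean dose d) = 1 - (1 + d / beta) ** (-alpha).
    """
    dose = np.asarray(dose, dtype=float)
    return 1.0 - (1.0 + dose / beta) ** (-alpha)
```

A useful check: the median infectious dose N50 = β(2^(1/α) − 1) gives an infection probability of exactly 0.5, since 1 − (1 + (N50/β))^(−α) = 1 − 2^(−1).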

13.
14.
The bold lines that have separated the application of specific production planning and control techniques to specific production systems are being blurred by continuous advances in production technologies and innovative operational procedures. Continuous progress in information technologies has replaced oral communication between dispatchers and production units with electronic communication between production planners and those units. Current production literature alludes to the idea that, collectively, these advances have paved the way for applying Just‐In‐Time (JIT) production concepts, originally developed for mass production systems, in intermittent production systems, but it stops short of examining that possibility. This article presents a modification to JIT procedures to make them more suitable for jumbled‐flow shops. It suggests providing real‐time information about net‐requirements for each product to each work center operator for setting production priorities at each work center. Simulation experiments conducted for this study show that using Net‐Requirements in JIT (NERJIT) reduces customer wait time by 45–60% while reducing inventory slightly. The analysis of work centers’ input and output stock‐point inventories shows that using the information about net‐requirements results in production of items that are in current demand. NERJIT results in smaller input stock‐point inventory and availability of products with higher priority in the output stock‐points of work centers.

15.
This paper extends the long‐term factorization of the stochastic discount factor introduced and studied by Alvarez and Jermann (2005) in discrete‐time ergodic environments and by Hansen and Scheinkman (2009) and Hansen (2012) in Markovian environments to general semimartingale environments. The transitory component discounts at the stochastic rate of return on the long bond and is factorized into discounting at the long‐term yield and a positive semimartingale that extends the principal eigenfunction of Hansen and Scheinkman (2009) to the semimartingale setting. The permanent component is a martingale that accomplishes a change of probabilities to the long forward measure, the limit of T‐forward measures. The change of probabilities from the data‐generating to the long forward measure absorbs the long‐term risk‐return trade‐off and interprets the latter as the long‐term risk‐neutral measure.

16.
We consider a multi‐stage inventory system with stochastic demand and processing capacity constraints at each stage, for both finite‐horizon and infinite‐horizon, discounted‐cost settings. For a class of such systems characterized by having the smallest capacity at the most downstream stage and system utilization above a certain threshold, we identify the structure of the optimal policy, which represents a novel variation of the order‐up‐to policy. We find the explicit functional form of the optimal order‐up‐to levels, and show that they depend (only) on upstream echelon inventories. We establish that, above the threshold utilization, this optimal policy achieves the decomposition of the multidimensional objective cost function for the system into a sum of single‐dimensional convex functions. This decomposition eliminates the curse of dimensionality and allows us to numerically solve the problem. We provide a fast algorithm to determine a (tight) upper bound on this threshold utilization for capacity‐constrained inventory problems with an arbitrary number of stages. We make use of this algorithm to quantify upper bounds on the threshold utilization for three‐, four‐, and five‐stage capacitated systems over a range of model parameters, and discuss insights that emerge.

17.
We create an analytical structure that reveals the long‐run risk‐return relationship for nonlinear continuous‐time Markov environments. We do so by studying an eigenvalue problem associated with a positive eigenfunction for a conveniently chosen family of valuation operators. The members of this family are indexed by the elapsed time between payoff and valuation dates, and they are necessarily related via a mathematical structure called a semigroup. We represent the semigroup using a positive process with three components: an exponential term constructed from the eigenvalue, a martingale, and a transient eigenfunction term. The eigenvalue encodes the risk adjustment, the martingale alters the probability measure to capture long‐run approximation, and the eigenfunction gives the long‐run dependence on the Markov state. We discuss sufficient conditions for the existence and uniqueness of the relevant eigenvalue and eigenfunction. By showing how changes in the stochastic growth components of cash flows induce changes in the corresponding eigenvalues and eigenfunctions, we reveal a long‐run risk‐return trade‐off.
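The three‐component representation described above can be written compactly. In the style of Hansen and Scheinkman's eigenfunction approach (the symbols ρ, φ, and the tilde measure here are illustrative notation, not necessarily the article's own):

```latex
% Principal eigenvalue problem for the valuation semigroup \mathcal{P}_t:
%   \mathcal{P}_t \phi = e^{\rho t} \phi .
% The semigroup then factors into the three components named above:
\mathcal{P}_t f(x) \;=\; e^{\rho t}\,\phi(x)\,
    \widetilde{\mathbb{E}}_x\!\left[\frac{f(X_t)}{\phi(X_t)}\right]
```

Here e^{ρt} is the exponential (risk‐adjustment) term built from the eigenvalue, φ is the transient eigenfunction term carrying the dependence on the Markov state, and the expectation 𝔼̃ is taken under the probability measure induced by the martingale component.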

18.
One of the important objectives of supply chain S&OP (Sales and Operations Planning) is the profitable alignment of customer demand with supply chain capabilities through the coordinated planning of sales, production, distribution, and procurement. In the make‐to‐order manufacturing context considered in this paper, sales plans cover both contract and spot sales, and procurement plans require the selection of supplier contracts. S&OP decisions also involve the allocation of capacity to support sales plans. This article studies the coordinated contract selection and capacity allocation problem, in a three‐tier manufacturing supply chain, with the objective of maximizing the manufacturer's profitability. Using a modeling approach based on stochastic programming with recourse, we show how these S&OP decisions can be made while taking into account economic, market, supply, and system uncertainties. The research is based on a real business case in the Oriented Strand Board (OSB) industry. The computational results show that the proposed approach provides realistic and robust solutions. For the case considered, the planning method elaborated yields significant performance improvements over the solutions obtained from the mixed integer programming model previously suggested for S&OP.

19.
When two parties have different prior beliefs about some future event, they can realize gains through speculative trade. Can these gains be realized when the parties' prior beliefs are not common knowledge? We examine a simple example in which two parties having heterogeneous prior beliefs, independently drawn from some distribution, bet on what future action one of them will choose. We define a notion of “constrained interim‐efficient” bets and ask whether they can be implemented in Bayesian equilibrium by some mechanism. Our main result establishes that as the costs of unilaterally manipulating the bet's outcome become more symmetric across states, implementation becomes easier. In particular, when these costs are equal in both states, implementation is possible for any distribution.

20.
This article proposes, develops, and illustrates the application of level‐k game theory to adversarial risk analysis. Level‐k reasoning, which assumes that players play strategically but have bounded rationality, is useful for operationalizing a Bayesian approach to adversarial risk analysis. It can be applied in a broad class of settings, including settings with asynchronous play and partial but incomplete revelation of early moves. Its computational and elicitation requirements are modest. We illustrate the approach with an application to a simple defend‐attack model in which the defender's countermeasures are revealed with a probability less than one to the attacker before he decides on how or whether to attack.
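The level‐k iteration itself is simple to operationalize: a level‐0 player randomizes uniformly, and a level‐k player best responds to a level‐(k−1) opponent. A minimal sketch on a generic two‐player payoff matrix (the function and the example matrices are illustrative assumptions, not the article's defend‐attack model, which also involves partial revelation of moves):

```python
import numpy as np

def level_k_actions(payoff_row, payoff_col, k):
    """Iterate level-k best responses in a two-player matrix game, k >= 1.

    Level-0 players randomize uniformly; a level-k player best responds
    to a level-(k-1) opponent.  payoff_row[i, j] is the row player's
    payoff and payoff_col[i, j] the column player's when row plays i
    and column plays j.  Returns the level-k pure actions (row, col).
    """
    n_rows, n_cols = payoff_row.shape
    row_belief = np.full(n_cols, 1.0 / n_cols)  # row's belief over column actions
    col_belief = np.full(n_rows, 1.0 / n_rows)  # column's belief over row actions
    row_action = col_action = None
    for _ in range(k):
        row_action = int(np.argmax(payoff_row @ row_belief))
        col_action = int(np.argmax(col_belief @ payoff_col))
        # The next level believes the opponent plays its level-k action.
        row_belief = np.eye(n_cols)[col_action]
        col_belief = np.eye(n_rows)[row_action]
    return row_action, col_action
```

In an adversarial‐risk‐analysis setting, the row player would be the defender, the column player the attacker, and the analyst would place a subjective distribution over the attacker's level k.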


