Similar Articles
20 similar articles found (search time: 15 ms)
1.
By maximizing the expected utility, we study the optimal allocation of policy limits and deductibles from the viewpoint of a policyholder, where the dependence structure of losses is unknown. In Cheung (2007) [K.C. Cheung, Optimal allocation of policy limits and deductibles, Insurance: Mathematics and Economics 41 (2007) 382-391], the author considered similar problems. He supposed that a policyholder was exposed to n random losses, and the losses there were general risks, i.e., the loss on each policy was just a random variable. In this paper, the model is extended in two directions. On one hand, we assume that the n policies covering the n losses are affected by random environments. For each policy, the loss under a fixed environment is characterized by a random variable, so the loss on each policy is a mixture of some fundamental random variables. On the other hand, loss frequencies, which are stochastic, are also considered. Therefore, the whole model is equipped with mixture risks and discount factors. Finally, we obtain the orderings of the optimal allocations of policy limits and deductibles. Our conclusions also extend the main results in Hua and Cheung (2008) [L. Hua, K.C. Cheung, Stochastic orders of scalar products with applications, Insurance: Mathematics and Economics 42 (2008) 865-872].

2.
In the literature, orderings of optimal allocations of policy limits and deductibles were established by maximizing the expected utility of wealth of the policyholder. In this paper, by applying the bivariate characterizations of stochastic ordering relations, we reconsider the same model and derive some new refined results on orderings of optimal allocations of policy limits and deductibles with respect to the family of distortion risk measures from the viewpoint of the policyholder. Both loss severities and loss frequencies are considered. Special attention is given to the optimization criteria of the family of distortion risk measures with concave distortions and with only increasing distortions. Most of the results presented in this paper can be applied to some particular distortion risk measures. The results complement and extend the main results in Cheung [Cheung, K.C., 2007. Optimal allocation of policy limits and deductibles. Insurance: Mathematics and Economics 41, 382-391] and Hua and Cheung [Hua, L., Cheung, K.C., 2008a. Stochastic orders of scalar products with applications. Insurance: Mathematics and Economics 42, 865-872].

3.
This paper has two parts. In the first, we apply the Heath–Jarrow–Morton (HJM) methodology to the modelling of longevity bond prices. The idea of using the HJM methodology is not new. We can cite Cairns et al. [Cairns A.J., Blake D., Dowd K., 2006. Pricing death: framework for the valuation and the securitization of mortality risk. Astin Bull. 36 (1), 79–120], Miltersen and Persson [Miltersen K.R., Persson S.A., 2005. Is mortality dead? Stochastic force of mortality determined by arbitrage? Working Paper, University of Bergen] and Bauer [Bauer D., 2006. An arbitrage-free family of longevity bonds. Working Paper, Ulm University]. Unfortunately, none of these papers properly defines the prices of the longevity bonds they are supposed to be studying. Accordingly, the main contribution of this section is to describe a coherent theoretical setting in which we can properly define these longevity bond prices. A second objective of this section is to describe a more realistic longevity bond market model than in previous papers. In particular, we introduce an additional effect of actual mortality on longevity bond prices, which does not appear in the literature. We also study multiple term structures of longevity bonds instead of the usual single term structure. In this framework, we derive a no-arbitrage condition for the longevity bond financial market. We also discuss the links between such HJM-based models and the intensity models for longevity bonds such as those of Dahl [Dahl M., 2004. Stochastic mortality in life insurance: Market reserves and mortality-linked insurance contracts. Insurance: Math. Econom. 35 (1), 113–136], Biffis [Biffis E., 2005. Affine processes for dynamic mortality and actuarial valuations. Insurance: Math. Econom. 37, 443–468], Luciano and Vigna [Luciano E., Vigna E., 2005. Non mean reverting affine processes for stochastic mortality. ICER working paper], Schrager [Schrager D.F., 2006. Affine stochastic mortality. Insurance: Math. Econom. 38, 81–97] and Hainaut and Devolder [Hainaut D., Devolder P., 2007. Mortality modelling with Lévy processes. Insurance: Math. Econom. (in press)], and suggest that the standard pricing formula of these intensity models could be extended to more general settings. In the second part of this paper, we study the asset allocation problem of pure endowment and annuity portfolios. In order to solve this problem, we study the “risk-minimizing” strategies of such portfolios, when some but not all longevity bonds are available for trading. In this way, we introduce different basis risks.

4.
In this paper, we provide a new insight to the previous work of Briys and de Varenne [E. Briys, F. de Varenne, Life insurance in a contingent claim framework: Pricing and regulatory implications, Geneva Papers on Risk and Insurance Theory 19 (1) (1994) 53–72], Grosen and Jørgensen [A. Grosen, P.L. Jørgensen, Life insurance liabilities at market value: An analysis of insolvency risk, bonus policy, and regulatory intervention rules in a barrier option framework, Journal of Risk and Insurance 69 (1) (2002) 63–91] and Chen and Suchanecki [A. Chen, M. Suchanecki, Default risk, bankruptcy procedures and the market value of life insurance liabilities, Insurance: Mathematics and Economics 40 (2007) 231–255]. We show that the particular risk management strategy followed by the insurance company can significantly change the risk exposure of the company, and that it should thus be taken into account by regulators. We first study how the regulator establishes regulation intervention levels in order to control for instance the default probability of the insurance company. This part of the analysis is based on a constant volatility. Given that the insurance company is informed of regulatory rules, we study how results can be significantly different when the insurance company follows a risk management strategy with non-constant volatilities. We thus highlight some limits of the prior literature and believe that the risk management strategy of the company should be taken into account in the estimation of the risk exposure as well as in that of the market value of liabilities.

5.
6.
In this paper we study the problem of simultaneous minimization of risks and maximization of the terminal value of expected fund assets in a stochastic defined benefit aggregated pension plan. The risks considered are the solvency risk, measured as the variance of the terminal fund level, and the contribution risk, in the form of a running cost associated with deviations from the evolution of the stochastic normal cost. The problem is formulated as a bi-objective stochastic mean–variance problem and solved with dynamic programming techniques. We find the efficient frontier and show that the optimal portfolio depends linearly on the supplementary cost of the fund, plus an additional term due to the random evolution of benefits.
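The mean–variance trade-off behind an efficient frontier can be illustrated with a toy one-period analogue (only a static sketch under assumed parameter names, not the paper's dynamic-programming solution): with one risky and one risk-free asset, the variance-minimising portfolio that attains a given expected-return target holds a risky weight proportional to the excess target, so the frontier is a straight line in (standard deviation, mean) space.

```python
def efficient_point(mu_risky, rf, sigma, target):
    """One-period mean-variance with a risk-free asset: the
    variance-minimising portfolio with expected return `target`
    holds w = (target - rf) / (mu_risky - rf) in the risky asset,
    so its standard deviation |w| * sigma is linear in the target."""
    w = (target - rf) / (mu_risky - rf)
    return w, abs(w) * sigma

# Trace the frontier for a few expected-return targets
# (illustrative numbers: risky mean 8%, risk-free 2%, vol 20%):
frontier = [efficient_point(0.08, 0.02, 0.20, t) for t in (0.03, 0.05, 0.08)]
```

Doubling the excess target doubles both the risky weight and the standard deviation, which is exactly the linear frontier shape.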

7.
We construct an algorithm which provides in finite steps the stable coalition structure(s) of tree-graph communication games and an allocation of the core: the restricted marginal contribution allocation. This paper has been presented at the St. Petersburg Institute for Economics and Mathematics (Russian Academy of Sciences), University of Santiago de Compostela (International Workshop on Game Theory), Universidad Autónoma de Barcelona, and Universidad de Sevilla. This research has been supported partially by: DGICYT PB94-1372 and UPV 035.321-HB146/96

8.
In this paper, we study the expected value of a discounted penalty function at ruin of the classical surplus process modified by the inclusion of interest on the surplus. The ‘penalty’ is simply a function of the surplus immediately prior to ruin and the deficit at ruin. An integral equation for the expected value is derived, while the exact solution is given when the initial surplus is zero. Dickson’s [Insurance: Mathematics and Economics 11 (1992) 191] formulae for the distribution of the surplus immediately prior to ruin in the classical surplus process are generalised to our modified surplus process.
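The modified surplus process described in item 8 can be sketched with a short Monte Carlo simulation (all parameter values are made up, and claim severities are taken exponential purely for illustration): between claims the surplus accrues interest at force delta, and ruin occurs when a claim pushes the surplus below zero.

```python
import math
import random

def finite_time_ruin_prob(u, c, delta, lam, mean_claim, horizon, n_paths, seed=42):
    """Monte Carlo estimate of the finite-time ruin probability for a
    compound-Poisson surplus process earning interest at force delta:
    between claims the surplus follows U' = delta*U + c, claims arrive
    at Poisson rate lam with exponential severities (illustrative choice)."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t, surplus = 0.0, u
        while True:
            wait = rng.expovariate(lam)          # time to the next claim
            if t + wait > horizon:
                break
            g = math.exp(delta * wait)           # accumulation factor over the wait
            surplus = surplus * g + c * (g - 1.0) / delta
            surplus -= rng.expovariate(1.0 / mean_claim)
            t += wait
            if surplus < 0.0:
                ruined += 1
                break
    return ruined / n_paths

p0 = finite_time_ruin_prob(u=0.0, c=1.5, delta=0.05, lam=1.0,
                           mean_claim=1.0, horizon=50.0, n_paths=2000)
p20 = finite_time_ruin_prob(u=20.0, c=1.5, delta=0.05, lam=1.0,
                            mean_claim=1.0, horizon=50.0, n_paths=2000)
```

As expected, the estimate with zero initial surplus is far larger than with a sizeable initial surplus, and interest on the surplus lowers both relative to the classical model.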

9.
In this paper we deal with contribution rate and asset allocation strategies in a pre-retirement accumulation phase. We consider a single cohort of workers and investigate a retirement plan of a defined benefit type in which an accumulated fund is converted into a life annuity. Due to the random evolution of a mortality intensity, the future price of an annuity, and as a result the liability of the fund, is uncertain. A manager has control over a contribution rate and an investment strategy and is concerned with covering the random claim. We consider two mean-variance optimization problems, which are quadratic control problems with an additional constraint on the expected value of the terminal surplus of the fund. These functional objectives can be related to the well-established financial theory of claim hedging. The financial market consists of a risk-free asset with a constant force of interest and a risky asset whose price is driven by a Lévy noise, whereas the evolution of a mortality intensity is described by a stochastic differential equation driven by a Brownian motion. Techniques from stochastic control theory are applied in order to find optimal strategies.

10.
We study the optimal asset allocation problem of a defined contribution (DC) pension fund during the pre-retirement accumulation phase. The pension fund manager is assumed to invest in a financial market consisting of one risk-free asset and one risky asset whose price process follows the Stein-Stein stochastic volatility model. Using stochastic optimal control methods, with the objective of maximizing the expected utility of the relative wealth of the pension account at retirement, we obtain the optimal investment strategies of the pension fund both in the unconstrained case and under a dynamic VaR (Value at Risk) constraint, and derive closed-form expressions for the corresponding optimal value functions. Finally, numerical examples verify the theoretical results and examine the sensitivity of the optimal investment strategy to the relevant parameters.

11.
This paper extends the model and analysis of Vandaele and Vanmaele [Insurance: Mathematics and Economics, 2008, 42: 1128–1137]. We assume that the parameters of the Lévy process which models the dynamics of the risky asset in the financial market depend on a finite-state Markov chain. The state of the Markov chain can be interpreted as the state of the economy. Under the regime-switching Lévy model, we obtain the locally risk-minimizing hedging strategies for some unit-linked life insurance products, including both the pure endowment policy and the term insurance contract.

12.
We present a semilocal convergence theorem for Newton’s method (NM) on spaces with a convergence structure. Using our new idea of recurrent functions, we provide a tighter analysis, with weaker hypotheses than before and with the same computational cost as for Argyros (1996, 1997, 1997, 2007) [1], [2], [3] and [5], Meyer (1984, 1987, 1992) [13], [14] and [15]. Numerical examples are provided for solving equations in cases not covered before.
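For readers unfamiliar with the method whose convergence is analysed in item 12, the standard Newton iteration (the textbook scheme, not the paper's convergence machinery) looks like this:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: iterate x_{k+1} = x_k - f(x_k)/f'(x_k)
    until |f(x_k)| falls below the tolerance."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / fprime(x)
    return x

# Solve x^2 - 2 = 0 starting from x0 = 1; the iterates converge
# quadratically to sqrt(2):
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Semilocal convergence theorems of the kind discussed above give conditions on f and x0 under which this iteration is guaranteed to converge.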

13.
In this paper we propose some moment matching pricing methods for European-style discrete arithmetic Asian basket options in a Black & Scholes framework. We generalize the approach of [M. Curran, Valuing Asian and portfolio options by conditioning on the geometric mean price, Management Science 40 (1994) 1705-1711] and of [G. Deelstra, J. Liinev, M. Vanmaele, Pricing of arithmetic basket options by conditioning, Insurance: Mathematics & Economics 34 (2004) 55-77] in several ways. We create a framework that allows for a whole class of conditioning random variables which are normally distributed. We moment match not only with a lognormal random variable but also with a log-extended-skew-normal random variable. We also improve the bounds of [G. Deelstra, I. Diallo, M. Vanmaele, Bounds for Asian basket options, Journal of Computational and Applied Mathematics 218 (2008) 215-228]. Numerical results are included, and on the basis of our numerical tests we explain which method we recommend depending on moneyness and time-to-maturity.
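The simplest member of the moment-matching family can be sketched concretely (restricted, for brevity, to a single underlying rather than a basket, and matching with a plain lognormal rather than a log-extended-skew-normal; all parameter values are illustrative): compute the first two moments of the discrete arithmetic average under Black-Scholes, fit a lognormal to them, and price the call in closed form against the fitted distribution.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def asian_call_mm(S0, K, r, sigma, T, n):
    """Price a discrete arithmetic-average Asian call by matching the
    first two moments of the average A = (1/n) * sum S(t_i), t_i = i*T/n,
    with a lognormal distribution (single-asset illustrative sketch)."""
    t = [T * (i + 1) / n for i in range(n)]
    # First moment: E[S(t_i)] = S0 * exp(r * t_i) under the risk-neutral measure.
    m1 = sum(S0 * math.exp(r * ti) for ti in t) / n
    # Second moment: E[S(t_i) S(t_j)] = S0^2 exp(r(t_i+t_j) + sigma^2 min(t_i,t_j)).
    m2 = sum(S0 ** 2 * math.exp(r * (ti + tj) + sigma ** 2 * min(ti, tj))
             for ti in t for tj in t) / (n * n)
    s2 = math.log(m2 / m1 ** 2)          # variance of the matched lognormal
    mu = math.log(m1) - 0.5 * s2
    s = math.sqrt(s2)
    d1 = (mu + s2 - math.log(K)) / s
    d2 = d1 - s
    # E[(A-K)+] for A ~ LN(mu, s^2), discounted back to time 0.
    return math.exp(-r * T) * (math.exp(mu + 0.5 * s2) * norm_cdf(d1)
                               - K * norm_cdf(d2))

p_atm = asian_call_mm(100.0, 100.0, 0.05, 0.2, 1.0, 12)  # monthly averaging
```

The log-extended-skew-normal match mentioned above refines this by also capturing skewness beyond the two matched moments.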

14.
This paper investigates an optimal investment strategy of a DC pension plan in a stochastic interest rate and stochastic volatility framework. We apply an affine model, including the Cox–Ingersoll–Ross (CIR) model and the Vasicek model, to characterize the interest rate, while the stock price is given by Heston’s stochastic volatility (SV) model. The pension manager can invest in cash, a bond and a stock in the financial market. Thus, the wealth of the pension fund is influenced by the financial risks in the market and the stochastic contribution from the fund participant. The goal of the fund manager is, taking the contribution rate into account, to maximize the expectation of the constant relative risk aversion (CRRA) utility of the terminal value of the pension fund over a guarantee which serves as an annuity after retirement. We first transform the problem into a single investment problem, then derive an explicit solution via the stochastic programming method. Finally, a numerical analysis is given to show the impact of the financial parameters on the optimal strategies.
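One ingredient of that framework, simulation of the CIR short rate, can be sketched with a full-truncation Euler scheme (parameter values are made up; the truncation keeps the square root well defined when the discretized rate dips below zero):

```python
import math
import random

def simulate_cir_terminal(r0, kappa, theta, sigma, T, steps, n_paths, seed=7):
    """Full-truncation Euler scheme for the CIR short rate
    dr = kappa*(theta - r) dt + sigma*sqrt(r) dW; returns the sample
    of terminal rates r_T over n_paths simulated paths."""
    rng = random.Random(seed)
    dt = T / steps
    terminal = []
    for _ in range(n_paths):
        r = r0
        for _ in range(steps):
            rp = max(r, 0.0)             # truncate in both drift and diffusion
            dw = rng.gauss(0.0, math.sqrt(dt))
            r = r + kappa * (theta - rp) * dt + sigma * math.sqrt(rp) * dw
        terminal.append(r)
    return terminal

# Starting at the long-run mean theta, E[r_T] stays at theta:
rates = simulate_cir_terminal(r0=0.05, kappa=2.0, theta=0.05,
                              sigma=0.1, T=1.0, steps=200, n_paths=500)
mean_rate = sum(rates) / len(rates)
```

Mean reversion toward theta at speed kappa is what makes the affine bond-pricing formulas of such models tractable.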

15.
In recent years, a large number of research papers and monographs on the analysis of hedge fund returns have been published. Typically, the authors of these studies implicitly or explicitly treat monthly returns of hedge funds as independent and identically distributed observations. A hedge fund index may justify that treatment, but the returns of an individual hedge fund do not: they behave autoregressively in some time periods. This stochastic behavior should be modeled as a regime-switching combination of two processes: an i.i.d. process and an autoregressive process. This paper first documents the autoregressiveness of hedge fund returns. We then introduce a statistical model for the returns of an individual hedge fund and, taking a retrospective view, perform several data analyses on individual hedge funds’ return data.
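The autoregressiveness at issue in item 15 can be checked with a simple least-squares AR(1) fit (a generic diagnostic on simulated data, not the paper's regime-switching estimator):

```python
import random

def ar1_coefficient(series):
    """Least-squares estimate of phi in r_t = c + phi * r_{t-1} + eps_t.
    Values near zero are consistent with i.i.d. returns; values well
    away from zero indicate autoregressive behaviour."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# Simulate an AR(1) series with phi = 0.6 and recover the coefficient:
rng = random.Random(3)
r, path = 0.0, []
for _ in range(5000):
    r = 0.6 * r + rng.gauss(0.0, 1.0)
    path.append(r)
phi_hat = ar1_coefficient(path)
```

Applying the same estimator over rolling windows of a fund's track record is one way to see the period-dependent switching between i.i.d.-like and autoregressive behaviour described above.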

16.
This paper presents a new asset allocation model based on the CVaR risk measure and transaction costs. Institutional investors manage their strategic asset mix over time to achieve favorable returns subject to various uncertainties, policy and legal constraints, and other requirements. One may use a multi-period portfolio optimization model in order to determine an optimal asset mix. Recently, an alternative stochastic programming model with simulated paths was proposed by Hibiki [N. Hibiki, A hybrid simulation/tree multi-period stochastic programming model for optimal asset allocation, in: H. Takahashi (Ed.), The Japanese Association of Financial Econometrics and Engineering, JAFFE Journal (2001) 89-119 (in Japanese); N. Hibiki, A hybrid simulation/tree stochastic optimization model for dynamic asset allocation, in: B. Scherer (Ed.), Asset and Liability Management Tools: A Handbook for Best Practice, Risk Books, 2003, pp. 269-294], which was called a hybrid model. However, transaction costs were not considered in that paper. In this paper, we improve Hibiki’s model in the following aspects: (1) the risk measure CVaR is introduced to control the wealth-loss risk while maximizing the expected utility; (2) typical market imperfections, such as short-sale constraints and proportional transaction costs, are considered simultaneously; (3) a genetic algorithm for solving the resulting model is discussed in detail. Numerical results show the suitability and feasibility of our methodology.
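On simulated paths, the CVaR measure used as the risk control above has a simple empirical form (a generic scenario estimator, not the hybrid model itself): average the worst (1 - alpha) fraction of the loss scenarios.

```python
def empirical_cvar(losses, alpha=0.95):
    """Empirical CVaR (expected shortfall) at level alpha: the average
    of the worst (1 - alpha) fraction of the loss scenarios."""
    worst = sorted(losses, reverse=True)
    k = max(1, int(round(len(losses) * (1.0 - alpha))))
    return sum(worst[:k]) / k

# Toy scenario set: losses 1..100, so CVaR_95 averages {100, 99, 98, 97, 96}.
scenario_losses = [float(i) for i in range(1, 101)]
cvar95 = empirical_cvar(scenario_losses, alpha=0.95)
```

Because this estimator is a linear function of the sorted scenario losses, a CVaR constraint stays tractable inside scenario-based stochastic programs such as the one described above.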

17.
In this paper, we study stochastic orders of scalar products of random vectors. Based on the study of Ma [Ma, C., 2000. Convex orders for linear combinations of random variables. J. Statist. Plann. Inference 84, 11-25], we first obtain more general conditions under which linear combinations of random variables can be ordered in the increasing convex order. As an application of this result, we consider the scalar product of two random vectors which separates the severity effect and the frequency effect in the study of the optimal allocation of policy limits and deductibles. Finally, we obtain the ordering of the optimal allocation of policy limits and deductibles when the dependence structure of the losses is unknown. This application is a further study of Cheung [Cheung, K.C., 2007. Optimal allocation of policy limits and deductibles. Insurance: Math. Econom. 41, 382-391].

18.
Using stochastic modelling, we demonstrate that the best investment strategy for the accumulation phase of a defined contribution pension plan is one that limits the range of returns that are credited to the plan member's account. In particular, we show that with-profit accumulation programmes which make use of a smoothing fund to smooth out returns over time dominate unit-linked accumulation programmes. However, for the distribution phase, we show that it is hard in practice for an investment-linked distribution programme to beat the income and security provided by a standard annuity, although we again find that, by avoiding extremely poor outcomes, with-profit distribution programmes dominate unit-linked distribution programmes. Return smoothing by means of a smoothing fund is therefore a valuable feature of any long-term investment programme both during the accumulation and distribution phases.
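A smoothing-fund mechanism of the kind described in item 18 can be sketched as exponential smoothing of credited returns, with a reserve absorbing the gap between earned and credited returns (an illustrative rule with made-up parameters, not the paper's with-profit policy):

```python
import random

def smooth_credits(raw_returns, weight=0.3):
    """Credit an exponentially smoothed return each period; the smoothing
    fund (reserve) absorbs the difference between what the assets earned
    and what was credited to the member's account."""
    credited, reserve, out = 0.0, 0.0, []
    for r in raw_returns:
        credited = (1.0 - weight) * credited + weight * r
        reserve += r - credited        # the reserve takes up the shortfall/excess
        out.append(credited)
    return out, reserve

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# 240 months of toy i.i.d. returns; the credited series is far less volatile:
rng = random.Random(11)
raw = [rng.gauss(0.06, 0.15) for _ in range(240)]
smoothed, final_reserve = smooth_credits(raw)
```

The narrower range of credited returns is precisely the feature that lets with-profit programmes avoid the extremely poor outcomes of unit-linked ones.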

19.
We prove the weighted Strichartz estimates for the wave equation in even space dimensions with radial symmetry in space. Although the odd space dimensional cases have been treated in our previous paper [5], the lack of the Huygens principle prevents us from a similar treatment in even space dimensions. The proof is based on the two explicit representations of solutions due to Rammaha [11] and Takamura [14] and to Kubo-Kubota [6]. As in the odd space dimensional cases [5], we are also able to construct self-similar solutions to semilinear wave equations on the basis of the weighted Strichartz estimates.
Mathematics Subject Classification (2000): 35L05, 35B45, 35L70
COE fellow. Dedicated to Professor Mitsuru Ikawa on the occasion of his sixtieth birthday.

20.
Sometimes a complex stochastic decision system undertakes multiple tasks called events, and the decision-maker wishes to maximize the chance functions, which are defined as the probabilities of satisfying these events. Originally introduced by Liu and Iwamura [B. Liu, K. Iwamura, Modelling stochastic decision systems using dependent-chance programming, European Journal of Operational Research 101 (1997) 193–203], dependent-chance programming is aimed at maximizing some chance functions of events in an uncertain environment. In this work, we show that the original dependent-chance programming framework needs to be extended in order to capture an exact reliability measure for a given plan.


Copyright©北京勤云科技发展有限公司    京ICP备09084417号-23
