Similar Documents
20 similar documents found (search time: 125 ms)
1.
Since Suykens proposed the least squares support vector machine (LSSVM), a new statistical learning method, it has attracted wide attention, and its strong predictive performance has found broad application. This paper improves LSSVM with the self-organizing data-mining method GMDH (group method of data handling) to raise forecasting accuracy. GMDH is first used to select the effective input variables; these variables then serve as inputs to an LSSVM model for time-series forecasting, yielding GLSSVM, a hybrid model combining LSSVM and GMDH. An empirical study on exchange-rate time series shows that the hybrid model improves forecasting accuracy markedly.
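The LSSVM stage reduces to solving one linear system, so a minimal sketch is easy to give. The numpy code below is an illustrative reconstruction, not the paper's GLSSVM implementation: the GMDH input-selection stage is omitted, and the kernel width and regularization values are arbitrary choices for the toy data.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

class LSSVM:
    """Least squares SVM regression via its linear KKT system."""
    def __init__(self, gamma=100.0, sigma=1.0):
        self.gamma, self.sigma = gamma, sigma

    def fit(self, X, y):
        n = len(y)
        K = rbf_kernel(X, X, self.sigma)
        # Dual system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / self.gamma
        sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
        self.b, self.alpha, self.X = sol[0], sol[1:], X
        return self

    def predict(self, Xq):
        return rbf_kernel(Xq, self.X, self.sigma) @ self.alpha + self.b

# toy 1-D regression target
X = np.linspace(0, 2 * np.pi, 40)[:, None]
y = np.sin(X[:, 0])
model = LSSVM(gamma=100.0, sigma=0.5).fit(X, y)
err = np.max(np.abs(model.predict(X) - y))
```

In practice the GMDH stage would decide which lagged values enter the rows of `X` before the LSSVM is trained.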

2.
This paper studies the partial deviation sums of real-valued extended negatively dependent (END) uncertain variables. Building on the definition and properties of the expected value of uncertain variables, and using the real-valued negative dependence between the variables together with the properties of uncertain variables, a supremum property of the deviation sums is obtained and applied to risk theory, with good results.

3.
A nonlinear ensemble forecasting model based on ARIMA and LSSVM   (Cited: 1; self-citations: 0; citations by others: 1)
To address the difficulty of forecasting complex time series, and considering both their linear and nonlinear components, this paper proposes a nonlinear ensemble forecasting method based on ARIMA and the least squares support vector machine (LSSVM). First, an ARIMA model captures the linear trend of the time series and determines the input order for the LSSVM model; next, the series samples are reconstructed according to that order and the LSSVM model captures the nonlinear features; finally, an LSSVM-based nonlinear ensemble combines the two into a single forecast. Applied to forecasting China's GDP, the method achieves markedly higher accuracy than individual forecasting methods and other popular ensemble methods, confirming its effectiveness and feasibility.
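The two-stage idea, a linear model for the trend plus a nonlinear model for what the linear stage leaves behind, can be sketched as follows. This is a hedged stand-in: a plain AR(p) least squares fit replaces the ARIMA stage, kernel ridge regression replaces the LSSVM, and the series is synthetic.

```python
import numpy as np

def embed(x, p):
    # Reconstruct lagged samples: row t holds x[t-p..t-1], target is x[t].
    X = np.array([x[t - p:t] for t in range(p, len(x))])
    return X, x[p:]

# synthetic series: linear trend plus a non-sinusoidal periodic component
t = np.arange(200, dtype=float)
x = 0.05 * t + np.tanh(1.5 * np.sin(0.3 * t))

p = 4
X, y = embed(x, p)

# stage 1: linear AR(p) fit (stand-in for the ARIMA stage)
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
lin_pred = A @ coef
resid = y - lin_pred

# stage 2: kernel ridge on the residuals (stand-in for the LSSVM stage)
def rbf(P, Q, s=2.0):
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s * s))

K = rbf(X, X)
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(K)), resid)
hybrid_pred = lin_pred + K @ alpha

rmse_lin = np.sqrt(np.mean(resid ** 2))
rmse_hyb = np.sqrt(np.mean((y - hybrid_pred) ** 2))
```

The residual stage can only shrink the in-sample error here, which mirrors why the ensemble outperforms the linear model alone on series with a genuine nonlinear component.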

4.
Forecasting China's carbon market prices with EMD-GA-BP and EMD-PSO-LSSVM   (Cited: 1; self-citations: 0; citations by others: 1)
Given the high volatility of carbon trading prices and the complexity of their interactions, this paper seeks optimal models for long- and short-term carbon price forecasting. Considering the trend and cyclical features of carbon price fluctuations, short- and long-term forecasts of China's carbon market prices are produced with the empirical mode decomposition algorithm (EMD), a genetic algorithm (GA) optimized BP neural network, a particle swarm optimization (PSO) tuned least squares support vector machine (LSSVM), and combined models built from them. In the empirical analysis, the macroeconomic factors affecting carbon prices and the carbon price time series itself serve as input variables to the combined models. The results show that in short-term forecasting the EMD-GA-BP model outperforms the GA-BP and PSO-LSSVM models, while in long-term forecasting the combined EMD-PSO-LSSVM model outperforms models that consider only the trend or only the cyclical component of carbon price fluctuations.

5.
Aviation spare parts are a key factor in keeping aviation equipment available for daily training and combat use. To address small sample sizes, numerous complex and changing influencing factors, and forecasts that deviate substantially from equipment readiness requirements, this paper builds a spare-parts forecasting model combining grey relational analysis (GRA), partial least squares (PLS), and the least squares support vector machine (LSSVM). Using spare-parts data collected for a certain unmanned aerial vehicle, grey relational analysis of the statistics extracts the factors related to spare-parts demand as training samples and determines the key factors; PLS then extracts features from these key factors, and the extracted features are fed into the LSSVM for model construction and analysis. Experiments verify the feasibility and applicability of the method, which can meet the practical needs of UAV spare-parts forecasting.

6.
This paper studies multi-attribute decision-making problems in which the attribute weights are completely unknown and both the attribute values and the preference values of the alternatives are linguistic variables. First, after analysing why deriving attribute weights from the deviations between attribute values and preference values, as done in the related literature, is unreasonable, a programming model for the attribute weights is built by minimizing the deviation between each alternative's comprehensive attribute value and its preference value. Second, another programming model is built by minimizing the deviation between each alternative's attribute values and the positive ideal point. Third, by minimizing the total combined deviation across all alternatives under every attribute, the two models are merged into a single programming model that reflects the decision maker's preference between the two kinds of information, yielding a combined method for linguistic multi-attribute decision making. Finally, an example illustrates the feasibility and effectiveness of the method.
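The first model, choosing weights to minimize the deviation between comprehensive attribute values and preference values, can be sketched as a small constrained optimization. The scores below are hypothetical numeric mappings of linguistic ratings, not data from the paper, and the simple sum-to-one constraint stands in for whatever normalization the paper uses.

```python
import numpy as np
from scipy.optimize import minimize

# rows = alternatives, columns = attributes (linguistic terms mapped to 0..8)
A = np.array([[7., 5., 6.],
              [4., 8., 5.],
              [6., 6., 7.],
              [5., 7., 4.]])
p = np.array([6., 6., 6., 5.])   # decision maker's preference values

def objective(w):
    # squared deviation between comprehensive values A @ w and preferences
    return np.sum((A @ w - p) ** 2)

n = A.shape[1]
res = minimize(objective, x0=np.full(n, 1.0 / n),
               method="SLSQP",
               bounds=[(0.0, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
w = res.x
scores = A @ w                   # comprehensive attribute values under w
```

The paper's combined model would add a second deviation term, distance to the positive ideal point, with a preference parameter trading the two objectives off.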

7.
This paper extends the linear mixed effects (LME) model to a linear mixed tensor model. A univariate linear mixed tensor model is established first and then generalized to the multivariate case. Using matrix vectorization together with derivatives of matrix functions and the Kronecker product, parameter estimators for the LME model, in particular for the variance parameter matrices, are derived; finally, the VLS parameter estimators of the linear mixed tensor model are given.
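The vectorization step rests on the identity vec(AXB) = (Bᵀ ⊗ A) vec(X), with the column-stacking vec used in matrix calculus. It is easy to verify numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 2))
B = rng.standard_normal((2, 5))

def vec(M):
    # column-stacking (column-major) vectorization
    return M.reshape(-1, order="F")

lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)   # (B^T kron A) vec(X)
gap = np.max(np.abs(lhs - rhs))
```

This identity is what lets the mixed tensor model's matrix-valued parameters be estimated with ordinary vector least squares machinery.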

8.
Because major events such as financial crises can introduce outliers into a time series, this paper proposes an LOF-SSA-LSSVM forecasting model based on the local outlier factor (LOF) and applies it to forecasting container throughput at Guangzhou Port. First, the original series is decomposed with additive X12 seasonal adjustment; the LOF algorithm then detects outliers in the resulting irregular component and locates the anomalous observations, which are corrected in the original seasonally adjusted series by interpolation or by least squares support vector machine (LSSVM) predictions. Adding the corrected seasonally adjusted series back to the seasonal factor series gives the new series to be forecast. In the forecasting stage, singular spectrum analysis (SSA) first decomposes and reconstructs this series to remove noise, and LSSVM then produces the forecast. Empirical results show that the LOF-SSA-LSSVM model achieves better forecasting accuracy than BP, ARIMA, and other models.
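The LOF step can be sketched directly in numpy. This is a simplified 1-D implementation applied to a synthetic irregular component with one injected crisis-style outlier, not the paper's port-throughput pipeline:

```python
import numpy as np

def lof_scores(x, k=5):
    """Local outlier factor of each point in a 1-D series."""
    x = np.asarray(x, float)
    n = len(x)
    d = np.abs(x[:, None] - x[None, :])          # pairwise distances
    idx = np.argsort(d, axis=1)[:, 1:k + 1]      # k nearest neighbours (skip self)
    kdist = d[np.arange(n), idx[:, -1]]          # k-distance of each point
    # reachability distances and local reachability density
    reach = np.maximum(d[np.arange(n)[:, None], idx], kdist[idx])
    lrd = 1.0 / (reach.mean(axis=1) + 1e-12)
    return lrd[idx].mean(axis=1) / lrd           # LOF = mean neighbour lrd / own lrd

rng = np.random.default_rng(1)
irregular = rng.normal(0, 1, 120)
irregular[60] = 12.0                             # injected outlier
scores = lof_scores(irregular, k=5)
worst = int(np.argmax(scores))                   # position flagged for correction
```

Scores near 1 indicate density comparable to the neighbours; the flagged position would then be replaced by an interpolated or model-predicted value, as in the paper.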

9.
In survey sampling, when the study variable and the auxiliary variables are nonlinearly related, traditional calibration estimation performs poorly, whereas model calibration estimators based on nonparametric regression handle this situation well. First, a superpopulation regression model describing the relationship between the study variable and the auxiliary variables is established; the local polynomial method of nonparametric regression provides fitted values of the model parameters, which are combined with calibration estimation to obtain a local polynomial model calibration estimator, together with formulas for its variance and variance estimator. The estimator is shown to possess desirable statistical properties, including asymptotic unbiasedness, consistency, and asymptotic normality. Simulation then shows that the estimator performs well when the study variable and the auxiliary variables are nonlinearly related. Finally, the estimator's applications in China's official government statistics are briefly introduced.
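A local polynomial (here degree-1, i.e. local linear) fit at a point x0 is just a kernel-weighted least squares problem. The sketch below illustrates the fitting idea only; the survey-calibration weighting of the actual estimator is omitted, and the data, kernel, and bandwidth are illustrative choices.

```python
import numpy as np

def local_linear(x, y, x0, h=0.3):
    """Local polynomial (degree-1) fitted value at x0, Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)       # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0]                               # intercept = fit at x0

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 3, 300))
y = np.exp(0.8 * x) + rng.normal(0, 0.05, 300)   # nonlinear mean + noise
pts = np.array([0.5, 1.5, 2.5])
fit = np.array([local_linear(x, y, x0) for x0 in pts])
truth = np.exp(0.8 * pts)
max_rel_err = np.max(np.abs(fit - truth) / truth)
```

The model calibration estimator would use such fitted values in place of the linear predictions of classical calibration.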

10.
This paper studies moderate deviations for real-valued extended negatively dependent (END) random variables with consistently varying tails. Building on moderate deviations for partial sums, a necessary and sufficient condition is obtained, under certain assumptions, for moderate deviations of random sums to hold.

11.
To overcome the blindness of parameter setting in the least squares support vector machine, the fruit fly optimization algorithm is used to select its parameters, yielding a hybrid fruit-fly-optimized LSSVM forecasting model. Taking China's logistics demand forecasting as an example, the feasibility and effectiveness of the model are verified. The results show that, compared with a single LSSVM model and an LSSVM model optimized by simulated annealing, the proposed model not only selects parameter values effectively but also achieves higher forecasting accuracy.
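A bare-bones version of the fruit fly search loop looks like the following. As a stand-in for the LSSVM cross-validation error surface, the objective is a simple quadratic bowl over hypothetical (gamma, sigma) coordinates; the swarm size, step, and iteration count are arbitrary illustrative settings.

```python
import numpy as np

def foa_minimize(f, dim=2, n_flies=20, iters=60, step=0.5, seed=0):
    """Minimal fruit fly optimization: flies scatter randomly around the
    best-known location; the swarm centre moves to the best value found."""
    rng = np.random.default_rng(seed)
    centre = rng.uniform(-3, 3, dim)
    best_x, best_f = centre.copy(), f(centre)
    for _ in range(iters):
        flies = centre + rng.uniform(-step, step, (n_flies, dim))
        vals = np.apply_along_axis(f, 1, flies)
        i = np.argmin(vals)
        if vals[i] < best_f:                     # greedy move of the swarm
            best_f, best_x = vals[i], flies[i].copy()
            centre = best_x
    return best_x, best_f

# stand-in for the LSSVM validation-error surface, minimum at (1.5, -0.5)
obj = lambda z: (z[0] - 1.5) ** 2 + (z[1] + 0.5) ** 2
x_best, f_best = foa_minimize(obj)
```

In the real model the objective would train an LSSVM at each candidate (regularization, kernel width) pair and return its validation error.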

12.
An ellipsoidal frontier model: Allocating input via parametric DEA   (Cited: 1; self-citations: 0; citations by others: 1)
This paper presents the ellipsoidal frontier model (EFM), a parametric data envelopment analysis (DEA) model for input allocation. EFM addresses the problem of distributing a single total fixed input by assuming the existence of a predefined locus of points that characterizes the DEA frontier. Numeric examples included in the paper show EFM’s capacity to allocate shares of the total fixed input to each DMU so that they will all become efficient. By varying the eccentricities, input distribution can be performed in infinite ways, gaining control over DEA weights assigned to the variables in the model. We also show that EFM assures strong efficiency and behaves coherently within the context of sensitivity analysis, two properties that are not observed in other models found in the technical literature.
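For readers unfamiliar with DEA, the envelopment linear program that such models build on can be illustrated with the standard input-oriented CCR model (this is the textbook baseline, not the paper's EFM):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: inputs (n_dmu x m), Y: outputs (n_dmu x s).
    Solve: min theta  s.t.  X^T lam <= theta * X[o],  Y^T lam >= Y[o],  lam >= 0.
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n); c[0] = 1.0            # variables: [theta, lam_1..lam_n]
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[o]                        # input constraints
    A_ub[:m, 1:] = X.T
    A_ub[m:, 1:] = -Y.T                        # output constraints
    b_ub[m:] = -Y[o]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun

# two inputs, one output; DMU 2 uses more of both inputs for the same output
X = np.array([[2., 3.], [4., 2.], [4., 6.]])
Y = np.array([[1.], [1.], [1.]])
effs = [ccr_input_efficiency(X, Y, o) for o in range(3)]
```

EFM departs from this by imposing a parametric (ellipsoidal) locus for the frontier, which is what gives it control over the input allocation.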

13.
Scheduling operations on a farm is considered as a function of the available men and machinery and of the influence of the weather on materials (moisture content). A simulation model, with a heuristic strategy for selecting operations at each moment of decision based on the state of the system, and a linear programming model are applied to the grain harvest to demonstrate the influence of the models on the resulting variable costs (overtime, drying of wet grain, and timeliness losses of wheat and straw) and the influence of the input data (weather, attributes of material, and number of workable hours) on those costs. The lower costs found with simplified input (hourly, daily, weekly data) in simulation persist in the linear programming model, owing to its deviation from the real workable-time constraints and decision variables. This tendency suggests that the LP models usual in agricultural planning are too simple.

14.
Data envelopment analysis (DEA) is a non-parametric approach based on linear programming that has been widely applied for evaluating the relative efficiency of a set of homogeneous decision-making units (DMUs) with multiple inputs and outputs. The original DEA models use positive input and output variables that are measured on a ratio scale, but these models do not apply to the variables in which negative data can appear. However, with the widespread use of interval scale data and undesirable data, the emphasis has been directed towards the simultaneous consideration of the positive and negative data in DEA models. In this paper, using the slacks-based measure, we propose an extended model to evaluate the efficiency of DMUs, even if some variables are measured on an interval scale and some on a ratio scale. Moreover, the extended model allows for the presence of all interval-scale variables, which are capable of taking both negative and positive values.

15.
Sensitivity analysis—determination of how prediction variables affect response variables—of individual‐based models (IBMs) are few but important to the interpretation of model output. We present sensitivity analysis of a spatially explicit IBM (HexSim) of a threatened species, the Northern Spotted Owl (NSO; Strix occidentalis caurina) in Washington, USA. We explored sensitivity to HexSim variables representing habitat quality, movement, dispersal, and model architecture; previous NSO studies have well established sensitivity of model output to vital rate variation. We developed “normative” (expected) model settings from field studies, and then varied the values of ≥ 1 input parameter at a time by ±10% and ±50% of their normative values to determine influence on response variables of population size and trend. We determined time to population equilibration and dynamics of populations above and below carrying capacity. Recovery time from small population size to carrying capacity greatly exceeded decay time from an overpopulated condition, suggesting lag time required to repopulate newly available habitat. Response variables were most sensitive to input parameters of habitat quality which are well‐studied for this species and controllable by management. HexSim thus seems useful for evaluating potential NSO population responses to landscape patterns for which good empirical information is available.
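The ±10% / ±50% one-at-a-time perturbation design can be sketched with any simple population model standing in for HexSim. The logistic model and parameter values below are purely illustrative, not the study's settings:

```python
import numpy as np

def simulate(r=0.25, K=500.0, n0=50.0, years=60):
    """Toy discrete logistic population model (stand-in for the IBM)."""
    n = n0
    for _ in range(years):
        n = n + r * n * (1 - n / K)
    return n

normative = {"r": 0.25, "K": 500.0, "n0": 50.0}
baseline = simulate(**normative)

# vary one parameter at a time by +/-10% and +/-50% of its normative value
results = {}
for name in normative:
    for pct in (-0.5, -0.1, 0.1, 0.5):
        params = dict(normative)
        params[name] = normative[name] * (1 + pct)
        results[(name, pct)] = simulate(**params)

# sensitivity: relative change in the response (final population size)
sens = {k: abs(v - baseline) / baseline for k, v in results.items()}
```

As in the study, only the parameters that move the response (here, carrying capacity, the habitat-quality analogue) show up as influential; growth-rate and initial-size perturbations wash out once the population equilibrates.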

16.
This paper develops a new radial super-efficiency data envelopment analysis (DEA) model, which allows input–output variables to take both negative and positive values. Compared with existing DEA models capable of dealing with negative data, the proposed model can rank the efficient DMUs and is feasible no matter whether the input–output data are non-negative or not. It successfully addresses the infeasibility issue of both the conventional radial super-efficiency DEA model and the Nerlove–Luenberger super-efficiency DEA model under the assumption of variable returns to scale. Moreover, it can project each DMU onto the super-efficiency frontier along a suitable direction and never leads to worse target inputs or outputs than the original ones for inefficient DMUs. Additional advantages of the proposed model include monotonicity, units invariance and output translation invariance. Two numerical examples demonstrate the practicality and superiority of the new model.

17.
Multiple imputation (MI) methods have been widely applied in economic applications as a robust statistical way to incorporate data where some observations have missing values for some variables. However in stochastic frontier analysis (SFA), application of these techniques has been sparse and the case for such models has not received attention in the appropriate academic literature. This paper fills this gap and explores the robust properties of MI within the stochastic frontier context. From a methodological perspective, we depart from the standard MI literature by demonstrating, conceptually and through simulation, that it is not appropriate to use imputations of the dependent variable within the SFA modelling, although they can be useful to predict the values of missing explanatory variables. Fundamentally, this is because efficiency analysis involves decomposing a residual into noise and inefficiency and as a result any imputation of a dependent variable would be imputing efficiency based on some concept of average inefficiency in the sample. A further contribution that we discuss and illustrate for the first time in the SFA literature, is that using auxiliary variables (outside of those contained in the SFA model) can enhance the imputations of missing values. Our empirical example neatly articulates that often the source of missing data is only a sub-set of components comprising a part of a composite (or complex) measure and that the other parts that are observed are very useful in predicting the value.
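The paper's core recommendation, impute missing explanatory variables (ideally helped by auxiliary variables) rather than the dependent variable, can be illustrated with a single regression imputation on synthetic data. A full MI procedure would draw several noisy imputations and pool the results by Rubin's rules; this sketch shows only why the auxiliary variable helps:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
aux = rng.normal(0, 1, n)                        # auxiliary variable (outside the SFA model)
x = 2.0 + 1.5 * aux + rng.normal(0, 0.3, n)      # explanatory variable, related to aux
miss = rng.random(n) < 0.25                      # 25% of x missing
obs = ~miss

# regression imputation of the missing explanatory variable from the auxiliary
A = np.column_stack([np.ones(obs.sum()), aux[obs]])
beta, *_ = np.linalg.lstsq(A, x[obs], rcond=None)
x_imp = x.copy()
x_imp[miss] = beta[0] + beta[1] * aux[miss]

# compare with a naive mean imputation that ignores the auxiliary
rmse_aux = np.sqrt(np.mean((x_imp[miss] - x[miss]) ** 2))
rmse_mean = np.sqrt(np.mean((x[obs].mean() - x[miss]) ** 2))
```

The imputed explanatory variable, not an imputed dependent variable, would then enter the SFA estimation, so the noise/inefficiency decomposition is left intact.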

18.
Network equilibrium models are widely used by traffic practitioners to aid them in making decisions concerning the operation and management of traffic networks. The common practice is to test a prescribed range of hypothetical changes or policy measures through adjustments to the input data, namely the trip demands, the arc performance (travel time) functions, and policy variables such as tolls or signal timings. Relatively little use is made, however, of the full implicit relationship between model inputs and outputs inherent in these models. By exploiting the representation of such models as an equivalent optimisation problem, classical results on the sensitivity analysis of non-linear programs may be applied, to produce linear relationships between input data perturbations and model outputs. We specifically focus on recent results relating to the probit Stochastic User Equilibrium (PSUE) model, which has the advantage of greater behavioural realism and flexibility relative to the conventional Wardrop user equilibrium and logit SUE models. The paper goes on to explore four applications of these sensitivity expressions in gaining insight into the operation of road traffic networks. These applications are namely: identification of sensitive, ‘critical’ parameters; computation of approximate, re-equilibrated solutions following a change (post-optimisation); robustness analysis of model forecasts to input data errors, in the form of confidence interval estimation; and the solution of problems of the bi-level, optimal network design variety. Finally, numerical experiments applying these methods are reported.

19.
Grey forecasting models have taken an important role for forecasting energy demand, particularly the GM(1,1) model, because they are able to construct a forecasting model using limited samples without statistical assumptions. To improve the prediction accuracy of a GM(1,1) model, its predicted values are often adjusted by establishing a residual GM(1,1) model, which together form a grey residual modification model. Two main issues should be considered: the sign estimation for a predicted residual and the way the two models are constructed. Previous studies have concentrated on the former issue. However, since both models are usually established in the traditional manner, which depends on a specific parameter that is not easily determined, this paper focuses on the latter issue, incorporating the neural-network-based GM(1,1) model into a residual modification model to resolve the drawback. Prediction accuracies of the proposed neural-network-based prediction models were verified using real power and energy demand cases. Experimental results verify that the proposed prediction models perform well in comparison with the original ones.
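The base GM(1,1) model underneath the proposed residual-modification scheme is compact enough to sketch in full; the neural-network residual stage is omitted, and the demand-style series below is made up for illustration.

```python
import numpy as np

def gm11_fit(x):
    """Fit a GM(1,1) grey model to a short positive series x(0)."""
    x1 = np.cumsum(x)                            # accumulated generating operation (AGO)
    z = 0.5 * (x1[:-1] + x1[1:])                 # background values z(1)
    B = np.column_stack([-z, np.ones(len(z))])   # grey equation x(0)(k) + a z(1)(k) = b
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
    return a, b

def gm11_predict(x, a, b, steps):
    """In-sample fit plus `steps` forecasts via the inverse AGO."""
    x1_hat = lambda k: (x[0] - b / a) * np.exp(-a * k) + b / a
    full = np.array([x1_hat(k) for k in range(len(x) + steps)])
    return np.diff(full, prepend=0.0)            # index 0 equals x[0] exactly

# made-up energy-demand-style series growing roughly exponentially
x = np.array([112.0, 120.5, 130.2, 140.6, 151.8])
a, b = gm11_fit(x)
pred = gm11_predict(x, a, b, steps=2)
in_sample_mape = np.mean(np.abs(pred[:5] - x) / x)
```

A residual modification model would then fit a second GM(1,1) (here, the neural-network variant) to |x - pred| over the sample and add the signed correction to `pred`.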

20.
In conventional data envelopment analysis (DEA), measures are classified as either inputs or outputs. In some real cases, however, there are variables that act as both input and output; these are known as flexible measures. Most of the previously suggested models for determining the status of flexible measures are oriented. One important issue with these models is that, unlike standard DEA, even under constant returns to scale the input- and output-oriented models may produce different efficiency scores; likewise, a flexible measure may be selected as an input variable in one model but as an output variable in the other. In addition, none of the previous studies addressed variable returns to scale (VRS), although the VRS assumption prevails in many real applications. To deal with these issues, this study proposes a new non-oriented model that not only selects the status of each flexible measure as an input or output but also determines the returns-to-scale status. The aggregate model and an extension to negative data related to the proposed approach are then presented.


Copyright © Beijing Qinyun Technology Development Co., Ltd.    京ICP备09084417号-23
