Similar Literature (20 records found)
1.
A soft-sensor model for a complex fermentation process should ideally provide confidence intervals for its predictions, so that engineers can assess both the true state of the process and the reliability of the model. The Bayesian extreme learning machine (ELM) yields a confidence interval along with each prediction and is therefore well suited to soft-sensor modeling of fermentation processes. In practice, however, the input data of a fermentation process are usually noisy, while the Bayesian ELM can only handle noise on the outputs. To address this, an input-uncertain Bayesian ELM is proposed: input uncertainty is introduced into the original Bayesian inference, yielding model parameters and prediction confidence intervals that account for both input and output noise. The method is validated in simulation on a penicillin fermentation process, where a soft-sensor model of product concentration is built. The results show high prediction accuracy, with the obtained confidence intervals covering all the true values.
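
The abstract does not give the model equations, but the baseline (output-noise-only) Bayesian ELM it extends is easy to sketch: a fixed random hidden layer plus Bayesian linear regression on the hidden features, whose posterior yields both a predictive mean and a confidence interval. Everything below (sizes, priors, toy data) is illustrative; the paper's contribution, propagating input noise through the inference, would additionally widen `var`.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_features(X, W, b):
    """Random-feature hidden layer of an ELM (sigmoid activation)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

# toy data: y = sin(x) plus output noise
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

n_hidden = 50
W = rng.standard_normal((X.shape[1], n_hidden))
b = rng.standard_normal(n_hidden)
H = elm_features(X, W, b)

# Bayesian linear regression on hidden features: prior precision alpha,
# output-noise precision beta
alpha, beta = 1e-2, 1.0 / 0.1**2
A = alpha * np.eye(n_hidden) + beta * H.T @ H   # posterior precision
S = np.linalg.inv(A)                            # posterior covariance
m = beta * S @ H.T @ y                          # posterior mean of weights

# predictive mean and 95% confidence interval at new inputs
Xs = np.linspace(-3, 3, 100)[:, None]
Hs = elm_features(Xs, W, b)
mu = Hs @ m
var = 1.0 / beta + np.einsum('ij,jk,ik->i', Hs, S, Hs)
lo, hi = mu - 1.96 * np.sqrt(var), mu + 1.96 * np.sqrt(var)
```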

2.
A lithium-ion battery is a complex electrochemical dynamic system, and accurate real-time estimation of its state of health (SOH) is essential for maintaining the traction batteries of electric vehicles; traditional modeling approaches struggle to estimate SOH online. Starting from real-time SOH assessment, an SOH prediction model is built on the basis of incremental learning, using indicators correlated with battery health. To address the time cost of incremental learning, an HI-DD algorithm incorporating a sliding-window technique is proposed; it detects whether concept drift has occurred and thereby guides where the model should be updated. A model-update strategy combining HI-DD with AdaBoost.RT is designed to improve online learning performance and prediction accuracy. The approach is validated on battery aging data provided by CALCE. The results show that the incremental-learning-based HI-DD-AdaBoost.RT algorithm offers strong online updating capability and high prediction accuracy, meeting the practical requirements of online SOH prediction.
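
HI-DD itself is the paper's algorithm and its details are not given here; the sketch below shows only the generic sliding-window idea it builds on, flagging drift when the recent prediction-error mean departs from the long-run error statistics. The class name and the 3-sigma rule are assumptions.

```python
import numpy as np
from collections import deque

class SlidingWindowDriftDetector:
    """Generic sketch: flag concept drift when the mean error over the
    sliding window exceeds the long-run mean by k standard deviations."""
    def __init__(self, window=50, k=3.0):
        self.errors = deque(maxlen=window)
        self.k = k
        self.mean, self.m2, self.n = 0.0, 0.0, 0   # Welford accumulators

    def update(self, error):
        self.n += 1
        d = error - self.mean
        self.mean += d / self.n
        self.m2 += d * (error - self.mean)          # running variance * n
        self.errors.append(error)
        window = self.errors.maxlen
        if len(self.errors) < window or self.n < 2 * window:
            return False                            # not enough history yet
        std = (self.m2 / (self.n - 1)) ** 0.5
        return np.mean(self.errors) > self.mean + self.k * std
```

When `update` returns True, the SOH model would be retrained on the samples in the window (in the paper, via the AdaBoost.RT-based update strategy).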

3.
田慧欣  王安娜 《控制与决策》2012,27(9):1433-1436
Addressing the characteristics of soft-sensor modeling and the main problems encountered in it, a soft-sensor modeling method based on the AdaBoost.RT ensemble learning algorithm is proposed. To remedy the inherent shortcomings of AdaBoost.RT and the difficulty of updating soft-sensor models online, two improvements are introduced: a self-adaptive threshold and an added incremental-learning capability. The method is used to build a soft-sensor model of molten steel temperature for a 300 t LF refining furnace at Baosteel, and the model is validated against actual production data. The results show that the model achieves good prediction accuracy and supports effective online updating.
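
For reference, the plain AdaBoost.RT loop looks roughly as follows; this is a sketch with a fixed threshold `phi`, whereas the paper's improvement is to adapt that threshold automatically. The tree base learner and the power coefficient are illustrative choices, and the relative-error step assumes nonzero targets.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def adaboost_rt(X, y, T=20, phi=0.1):
    """Plain AdaBoost.RT: samples whose absolute relative error exceeds
    phi count as 'wrong'; weights of correct samples are shrunk."""
    n = len(y)
    D = np.full(n, 1.0 / n)                    # sample weights
    models, betas = [], []
    for _ in range(T):
        h = DecisionTreeRegressor(max_depth=4)
        h.fit(X, y, sample_weight=D)
        are = np.abs((h.predict(X) - y) / y)   # absolute relative error
        eps = D[are > phi].sum()               # weighted error rate
        if eps >= 0.5 or eps == 0:
            break
        beta = eps ** 2                        # power coefficient = 2
        D = np.where(are <= phi, D * beta, D)  # demote correct samples
        D /= D.sum()
        models.append(h)
        betas.append(beta)
    w = np.log(1.0 / np.array(betas))          # confidence of each model
    def predict(Xq):
        P = np.array([m.predict(Xq) for m in models])
        return (w[:, None] * P).sum(axis=0) / w.sum()
    return predict
```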

4.
Adaptive ensemble modeling based on intelligent identification of update samples
汤健  柴天佑  刘卓  余文  周晓杰 《自动化学报》2016,42(7):1040-1052
Updating a soft-sensor model adaptively with new samples that capture drift in the modeled process can reduce model complexity and runtime cost while improving interpretability and prediction accuracy. Indicators such as approximate linear dependence (ALD) and prediction error (PE) each reflect only one aspect of process drift, so domain experts have to combine these indicators with their own accumulated experience to identify useful update samples. To address this, an adaptive ensemble modeling strategy based on intelligent identification of update samples is proposed. First, a selective ensemble model based on improved random vector functional-link (IRVFL) networks is built offline from historical data. Then, after the ensemble sub-models predict a new sample, an on-line adaptive weighting fusion (OLAWF) algorithm updates the sub-model weights, dynamically adapting to changes in process characteristics during online operation. Next, a fuzzy inference model built from domain expert knowledge fuses the relative ALD (RALD) and relative PE (RPE) values of the new sample to identify update samples intelligently and construct a new modeling sample library. Finally, the ensemble model is updated adaptively online. Simulations on synthetic data verify the soundness and effectiveness of the proposed algorithm.
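
The OLAWF weighting rule is not reproduced in the abstract; the snippet below is a generic error-based stand-in showing how sub-model weights can be renormalized after each new labeled sample. The function name and the sensitivity parameter `eta` are assumptions.

```python
import numpy as np

def update_weights(weights, preds, y_true, eta=2.0):
    """One step of error-based ensemble reweighting (a generic stand-in
    for the paper's OLAWF rule): sub-models with smaller absolute error
    on the newest sample receive exponentially larger weight."""
    errors = np.abs(np.asarray(preds) - y_true)
    new_w = weights * np.exp(-eta * errors / (errors.max() + 1e-12))
    return new_w / new_w.sum()

w = np.ones(3) / 3                        # three sub-models, equal start
w = update_weights(w, preds=[1.9, 2.4, 2.1], y_true=2.0)
# ensemble output for the next sample: np.dot(w, next_preds)
```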

5.
The biological treatment process in a wastewater treatment system is a very complex process, and treatment efficiency is usually measured by laboratory tests that typically take five days. In this paper, a time-delay neural network (TDNN) modeling method is proposed for predicting treatment results. First, a sensitivity analysis performed on a multi-layer perceptron (MLP) network model is used to reduce the input dimensions of the model. A TDNN model is then used to improve on the original MLP network model. Subsequently, an on-line prediction and model-updating strategy is proposed and implemented. Simulations using industrial process data show that prediction accuracy can be improved by the on-line model updating.
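
A TDNN differs from a static MLP mainly in feeding the network lagged copies of each input, so the static network can capture process dynamics. A minimal sketch of that input construction (names assumed):

```python
import numpy as np

def make_delay_inputs(series, delays):
    """Build time-delay input vectors x_t = [u_t, u_{t-1}, ..., u_{t-d}]
    from a measured process variable."""
    d = max(delays)
    return np.column_stack(
        [series[d - k : len(series) - k] for k in delays])

u = np.arange(10.0)                      # a measured input variable
X = make_delay_inputs(u, delays=[0, 1, 2])
# each row would be paired with the lab result obtained five days later
```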

6.
Online measurement of the average particle size is typically unavailable in industrial cobalt oxalate synthesis, so soft-sensor prediction of this important quality variable is required. The cobalt oxalate synthesis process is a complex, multivariable, and highly nonlinear process. In this paper, an effective soft sensor based on least squares support vector regression (LSSVR) with dual updating is developed for predicting the average particle size. In this soft-sensor model, moving-window LSSVR (MWLSSVR) updating and model-output offset updating are activated based on model performance assessment. The feasibility and efficiency of the proposed soft sensor are demonstrated through application to an industrial cobalt oxalate synthesis process.
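
A rough sketch of the dual-updating logic, assuming a generic regressor factory in place of LSSVR training; the trigger tolerance and offset gain are made-up constants, and the paper's model performance assessment is more elaborate.

```python
class DualUpdatingSoftSensor:
    """Sketch of dual updating: rebuild the model on a moving window when
    performance degrades badly, otherwise just correct the output offset."""
    def __init__(self, model, fit_model, window=100, tol=0.5):
        self.model, self.fit_model = model, fit_model   # model: pre-trained
        self.window, self.tol, self.offset = window, tol, 0.0

    def predict(self, x):
        return self.model.predict(x[None, :])[0] + self.offset

    def receive_label(self, X_hist, y_hist, x, y):
        if abs(self.predict(x) - y) > self.tol:          # assessment failed:
            self.model = self.fit_model(X_hist[-self.window:],
                                        y_hist[-self.window:])
            self.offset = 0.0                            # full model update
        else:
            self.offset += 0.3 * (y - self.predict(x))   # cheap offset update
```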

7.
This paper presents the application of soft computing techniques for strength prediction of heat-treated extruded aluminium alloy columns failing by flexural buckling. Neural networks (NN) and genetic programming (GP) are the soft computing techniques used in the study; gene-expression programming (GEP), an extension of GP, is employed. The training and test sets for the soft computing models are obtained from experimental results available in the literature. An algorithm is also developed for selecting the optimal NN model. The proposed NN and GEP models are presented in explicit form for use in practical applications. Their accuracy is compared with existing design codes and found to be higher.

8.
陈伟  陈继明 《计算机应用》2016,36(4):914-917
To allocate a cloud service that will satisfy QoS requirements over a future period and to anticipate impending QoS violations, a cloud-service QoS prediction method based on time-series forecasting is proposed. The method uses an improved Bayesian constant-mean (IBCM) model to predict the QoS state of a cloud service accurately over a future period. In experiments, a Hadoop cluster was built to simulate a cloud platform, and two QoS attributes, response time and throughput, were collected as prediction targets. The results show that, compared with time-series methods such as the autoregressive integrated moving average (ARIMA) model and the Bayesian constant-mean discount model, the proposed method achieves a sum of squared errors (SSE), mean absolute error (MAE), mean squared error (MSE), and mean absolute percentage error (MAPE) that are each an order of magnitude smaller, and therefore higher prediction accuracy; the prediction plots likewise show a better fit.
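
The IBCM model itself is not specified in the abstract, but the baseline it improves on, the Bayesian constant-mean discount model, reduces to a simple recursive update. A sketch follows; the prior values and the discount factor are illustrative.

```python
def bcm_discount(series, delta=0.95, m0=0.0, c0=1e6, v=1.0):
    """Bayesian constant-mean model with discount factor delta: the prior
    variance is inflated by 1/delta each step, so old data fade out.
    Returns the one-step-ahead forecasts."""
    m, c, preds = m0, c0, []
    for y in series:
        c = c / delta                 # discount: forget old information
        preds.append(m)               # one-step-ahead forecast
        q = c + v                     # forecast variance
        a = c / q                     # adaptive gain
        m = m + a * (y - m)           # posterior mean update
        c = a * v                     # posterior variance
    return preds

print(bcm_discount([10.1, 9.8, 10.0, 12.0, 12.2])[-2:])
```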

9.
10.
A soft-sensor modeling method based on an improved extreme learning machine
To address the difficulty of measuring certain biological variables in fermentation processes, a soft-sensor modeling method based on an improved extreme learning machine (IELM) is proposed. The method computes optimal input-to-hidden-layer parameters via least squares and error feedback, improving model stability and prediction accuracy. The output weights are computed by bidiagonalization, resolving the ill-conditioning of the output matrix and further improving stability. The method is applied to soft sensing of biomass concentration in an erythromycin fermentation process. The results show that, compared with ELM, PL-ELM, and IRLS-ELM soft-sensor models, the IELM online soft-sensor model achieves higher prediction accuracy and stronger generalization.
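
The bidiagonalization solve is the paper's own device; a closely related and more familiar stabilization, a truncated-SVD pseudo-inverse for the output weights, can stand in for it in a sketch (all names and sizes assumed):

```python
import numpy as np

def elm_stable_fit(X, y, n_hidden=40, rcond=1e-8, seed=0):
    """ELM sketch: random input weights, output weights solved with an
    SVD-based pseudo-inverse, i.e. a numerically stable solve of the
    possibly ill-conditioned hidden-output system."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, (X.shape[1], n_hidden))
    b = rng.uniform(-1, 1, n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H, rcond=rcond) @ y    # truncated-SVD solve
    return lambda Xq: np.tanh(Xq @ W + b) @ beta
```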

11.
In industrial process control, measuring some variables is difficult for environmental or cost reasons, which necessitates a soft sensor that predicts those variables from data collected on easily measured variables. The prediction accuracy and computational speed of soft-sensor modeling can be improved with adequate training samples. However, the rough environment of some industrial fields makes it difficult to acquire enough samples for soft-sensor modeling. Generative adversarial networks (GAN) and the variational autoencoder (VAE) are two prominent methods for learning generative models. In this work, VA-WGAN, which combines a VAE with a Wasserstein generative adversarial network (WGAN) by using the decoder of the VAE as the generator in the WGAN, is established as a generative model to produce new samples for soft sensors. An actual industrial soft sensor with insufficient data is used to verify the data-generation capability of the proposed model. According to the experimental results, the samples obtained with the proposed model resemble the true samples more closely than those from four other common generative models. Moreover, the shortage of training data can be alleviated and the prediction precision of soft sensors improved via these constructed samples.
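
A compact sketch of the wiring, assuming PyTorch; layer widths, the clipping constant, and training details are placeholders, and the VAE pre-training loop is reduced to its loss.

```python
import torch
import torch.nn as nn

z_dim, x_dim = 8, 20
enc = nn.Sequential(nn.Linear(x_dim, 32), nn.ReLU(), nn.Linear(32, 2 * z_dim))
dec = nn.Sequential(nn.Linear(z_dim, 32), nn.ReLU(), nn.Linear(32, x_dim))
critic = nn.Sequential(nn.Linear(x_dim, 32), nn.ReLU(), nn.Linear(32, 1))

def vae_loss(x):
    # step 1: pre-train enc/dec as an ordinary VAE with this loss
    mu, logvar = enc(x).chunk(2, dim=1)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(1).mean()
    return ((dec(z) - x) ** 2).sum(1).mean() + kl

def wgan_step(x_real, opt_c, opt_g):
    # step 2: adversarial refinement, reusing the VAE decoder as generator
    z = torch.randn(len(x_real), z_dim)
    loss_c = critic(dec(z).detach()).mean() - critic(x_real).mean()
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    for p in critic.parameters():
        p.data.clamp_(-0.01, 0.01)       # weight clipping (original WGAN)
    loss_g = -critic(dec(z)).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# e.g. opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)
#      opt_g = torch.optim.RMSprop(dec.parameters(), lr=5e-5)
# new soft-sensor samples afterwards: dec(torch.randn(n, z_dim))
```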

12.
Over the past years, artificial intelligence techniques such as artificial neural networks have been widely used in hydrological modeling studies. In spite of their advantages, these techniques have some drawbacks, including the possibility of getting trapped in local minima, overtraining, and subjectivity in the determination of model parameters. In the last few years, an alternative kernel-based technique called support vector machines (SVM) has become popular in modeling studies due to its advantages over popular artificial intelligence techniques. In addition, the relevance vector machines (RVM) approach has been proposed to recast the main ideas behind SVM in a Bayesian context. The main purpose of this study is to examine the applicability and capability of the RVM for long-term flow prediction and to compare its performance with feed-forward neural networks, SVM, and multiple linear regression models. Meteorological data (rainfall and temperature) and lagged rainfall data were used in the modeling application. Several widely used statistical performance measures were used to evaluate the models. According to the evaluations, the RVM method improved model performance compared with the other methods employed, and it offers an alternative to popular soft computing methods for long-term flow prediction with at least comparable efficiency.

13.
The abundant computing resources in current organizations provide new opportunities for executing parallel scientific applications. The Enterprise Desktop Grid Computing (EDGC) paradigm addresses the potential for harvesting the idle computing resources of an organization's desktop PCs to support the execution of the company's large-scale applications. In these environments, the accuracy of response-time predictions is essential for effective metascheduling that maximizes resource usage without harming the performance of the parallel and local applications. However, this accuracy is a major challenge due to the heterogeneity and non-dedicated nature of EDGC resources. In this paper, two new prediction techniques are presented based on the state of resources. A thorough analysis by linear regression demonstrates that the proposed techniques capture the real behavior of parallel applications better than other common techniques in the literature. Moreover, deviations can be reduced by properly modeling prediction errors, and thus a Self-adjustable Correction method (SAC) for detecting and correcting prediction deviations is proposed, with the ability to adapt to changes in load conditions. An extensive evaluation in a real environment validates the SAC method: the results show that using SAC increases the accuracy of response-time predictions by 35%. The cost of predictions with self-correction and their accuracy in a real environment were analyzed using a combination of the proposed techniques. The results demonstrate that the cost of predictions is negligible and that the combined use of the prediction techniques is preferable.
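
The abstract gives no formulas for SAC; the sketch below captures only the general self-correcting pattern it describes: track deviations, correct the next prediction, and adapt the correction speed depending on whether deviations look like a real load change or like noise. All names and constants are assumptions.

```python
class SelfAdjustingCorrector:
    """Sketch of a self-adjusting correction: add a smoothed recent
    deviation to each raw prediction. Not the published SAC formulas."""
    def __init__(self, alpha=0.2):
        self.alpha, self.correction, self.prev_dev = alpha, 0.0, 0.0

    def corrected(self, raw_prediction):
        return raw_prediction + self.correction

    def observe(self, raw_prediction, actual):
        dev = actual - raw_prediction
        if dev * self.prev_dev > 0:     # persistent one-sided deviation:
            self.alpha = min(0.9, self.alpha * 1.5)   # likely load change
        else:                           # sign flips: probably noise
            self.alpha = max(0.05, self.alpha * 0.5)
        self.correction += self.alpha * (dev - self.correction)
        self.prev_dev = dev
```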

14.
We present a method for mapping a given Bayesian network to a Boltzmann machine architecture, in the sense that the updating process of the resulting Boltzmann machine model converges with high probability to a state that can be mapped back to a maximum a posteriori (MAP) probability state in the probability distribution represented by the Bayesian network. The Boltzmann machine model can be implemented efficiently on massively parallel hardware, since the resulting structure can be divided into two separate clusters in which all the nodes of one cluster can be updated simultaneously. This means that the proposed mapping can be used to provide Bayesian network models with a massively parallel probabilistic reasoning module capable of finding MAP states in a computationally efficient manner. From the neural network point of view, the mapping from a Bayesian network to a Boltzmann machine can be seen as a method for automatically determining the structure and connection weights of a Boltzmann machine by incorporating high-level probabilistic information directly into the neural network architecture, without recourse to a time-consuming and unreliable learning process.

15.
Streaming time series segmentation is one of the major problems in streaming time series mining: it creates a high-level representation of a streaming time series and thereby supports many mining tasks, such as indexing, clustering, classification, and discord discovery. However, the data elements in a streaming time series usually arrive online, are fast-changing, and are unbounded in size, which places high demands on the computing efficiency of segmentation. Segmenting streaming time series accurately under such efficiency constraints is therefore a challenging task. In this paper, we propose an exponential smoothing prediction-based segmentation algorithm (ESPSA). The algorithm is built on a sliding window model and uses the classic exponential smoothing method to compute the smoothed value of each arriving data element as a prediction of the future data. To determine whether a data element is a segmenting key point, we study the statistical characteristics of the prediction error and deduce the relationship between the prediction error and the compression rate. Extensive experiments on both synthetic and real datasets demonstrate that the proposed algorithm segments streaming time series effectively and efficiently. More importantly, compared with candidate algorithms, it reduces computing time by orders of magnitude.
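
A bare-bones version of the smoothing-as-prediction idea: the paper derives its threshold from the statistics of the prediction error and the target compression rate, whereas here it is a fixed constant.

```python
def es_segment(stream, alpha=0.3, threshold=2.0):
    """Sketch: the exponentially smoothed value is the one-step
    prediction; a point whose prediction error exceeds the threshold
    is taken as a segmenting key point."""
    it = iter(stream)
    s = next(it)                       # initialize smoothing with first value
    cuts = []
    for i, x in enumerate(it, start=1):
        if abs(x - s) > threshold:     # prediction error too large: new segment
            cuts.append(i)
            s = x                      # restart smoothing in the new segment
        else:
            s = alpha * x + (1 - alpha) * s
    return cuts

print(es_segment([1, 1.1, 0.9, 1, 5, 5.2, 4.9, 5.1]))   # -> [4]
```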

16.
Industrial processes exhibit severe nonlinearity and time-varying behavior arising from their structural characteristics and external disturbances. To address this, a just-in-time-learning Gaussian process soft-sensor method based on a combined input-output similarity index is proposed. The sample data are first normalized; conventional distance-based and angle-based similarity indices are then computed for the input and output variables separately and combined, and the resulting set of relevant samples is selected to build a Gaussian process regression soft-sensor model. The model is applied to forecasting daily urban electricity consumption. The results show that the method predicts daily consumption with high accuracy and small error, indicating that it is reliable for electricity-demand forecasting and could find wide application in electricity-market analysis.
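
A minimal sketch of the just-in-time step, assuming scikit-learn's Gaussian process regressor and using only input-side similarity (the paper also folds in output similarity); the weight `gamma` balancing distance against angle is an assumed parameter.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def jit_gpr_predict(X, y, xq, k=30, gamma=0.5):
    """Just-in-time learning: rank stored samples by a combined
    distance/angle similarity to the query xq, fit a local GPR on the
    top-k most similar samples, and predict."""
    d = np.linalg.norm(X - xq, axis=1)
    sim_dist = np.exp(-d)                              # distance similarity
    cos = (X @ xq) / (np.linalg.norm(X, axis=1)
                      * np.linalg.norm(xq) + 1e-12)    # angle similarity
    sim = gamma * sim_dist + (1 - gamma) * np.abs(cos) # combined index
    idx = np.argsort(sim)[-k:]                         # most similar samples
    gpr = GaussianProcessRegressor().fit(X[idx], y[idx])
    return gpr.predict(xq[None, :])[0]
```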

17.
The prediction accuracy of multi-fidelity models can be enhanced by incorporating gradient information. However, the computational complexity increases dramatically as the number of design variables grows. In this work, a gradient-enhanced multi-fidelity Gaussian process model using a portion of gradients (PGEMFGP) is proposed. Specifically, a Bayesian Gaussian process regression model for multi-fidelity (MF) data fusion is developed that incorporates high-fidelity (HF) and low-fidelity (LF) responses as well as the corresponding gradients. A screening technique based on distance correlation is applied to select a portion of the gradients of the low-fidelity model, greatly reducing the modeling complexity. The merit of the proposed method is tested on six numerical examples ranging from 10-D to 30-D, as well as an aerodynamic airfoil case with 18 design variables. The proposed method is compared with two existing gradient-enhanced Gaussian process models. The modeling efficiency of the proposed model is dramatically improved over the original gradient-enhanced multi-fidelity Gaussian process model, while the loss of prediction accuracy is almost negligible. It is therefore a promising approach for gradient-enhanced modeling with multi-fidelity data.
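
The screening statistic itself is standard. A plain distance-correlation function, which could score each LF gradient component against the response so that only the highest-scoring portion is kept, is sketched below; the use as a gradient filter is an assumption about the paper's workflow.

```python
import numpy as np

def distance_correlation(x, y):
    """Distance correlation (Szekely et al.) between two 1-D samples:
    0 means independence, 1 a perfect (possibly nonlinear) relation."""
    def centered(a):
        D = np.abs(a[:, None] - a[None, :])      # pairwise distance matrix
        return D - D.mean(0) - D.mean(1)[:, None] + D.mean()
    A = centered(np.asarray(x, float))
    B = centered(np.asarray(y, float))
    dcov2 = (A * B).mean()                       # squared distance covariance
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(max(dcov2, 0.0) / denom) if denom > 0 else 0.0

# e.g. keep the gradient components with the largest scores:
# scores = [distance_correlation(dy_dx[:, j], y) for j in range(n_vars)]
```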

18.
Bayesian estimation is a major and robust estimator for many advanced statistical models. Being able to incorporate prior knowledge in statistical inference, Bayesian methods have been successfully applied in many different fields, such as business, computer science, economics, epidemiology, genetics, imaging, and political science. However, due to its high computational complexity, Bayesian estimation has been deemed difficult, if not impractical, for large-scale databases, stream data, data warehouses, and data in the cloud. In this paper, we propose a novel compression and aggregation scheme (C&A) that enables distributed, parallel, or incremental computation of Bayesian estimates. Assuming a partitioning of a large dataset, the C&A scheme compresses each partition into a synopsis and aggregates the synopses into an overall Bayesian estimate without accessing the raw data. Such a scheme can find applications in OLAP for data cubes, stream data mining, and cloud computing. It saves tremendous computing time, since each partition is processed only once, enabling fast incremental updates and parallel processing. We prove that the compression is asymptotically lossless, in the sense that the aggregated estimator deviates from the true model by an error that is bounded and approaches zero as the data size increases. The results show that the proposed C&A scheme makes OLAP of Bayesian estimates feasible in a data cube. Further, it supports real-time Bayesian analysis of stream data, which can only be scanned once and cannot be permanently retained. Experimental results validate our theoretical analysis and demonstrate that our method can dramatically reduce time and space costs with almost no degradation of modeling accuracy.
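
For a conjugate-Gaussian mean, the C&A idea is exactly computable and makes a compact illustration: each partition is compressed to sufficient statistics (the synopsis) and the posterior is aggregated without rereading the raw data. The prior values and the known noise variance are assumptions of the sketch, not the paper's general scheme.

```python
import numpy as np

def synopsis(partition):
    """Compress a partition into sufficient statistics: for a Gaussian
    mean with known variance these are (count, sum)."""
    x = np.asarray(partition, float)
    return len(x), x.sum()

def aggregate(synopses, prior_mean=0.0, prior_var=100.0, noise_var=1.0):
    """Aggregate synopses into the Bayesian posterior mean without
    touching the raw observations."""
    n = sum(s[0] for s in synopses)
    total = sum(s[1] for s in synopses)
    post_prec = 1.0 / prior_var + n / noise_var
    return (prior_mean / prior_var + total / noise_var) / post_prec

parts = [np.random.normal(3.0, 1.0, 500) for _ in range(4)]  # 4 partitions
print(aggregate([synopsis(p) for p in parts]))               # approx. 3.0
```

Because the synopses are additive, a new partition (or stream chunk) only requires one extra `synopsis` call and a re-aggregation, which is what enables the incremental and parallel computation described above.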

19.
Pattern drift is a common issue for machine learning in real applications, as the distribution generating the data may change under nonstationary environmental or operational conditions. In our previous work, a strategy based on Feature Vector Selection (FVS) was proposed to let a Support Vector Regression (SVR) model update adaptively with streaming data, but that strategy cannot handle recurring patterns. This paper proposes an instance-based online learning approach that adaptively updates an SVR-based ensemble model with streaming data points. The approach reduces the computational complexity of the updating process by selecting only part of the newly available data, and it follows the ongoing patterns in a timely manner by resorting to FVS. New sub-models are created directly from a basic model, each representing the data stream over a different period. A dynamic ensemble selection strategy is integrated to select the sub-models most relevant to a new data point for deriving the prediction, while reducing the influence of irrelevant ones. The weights of the models in the ensemble are updated based on their prediction errors. Comparisons with several benchmark approaches on synthetic datasets and on a dataset concerning leakage from the first seal in a Reactor Coolant Pump demonstrate the efficiency and accuracy of the proposed online learning ensemble approach.
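
The dynamic-selection step can be sketched simply, assuming each sub-model is summarized by the centroid of the data it was trained on; this is one plausible relevance measure, and the paper's criterion may differ.

```python
import numpy as np

def select_submodels(centers, x_new, n_select=3):
    """Dynamic-ensemble-selection sketch: return the indices of the
    sub-models whose training-data centroids lie closest to the new
    point; only those are combined for the prediction."""
    d = np.linalg.norm(np.asarray(centers) - x_new, axis=1)
    return np.argsort(d)[:n_select]
```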

20.
For parallel decoding of H.264 video streams, a cooperative CPU/GPU algorithm is proposed. Using NVIDIA's Compute Unified Device Architecture (CUDA) as the GPU programming model, the inverse DCT and intra-frame prediction are accelerated on the GPU. Combined with mixed CUDA programming, the computational performance of the system is improved while maintaining high numerical accuracy. The parallel algorithm is compared with a CPU-only implementation, and the speedup of parallel decoding is verified on varying numbers of video streams. Experimental results show that the algorithm substantially improves video-stream decoding efficiency, achieving an average speedup of roughly 10x over the CPU-only implementation.
