Similar Literature
20 similar documents found (search time: 0 ms)
1.
Dynamic data reconciliation: Alternative to Kalman filter   (Total citations: 2; self-citations: 0; by others: 2)
Process measurements are often corrupted with varying degrees of noise. Measurement noise undermines the performance of process monitoring and control systems. To reduce the impact of measurement noise, exponentially-weighted moving average and moving average filters are commonly used. These filters perform well for processes at steady state or with slow dynamics. For processes with significant dynamics, more sophisticated filters, such as model-based filters, must be used. The Kalman filter is a well-known model-based filter that has been widely used in the aerospace industry. This paper discusses another model-based filter, the dynamic data reconciliation (DDR) filter. Both the Kalman and DDR filters adhere to the same basic principle: using information from both measurements and models to provide a more reliable representation of the current state of the process. However, the DDR filter can be more easily incorporated into a wide variety of model structures and is easier to understand and implement. Simulation results for a binary distillation column with four controlled variables showed that the DDR filter had performance equivalent to the Kalman filter in dealing with both white and autocorrelated noise.
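The abstract gives no equations, so below is a minimal sketch of the shared principle, assuming a scalar first-order process and illustrative noise levels (none of these values are from the paper): at each step the measurement and the model prediction are blended by minimizing a weighted least-squares objective, which in the scalar case has a closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 0.9, 0.1                      # assumed first-order process: x+ = a*x + b*u
sig_meas, sig_model = 0.5, 0.2       # illustrative noise standard deviations
x_true = x_hat = 1.0
u = 1.0
for k in range(10):
    x_true = a * x_true + b * u + rng.normal(0.0, sig_model)
    y = x_true + rng.normal(0.0, sig_meas)            # noisy measurement
    x_pred = a * x_hat + b * u                        # model prediction
    # one-step DDR: argmin_x (x - y)^2/sig_meas^2 + (x - x_pred)^2/sig_model^2
    w_y, w_m = 1 / sig_meas**2, 1 / sig_model**2
    x_hat = (w_y * y + w_m * x_pred) / (w_y + w_m)    # closed-form blend
    print(f"k={k}  measured={y:6.3f}  reconciled={x_hat:6.3f}  true={x_true:6.3f}")
```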

2.
A method for the detection and estimation of measurement bias in nonlinear dynamic processes is presented. It employs model-based data reconciliation and examines the resulting differences between the measured and reconciled values. Since bias is commonly present in process measurements, this technique is an important step toward the ultimate goal of reconciling 'raw' process data that may contain bias and gross errors in addition to small random errors. A CSTR example shows that the method can detect a single bias in a nonlinear dynamic process whether or not the exact model equations are known.
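A minimal sketch of the residual-based idea, assuming a simple linear steady-state balance rather than the paper's CSTR model: a persistent nonzero mean in the reconciliation adjustments signals a bias somewhere in the network (isolating which sensor carries it requires further tests).

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = np.array([0.5, 0.5, 0.5])            # assumed sensor standard deviations
A = np.array([[1.0, 1.0, -1.0]])             # balance: f1 + f2 - f3 = 0
V = np.diag(sigma**2)
f_true = np.array([10.0, 5.0, 15.0])

adjustments = []
for _ in range(200):
    y = f_true + rng.normal(0.0, sigma)
    y[1] += 1.2                              # inject a bias on sensor 1
    d = V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ y)   # measured - reconciled
    adjustments.append(d)

d = np.array(adjustments)
z = d.mean(axis=0) / (d.std(axis=0, ddof=1) / np.sqrt(len(d)))
print("mean adjustment per sensor:", d.mean(axis=0).round(3))
print("z-scores (|z| >> 2 indicates bias somewhere):", z.round(1))
```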

3.
4.
Adapting integrity enforcement techniques for data reconciliation   (Total citations: 2; self-citations: 0; by others: 2)
Integration of data sources opens up possibilities for new and valuable applications of data that cannot be supported by the individual sources alone. Unfortunately, many data integration projects are hindered by the inherent heterogeneities of the sources to be integrated. In particular, differences in the way that real-world data are encoded within sources can cause a range of difficulties, not least of which is that the conflicting semantics may not be recognised until the integration project is well under way. Once identified, semantic conflicts of this kind are typically dealt with by configuring a data transformation engine that can convert incoming data into the form required by the integrated system. However, determining a complete and consistent set of data transformations for any given integration task is far from trivial. In this paper, we explore the potential application of integrity enforcement techniques in supporting this process. We describe the design of a data reconciliation tool (LITCHI) based on these techniques that aims to assist taxonomists in the integration of biodiversity data sets. Our experiences have highlighted several limitations of integrity enforcement when applied to this real-world problem, and we describe how we have overcome them in the design of our system.

5.
To address the shortcomings of existing dynamic data reconciliation methods, this paper proposes a new type of robust estimation function based on the principles of robust estimation; the function has a clear physical interpretation and flexible parameter tuning. The dynamic data reconciliation method built on this function (IRDR) rectifies random errors while simultaneously detecting and identifying gross errors at outlying points. A CSTR simulation example shows that the method accurately identifies multiple gross errors in the system and yields reconciled results with small deviations and smooth profiles, demonstrating clear advantages.
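The paper's estimation function is not reproduced here; the sketch below uses the Welsch function as a stand-in redescending estimator, with illustrative flows and noise levels, to show how a robust objective down-weights a gross error during reconciliation while still flagging it.

```python
import numpy as np
from scipy.optimize import minimize

sigma = 0.4
y = np.array([10.3, 5.1, 19.0])           # measured f1, f2, f3; f3 has a gross error

def welsch(r, c=2.0):                     # redescending rho: outlier cost is bounded
    return (c**2 / 2) * (1 - np.exp(-(r / c) ** 2))

def objective(f12):                       # balance f3 = f1 + f2 eliminated by substitution
    x = np.array([f12[0], f12[1], f12[0] + f12[1]])
    return np.sum(welsch((y - x) / sigma))

res = minimize(objective, y[:2], method="Nelder-Mead")
x_hat = np.array([*res.x, res.x.sum()])
scaled = np.abs(y - x_hat) / sigma
print("reconciled flows:", x_hat.round(2))                  # ~ [10.3, 5.1, 15.4]
print("flagged as gross errors (|r|/sigma > 3):", np.where(scaled > 3)[0])  # -> [2]
```

A plain least-squares objective would instead smear the error across all three flows; the bounded rho is what lets the outlier be isolated.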

6.
Dynamic data reconciliation based on the integral method has the advantages of being simple, fast, and suitable for online application. This paper studies the principle of the integral-method dynamic data reconciliation technique and how to apply it. The results show that the method requires no state-space model and makes full use of the temporal redundancy along the whole time axis; however, the interval length used in the integral method affects the reconciliation accuracy, so a suitable interval length should be determined before applying the method. The integral method was applied to data reconciliation of the quasi-steady-state operation of an atmospheric-vacuum crude distillation unit, and the results show that its accuracy is higher than that of steady-state data reconciliation.
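A minimal sketch of the integral idea, assuming a single holdup balance dV/dt = Fin − Fout and trapezoidal quadrature over the chosen interval (all values illustrative): the integrated balance supplies one linear constraint that ties together every measurement in the window, which is then enforced by the usual projection.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n = 1.0, 6                            # illustrative interval and sample count
t = np.arange(n) * dt
Fin_true = 2.0 + 0.1 * t                  # slowly ramping inlet flow
Fout_true = np.full(n, 2.0)
V_true = 10.0 + np.concatenate(
    [[0.0], np.cumsum(((Fin_true[1:] + Fin_true[:-1]) / 2 - 2.0) * dt)])

# measurement vector: Fin samples, Fout samples, holdup at both window ends
y = np.concatenate([Fin_true, Fout_true, [V_true[0], V_true[-1]]])
y = y + rng.normal(0, 0.05, y.size)

w = np.ones(n); w[0] = w[-1] = 0.5        # trapezoidal quadrature weights
# one integral constraint: sum w*(Fin - Fout)*dt - (V_end - V_0) = 0
A = np.concatenate([w * dt, -w * dt, [1.0, -1.0]])[None, :]
Sig = np.eye(y.size) * 0.05**2
x_hat = y - Sig @ A.T @ np.linalg.solve(A @ Sig @ A.T, A @ y)
print("integral balance residual before:", round((A @ y).item(), 4))
print("integral balance residual after: ", round((A @ x_hat).item(), 10))
```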

7.
Data models capture the structure and characteristic properties of data entities, e.g., in terms of a database schema or an ontology. They are the backbone of diverse applications, reaching from information integration, through peer-to-peer systems and electronic commerce, to social networking. Many of these applications involve models of diverse data sources. Effective utilisation and evolution of data models therefore call for matching techniques that generate correspondences between their elements. Various matching tools have been developed in the past. Yet their results are often incomplete or erroneous, and thus need to be reconciled, i.e., validated by an expert. This paper analyses the reconciliation process in the presence of large collections of data models, where the network induced by the generated correspondences shall meet consistency expectations in terms of integrity constraints. We focus specifically on how to handle data models that exhibit internal structure and potentially differ in their assumed level of abstraction. We argue that such a setting calls for a probabilistic model of integrity constraints, for which satisfaction is preferred but not required. In this work, we present a model for probabilistic constraints that enables reasoning on the correctness of individual correspondences within a network of data models, in order to guide an expert in the validation process. To support pay-as-you-go reconciliation, we also show how to construct a set of high-quality correspondences even if an expert validates only a subset of all generated correspondences. We demonstrate the efficiency of our techniques on real-world datasets comprising database schemas and ontologies from various application domains.

8.
Dynamic data reconciliation problems are discussed from the perspective of the mathematical theory of ill-posed inverse problems. Regularization is of crucial importance for obtaining satisfactory estimation quality of the reconciled variables. Usually, some penalty is added to the least-squares objective to achieve a well-posed problem. However, appropriate discretization schemes of the time-continuous problem act themselves as regularization, reducing the need for problem modification. Based on this property, we suggest successively refining the discretization of the continuous problem, starting from a coarse grid, to find a regularization that strikes a good compromise between (measurement) data error and regularization error in the estimate. In particular, our experience supports the conjecture that non-equidistant discretization grids offer advantages over uniform grids.
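A minimal sketch of discretization acting as regularization, assuming a piecewise-linear representation of a smooth signal (not the authors' scheme): as the grid is refined, the estimation error first drops (less discretization bias) and then rises again (noise fitting), which is exactly the compromise the successive refinement seeks.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 200)
x_true = np.sin(2 * np.pi * t)
y = x_true + rng.normal(0, 0.3, t.size)

def hat_basis(t, knots):
    """Piecewise-linear interpolation matrix from knot values to sample times."""
    B = np.zeros((t.size, knots.size))
    for j in range(knots.size):
        B[:, j] = np.interp(t, knots, np.eye(knots.size)[j])
    return B

for m in (4, 8, 16, 64, 200):             # successively refined grids
    knots = np.linspace(0, 1, m)
    B = hat_basis(t, knots)
    c, *_ = np.linalg.lstsq(B, y, rcond=None)
    err = np.sqrt(np.mean((B @ c - x_true) ** 2))
    print(f"grid points={m:3d}  RMS estimation error={err:.3f}")
```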

9.
Reliable measurement data are crucial for chemical process modeling, yet measured variables in the process industries inevitably contain errors. If such noisy measurements are used directly as sample data for model identification, the resulting model is bound to differ greatly from the true process model. This paper proposes a fault-tolerant modeling method for multicomponent processes based on bilinear data reconciliation: the measurements are first preprocessed by bilinear data reconciliation, and the reconciled results, together with the remaining measurements, are then used as inputs for model identification, thereby reducing the influence of measurement errors on process modeling. Simulation results for the gas separation section of a plasma coal-pyrolysis-to-acetylene process demonstrate the advantages of the method.

10.
Data reconciliation plays a significant role in rectifying process data so that they satisfy the conservation laws of industrial processes. In practice, measurements are easily contaminated by various gross errors. It is therefore essential to build robust data reconciliation methods that alleviate the impact of gross errors and provide accurate data. In this paper, a novel robust estimator based on a new robust estimation function is proposed to improve the robustness of data reconciliation. First, the main robustness properties of the proposed estimator are analyzed through its objective and influence functions. Then, the effectiveness of the new robust data reconciliation method is demonstrated on a linear numerical case and a nonlinear example. Finally, the method is applied to a practical industrial evaporation process, which further demonstrates that process data can be better reconciled with the proposed robust estimator.
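A minimal sketch of the objective/influence-function analysis mentioned above, with Huber and Welsch as stand-ins for the paper's estimator: the influence function psi = d rho / dr shows how hard a residual of a given size can pull the reconciled values, and bounded or redescending psi limits the pull of gross errors.

```python
import numpy as np

def psi_ls(r):                # least squares: influence grows without bound
    return r

def psi_huber(r, d=1.345):    # bounded influence
    return np.clip(r, -d, d)

def psi_welsch(r, c=2.0):     # redescending: large residuals lose all influence
    return r * np.exp(-(r / c) ** 2)

for r in (0.5, 2.0, 10.0):
    print(f"r={r:5.1f}  LS={psi_ls(r):7.2f}  Huber={psi_huber(r):5.2f}  "
          f"Welsch={psi_welsch(r):5.2f}")
```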

11.
Reconciliation is the process of providing a consistent view of data imported from different sources. Despite some efforts reported in the literature on data reconciliation solutions with asynchronous collaboration, the challenge of reconciling data when multiple users work asynchronously over local copies of the same imported data has received less attention. In this paper, we propose AcCORD, an asynchronous collaborative data reconciliation model based on data provenance. AcCORD is innovative because it supports applications in which all users must agree on the data values, providing a single consistent view to all of them, as well as applications that allow users to disagree on the data values kept in their local copies while still promoting collaboration by sharing integration decisions. We also introduce a decision integration propagation method that keeps users from making inconsistent decisions over data items present in several sources. Further, different policies based on data provenance are proposed for resolving conflicts among multiple users' integration decisions. Our experimental analysis shows that AcCORD is efficient and effective. It performs well, and the results highlight its flexibility in generating either a single integrated view or different local views. We have also conducted interviews with end users to analyze the proposed policies and the feasibility of multiuser reconciliation. They provide insights with respect to acceptability, consistency, correctness, time-saving, and satisfaction. Copyright © 2017 John Wiley & Sons, Ltd.

12.
Data reconciliation consists of modifying noisy or unreliable data to make them consistent with a mathematical model (here, a material flow network). The conventional approach relies on least-squares minimization. Here, we use a fuzzy set-based approach, replacing Gaussian likelihood functions with fuzzy intervals and using a leximin criterion. We show that the fuzzy-set setting provides a generalized approach to the choice of estimated values that is more flexible and less dependent on often debatable probabilistic justifications. It potentially encompasses interval-based formulations and the least-squares method through appropriate choices of membership functions and aggregation operations. This paper also makes clear that data reconciliation under the fuzzy set approach is an information fusion problem, as opposed to the statistical tradition, which solves an estimation problem.
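A minimal sketch of the fuzzy-interval idea, assuming triangular membership functions and a single balance f1 + f2 = f3 (values illustrative): bisection finds the highest membership level at which the alpha-cut intervals can still satisfy the balance, which is the first step of a leximin scheme.

```python
import numpy as np

y = np.array([10.3, 5.1, 16.4])          # measured values (modes of the triangles)
spread = np.array([0.8, 0.8, 0.8])       # support half-widths of the triangles

def feasible(alpha):
    lo = y - (1 - alpha) * spread        # alpha-cut intervals shrink as alpha -> 1
    hi = y + (1 - alpha) * spread
    g_min = lo[0] + lo[1] - hi[2]        # range of f1 + f2 - f3 over the box
    g_max = hi[0] + hi[1] - lo[2]
    return g_min <= 0.0 <= g_max

a_lo, a_hi = 0.0, 1.0                    # bisection for the optimal level
for _ in range(40):
    mid = (a_lo + a_hi) / 2
    a_lo, a_hi = (mid, a_hi) if feasible(mid) else (a_lo, mid)
print(f"highest consistent membership level: {a_lo:.3f}")
```

With these numbers the balance is violated by 1.0 at the modes, and the highest consistent level works out to about 0.583; replacing the triangles with crisp intervals or Gaussian-shaped memberships recovers the interval-based and least-squares special cases the abstract mentions.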

13.
This paper introduces an application of simultaneous nonlinear data reconciliation and gross error detection for power plants, using a complex but computationally light first-principles combustion model. Element and energy balances and robust techniques introduce nonlinearity, and the resulting optimization problem is solved with nonlinear optimization. Data reconciliation improves the estimation of process variables and enables improved sensor quality control and identification of process anomalies. The approach was applied to an industrial 200 MWth fluidized bed boiler combusting wood, peat, bark, and slurry. The results indicate that the approach is valid and performs well under various process conditions. As the combustion model is generic, the method is applicable in any boiler environment.

14.
An improved MT-NT algorithm for gross error detection and rectification of measurement data   (Total citations: 1; self-citations: 0; by others: 1)
An improved MT-NT algorithm for gross error detection and rectification is presented. The improved algorithm adopts a sequential detection-and-rectification strategy, which effectively resolves the rank deficiency of the coefficient matrix that arises during gross error detection, reduces the computational load, and improves the availability and completeness of the information. A flowchart and the steps of the algorithm are given, and process measurement data reconciliation software was developed with object-oriented methods in C++. Case studies verify that the algorithm effectively detects gross errors in measurement data while avoiding the rank-deficiency problem during computation, demonstrating its practical utility.
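A minimal sketch of sequential gross-error detection with compensation, in the spirit of measurement-test (MT) methods; the flow network, noise levels, and the 1.96 test threshold are illustrative, not the paper's.

```python
import numpy as np

A = np.array([[1.0, 1.0, -1.0,  0.0],     # node 1: f1 + f2 = f3
              [0.0, 0.0,  1.0, -1.0]])    # node 2: f3 = f4
sigma = np.full(4, 0.2)
V = np.diag(sigma**2)
y = np.array([10.1, 5.0, 17.0, 15.1])     # f3 carries a +2 gross error (true: 15)

flagged = []
for _ in range(4):                        # sequential detection and compensation
    r = A @ y                             # balance residuals
    W = np.linalg.inv(A @ V @ A.T)
    d = V @ A.T @ W @ r                   # measurement adjustments
    z = np.abs(d) / np.sqrt(np.diag(V @ A.T @ W @ A @ V))
    i = int(np.argmax(z))
    if z[i] < 1.96:                       # illustrative threshold: stop when clean
        break
    e = np.zeros(4); e[i] = 1.0
    b = (e @ A.T @ W @ r) / (e @ A.T @ W @ A @ e)  # GLS estimate of the bias
    y = y - b * e                         # compensate, then re-test
    flagged.append((i, round(float(b), 2)))

x_hat = y - V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ y)
print("flagged (sensor index, bias):", flagged)   # -> [(2, 1.9)]
print("reconciled flows:", x_hat.round(2))
```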

15.
The entity reconciliation (ER) problem has aroused much interest as a research topic in today's Big Data era, full of large and open heterogeneous data sources. The problem arises when relevant information on a topic must be obtained using methods that (i) identify records representing the same real-world entity and (ii) identify records that are similar but do not correspond to the same real-world entity. ER is an operational intelligence process whereby organizations can unify different heterogeneous data sources in order to relate possible matches of non-obvious entities. Beyond the complexity introduced by the heterogeneity of data sources, the large number of records and, for instance, differences among languages must also be considered. This paper describes a systematic mapping study (SMS) of journal articles, conference papers, and workshop papers published from 2010 to 2017 that address this problem, first to understand the state of the art and then to identify gaps in current research. Eleven digital libraries were analyzed following a systematic, semi-automatic, and rigorous process that resulted in 61 primary studies. They represent a great variety of intelligent proposals aimed at solving ER. The conclusion is that most of the research addresses the operational phase rather than the design phase, and most studies have been tested on real-world data sources, many of them heterogeneous, but few are applied in industry. There is a clear trend toward techniques based on clustering/blocking and graphs, although the level of automation of the proposals is rarely mentioned in the research work.

16.
Chemical process measurement data, as characteristic information reflecting plant operating conditions, are the basis for computer-based process control, simulation, optimization, and production management. Research on data reconciliation techniques is therefore of both theoretical and practical significance for optimal plant control and management. Most existing theoretical work relies on traditional statistical tests and linearization, which have considerable limitations in practice. Building on an analysis of existing data reconciliation techniques, this paper applies a modified time-series analysis method to measurement data reconciliation. Considering both spatial and temporal redundancy and making full use of historical process data, a time-series probability model is established; for data containing random errors and data containing gross errors, reconciliation methods are discussed in detail in terms of time-series averaging and step-process models. The new method was applied to a typical atmospheric-vacuum distillation unit. The results show that the new method can detect the gross errors contained in the data; the average deviation between reconciled values and true values is very small, providing sufficient accuracy to guarantee data reliability; and the modified time-series analysis method overcomes the limitations of traditional approaches.

17.
Research on nonlinear dynamic data reconciliation methods   (Total citations: 1; self-citations: 1; by others: 0)
Dynamic data reconciliation based on orthogonal collocation on finite elements works very well for nonlinear dynamic systems and can handle problems with algebraic inequality constraints or simple upper and lower bounds on the variables. This paper studies the principle of this dynamic data reconciliation technique and how to apply it. The results show that when the number of collocation points is no less than three, it has little effect on the reconciliation accuracy, whereas the number of finite elements does have an effect; a suitable number of finite elements should therefore be determined before applying the method.
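A minimal sketch of collocation-based dynamic reconciliation, assuming the ODE dx/dt = −kx, two finite elements with cubic profiles, and shifted Gauss points as collocation points (an assumption; Radau points are common in the literature): the collocation and continuity equations are equality constraints, and the remaining degree of freedom is fitted to the noisy data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
k = 0.5                                    # assumed first-order model dx/dt = -k*x
t_meas = np.linspace(0, 4, 9)
y = np.exp(-k * t_meas) + rng.normal(0, 0.05, t_meas.size)   # noisy measurements

elements = [(0.0, 2.0), (2.0, 4.0)]        # two finite elements, one cubic each
def poly(c, t):  return np.polyval(c, t)
def dpoly(c, t): return np.polyval(np.polyder(c), t)
def split(p):    return p[:4], p[4:]

def ode_constraints(p):
    """Collocation residuals of dx/dt + k*x = 0 at three points per element,
    plus continuity of the profile across the element boundary."""
    gauss = (np.array([-0.7745966692, 0.0, 0.7745966692]) + 1) / 2
    res = []
    for c, (a, b) in zip(split(p), elements):
        pts = a + (b - a) * gauss
        res.extend(dpoly(c, pts) + k * poly(c, pts))
    c1, c2 = split(p)
    res.append(poly(c1, 2.0) - poly(c2, 2.0))
    return np.array(res)

def mismatch(p):
    c1, c2 = split(p)
    x = np.where(t_meas <= 2.0, poly(c1, t_meas), poly(c2, t_meas))
    return np.sum((x - y) ** 2)

p0 = np.zeros(8); p0[3] = p0[7] = 1.0      # start from constant profiles x = 1
sol = minimize(mismatch, p0, constraints={"type": "eq", "fun": ode_constraints})
c1, c2 = split(sol.x)
for t in (0.0, 1.0, 2.0, 3.0, 4.0):
    x = poly(c1, t) if t <= 2.0 else poly(c2, t)
    print(f"t={t:.0f}  reconciled={x:6.3f}  true={np.exp(-k * t):6.3f}")
```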

18.
Research on a process data reconciliation method based on independent streams   (Total citations: 2; self-citations: 2; by others: 0)
Data reconciliation has been widely applied across the process industries. Among existing approaches, the independent-stream method proposed by Simpson et al. is simple, fast, and accurate for multicomponent processes, but applying it directly to multicomponent data reconciliation still has many limitations. Building on an analysis of the principle and shortcomings of the Simpson method, this paper introduces constraints that balance the component flow rates against the total flow rate into the objective function and derives a new computational procedure. A case study shows that the new method overcomes the deficiencies of the Simpson method and can be applied effectively to data reconciliation of multicomponent processes without reaction nodes.
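A minimal sketch in the spirit of the method described, assuming a single non-reactive mixing node with two components (all values illustrative): component balances are reconciled together with "components sum to the total" constraints on every stream, so component and total flow rates stay mutually consistent.

```python
import numpy as np

# variables: [fA1, fA2, fA3, fB1, fB2, fB3, F1, F2, F3]
A = np.array([
    [1, 1, -1, 0, 0,  0,  0,  0,  0],   # component A: fA1 + fA2 = fA3
    [0, 0,  0, 1, 1, -1,  0,  0,  0],   # component B: fB1 + fB2 = fB3
    [1, 0,  0, 1, 0,  0, -1,  0,  0],   # stream 1: fA1 + fB1 = F1
    [0, 1,  0, 0, 1,  0,  0, -1,  0],   # stream 2: fA2 + fB2 = F2
    [0, 0,  1, 0, 0,  1,  0,  0, -1],   # stream 3: fA3 + fB3 = F3
], dtype=float)
y = np.array([6.2, 3.9, 10.4, 4.1, 1.8, 5.7, 10.0, 6.1, 15.8])
V = np.diag(np.full(9, 0.2**2))

x_hat = y - V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ y)
print("max balance violation before:", np.abs(A @ y).max().round(3))
print("max balance violation after: ", np.abs(A @ x_hat).max().round(6))
print("reconciled:", x_hat.round(2))
```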

19.
In state estimation problems, the true states often satisfy certain constraints, resulting from the physics of the problem, that need to be incorporated and satisfied during the estimation procedure. Among the various constrained nonlinear state estimation algorithms proposed in the literature, the unscented recursive nonlinear dynamic data reconciliation (URNDDR) approach presented in [1] is promising, since it incorporates constraints while maintaining the recursive nature of estimation. In this article, we propose a modified URNDDR algorithm that gives superior performance compared with the basic URNDDR. The improvements are obtained through better constraint handling and are demonstrated on representative case studies [2], [3]. In addition to this modification, an efficient strategy combining the basic unscented Kalman filter (UKF), URNDDR, and modified URNDDR is proposed for solving large-scale state estimation problems at relatively low computational cost. The utility of the proposed strategy is demonstrated by applying it to the Tennessee Eastman challenge process [4].
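A minimal sketch of the constraint-handling flavour used in URNDDR-type methods, for a scalar state with a lower bound (not the algorithm of [1], whose recursion is considerably more elaborate): sigma points that violate the bound are projected back before the statistics are recomputed, and the posterior is kept feasible as well.

```python
import numpy as np

def constrained_ukf_update(x, P, z, R, lo=0.0):
    n, kappa = 1, 2.0                         # scalar state, illustrative kappa
    s = np.sqrt((n + kappa) * P)
    sigma = np.array([x, x + s, x - s])
    sigma = np.maximum(sigma, lo)             # project infeasible sigma points
    w = np.array([kappa, 0.5, 0.5]) / (n + kappa)
    x_m = w @ sigma                           # constrained prior statistics
    P_m = w @ (sigma - x_m) ** 2
    # linear measurement h(x) = x, so the update is Kalman-like
    S = P_m + R
    K = P_m / S
    x_new = max(x_m + K * (z - x_m), lo)      # keep the posterior feasible too
    P_new = (1 - K) * P_m
    return x_new, P_new

x, P = 0.2, 1.0                               # prior mean near the bound
print(constrained_ukf_update(x, P, z=-0.3, R=0.5))
```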

20.
In butadiene production, real-time data from the DCS commonly contain random or gross errors owing to unknown disturbances and random noise. Correcting and smoothing these errors is important for improving data accuracy and reliability. The data reconciliation module described in this paper is based on linear constraints; reflecting practical requirements, it is structured into a matrix-processing part and a reconciliation part, with data structures and processing flows designed for the characteristics of industrial data. Practical application shows that, under linear constraints, the module performs the data reconciliation function as intended.
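A minimal sketch of the described two-part structure, with an illustrative four-stream network: a matrix-processing step eliminates an unmeasured stream by projecting the balances onto the left null space of its column, and a reconciliation step then applies the standard linear weighted least-squares projection.

```python
import numpy as np
from scipy.linalg import null_space

# node balances for 4 streams; streams 0, 1, 3 measured, stream 2 unmeasured
A = np.array([[1.0, -1.0, -1.0,  0.0],    # node 1: f0 = f1 + f2
              [0.0,  0.0,  1.0, -1.0]])   # node 2: f2 = f4
measured, unmeasured = [0, 1, 3], [2]

# --- matrix processing: remove unmeasured columns from the balance equations
P = null_space(A[:, unmeasured].T).T      # rows spanning the left null space
G = P @ A[:, measured]                    # reduced constraints on measured streams

# --- reconciliation: weighted least-squares projection onto G x = 0
y = np.array([10.3, 6.0, 4.6])            # measured stream values
V = np.diag(np.full(3, 0.2**2))
x_hat = y - V @ G.T @ np.linalg.solve(G @ V @ G.T, G @ y)
print("reconciled measured streams:", x_hat.round(2))

# --- back-calculate the unmeasured stream from a balance containing it
f2 = x_hat[0] - x_hat[1]                  # from node 1: f0 = f1 + f2
print("estimated unmeasured stream:", round(float(f2), 2))
```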
