Similar documents
20 similar documents found (search time: 15 ms)
1.
Integral-method dynamic data reconciliation is simple, fast, and well suited to online application. This paper studies the principle of the integral method for dynamic data reconciliation and how to apply it. The results show that the method does not require a state-space model and can fully exploit the temporal redundancy available along the entire time axis; however, the interval length used in the integral method affects reconciliation accuracy, so a suitable interval length should be determined before applying the method. The integral method was applied to data reconciliation of the quasi-steady-state process of an atmospheric-vacuum distillation unit; the computed results show that its accuracy exceeds that of steady-state data reconciliation.

2.
Data models capture the structure and characteristic properties of data entities, e.g., in terms of a database schema or an ontology. They are the backbone of diverse applications, reaching from information integration, through peer-to-peer systems and electronic commerce to social networking. Many of these applications involve models of diverse data sources. Effective utilisation and evolution of data models, therefore, calls for matching techniques that generate correspondences between their elements. Various such matching tools have been developed in the past. Yet, their results are often incomplete or erroneous, and thus need to be reconciled, i.e., validated by an expert. This paper analyses the reconciliation process in the presence of large collections of data models, where the network induced by generated correspondences shall meet consistency expectations in terms of integrity constraints. We specifically focus on how to handle data models that show some internal structure and potentially differ in terms of their assumed level of abstraction. We argue that such a setting calls for a probabilistic model of integrity constraints, for which satisfaction is preferred, but not required. In this work, we present a model for probabilistic constraints that enables reasoning on the correctness of individual correspondences within a network of data models, in order to guide an expert in the validation process. To support pay-as-you-go reconciliation, we also show how to construct a set of high-quality correspondences, even if an expert validates only a subset of all generated correspondences. We demonstrate the efficiency of our techniques for real-world datasets comprising database schemas and ontologies from various application domains.

3.
Research on a process data reconciliation method based on independent streams   (total citations: 2, self-citations: 2, other citations: 0)
Data reconciliation has been widely applied across the process industries. Among existing approaches, the independent-stream method proposed by Simpson et al. is simple, fast, and accurate for multicomponent processes, but applying it directly to multicomponent data reconciliation still has many limitations. Building on an analysis of the principle and shortcomings of Simpson's method, this paper introduces into the objective function a constraint balancing component flow rates against total flow rates, yielding a new computational procedure. A case study shows that the new method overcomes the defects of Simpson's method and can effectively reconcile data for multicomponent processes without reaction nodes.

4.
Dynamic data reconciliation: Alternative to Kalman filter   (total citations: 2, self-citations: 0, other citations: 2)
Process measurements are often corrupted with varying degrees of noise. Measurement noise undermines the performance of process monitoring and control systems. To reduce the impact of measurement noise, exponentially-weighted moving average and moving average filters are commonly used. These filters have good performance for processes under steady state or with slow dynamics. For processes with significant dynamics, more sophisticated filters, such as model-based filters, have to be used. The Kalman filter is a well known model-based filter that has been widely used in the aerospace industry. This paper discusses another model-based filter, the dynamic data reconciliation (DDR) filter. Both the Kalman and the DDR filters adhere to the same basic principle of using information from both measurements and models to provide a more reliable representation of the current state of the process. However, the DDR filter can more easily be incorporated in a wide variety of model structures and is easier to understand and implement. Simulation results for a binary distillation column with four controlled variables showed that the DDR filters had equivalent performance to the Kalman filter in dealing with both white and autocorrelated noise.
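The shared principle of the two filters, fusing a model prediction with a measurement by weighting each by its confidence, can be sketched as a scalar weighted-least-squares update. This is a minimal illustration of the idea, not the paper's multivariable implementation; the function name and variables are our own:

```python
def ddr_update(x_pred, z, q_var, r_var):
    """Fuse a model prediction x_pred (variance q_var) with a
    measurement z (variance r_var) by minimizing
    (x - z)^2 / r_var + (x - x_pred)^2 / q_var,
    the weighted-least-squares core shared by the DDR and Kalman updates."""
    w_model = 1.0 / q_var   # confidence in the model prediction
    w_meas = 1.0 / r_var    # confidence in the measurement
    return (w_model * x_pred + w_meas * z) / (w_model + w_meas)
```

With equal variances the estimate is the midpoint of prediction and measurement; a noisier measurement pulls the estimate toward the model.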

5.
In butadiene production, real-time data from the DCS commonly contain random or gross errors caused by unknown disturbances and measurement noise. Correcting and smoothing these errors matters greatly for improving data accuracy and reliability. The data reconciliation module described here operates under linear constraints; reflecting practical requirements, its structure is divided into a matrix-processing part and a reconciliation part, with data structures and processing flows designed around the characteristics of industrial data. In practical use, the module performs data reconciliation correctly under linear conditions.

6.
We introduce a form of spatiotemporal reasoning that uses homogeneous representations of time and the three dimensions of space. The basis of our approach is Allen's temporal logic on the one hand and general constraint satisfaction algorithms on the other, where we present a new view of constraint reasoning to cope with the affordances of spatiotemporal reasoning as introduced here. As a realization for constraint reasoning, we suggest a massively parallel implementation in the form of Boltzmann machines.

7.
We present an effective approach to performing data flow analysis in parallel and identify three types of parallelism inherent in this solution process: independent-problem parallelism, separate-unit parallelism and algorithmic parallelism. We present our investigations of Fortran procedures from the Perfect Benchmarks and netlib libraries, which reveal structural characteristics of program flow graphs that are amenable to algorithmic parallelism. Previously, the utility of algorithmic parallelism had been explored using our parallel hybrid algorithm in the context of solving the Reaching Definitions problem for Fortran procedures. Here we present new refinements that optimize performance by increasing the grain size of the parallelism, to improve communication on distributed-memory machines. The empirical performance of our optimized and unoptimized hybrid algorithms for Reaching Definitions are compared on this large data set using an iPSC/2. Our empirical findings qualitatively validate the usefulness of algorithmic parallelism. This research was supported, in part, by National Science Foundation grants CCR-8920078 and CCR-9023628-1, 2/5. An earlier version of this paper appears in Proceedings of the 6th ACM International Conference on Supercomputing (Washington, D.C., July 1992), pp. 236-247.

8.
Data envelopment analysis (DEA) requires input and output data to be precisely known. This is not always the case in real applications. Sensitivity analysis of the additive model in DEA is studied in this paper while inputs and outputs are symmetric triangular fuzzy numbers. Sufficient conditions for simultaneous change of all outputs and inputs of an efficient decision-making unit (DMU) which preserves efficiency are established. Two kinds of changes on inputs and outputs are considered. For the first state, changes are exerted on the core and margin of symmetric triangular fuzzy numbers so that the value of inputs increase and the value of outputs decrease. In the second state, a non-negative symmetric triangular fuzzy number is subtracted from outputs to decrease outputs and it is added to inputs to increase inputs. A numerical illustration is provided.

9.
The theoretical aspects of statistical inference with imprecise data, with a focus on random sets, are considered. In the setting of coarse data analysis, the imprecision and randomness in observed data are exhibited, and the relationship between probability and other types of uncertainty, such as belief functions and possibility measures, is analyzed. Coarsening schemes are viewed as models for perception-based information gathering processes in which random fuzzy sets appear naturally. As an implication, fuzzy statistics is statistics with fuzzy data. That is, fuzzy sets are a new type of data and as such, complementary to statistical analysis in the sense that they enlarge the domain of applications of statistical science.

10.
Application of steady-state online data reconciliation to a refinery gas separation unit   (total citations: 2, self-citations: 0, other citations: 2)
This paper systematically studies online data reconciliation for steady-state processes. The implementation uses the mean method for steady-state detection, the correction-factor method for gross-error detection and identification, and a two-level transformation for data classification. Online steady-state data reconciliation software was developed, and the reconciled data were used as inputs for online product-quality prediction in a refinery gas separation system. The application shows that product-quality predictions based on reconciled inputs are more stable and closer to reality than those based on raw measurements.
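The mean method for steady-state detection compares the means of consecutive data windows against their sampling variability. A minimal sketch of that idea follows; the window size, tolerance, and function name are our own illustrative choices, not taken from the paper:

```python
import numpy as np

def steady_by_means(x, win=10, tol=1.0):
    """Mean-method steady-state test: flag the process as steady when
    the means of the two most recent windows of length `win` differ
    by at most `tol` standard errors of the difference."""
    a = np.asarray(x[-2 * win:-win], dtype=float)
    b = np.asarray(x[-win:], dtype=float)
    se = np.sqrt(np.var(a, ddof=1) / win + np.var(b, ddof=1) / win)
    return abs(a.mean() - b.mean()) <= tol * se
```

A flat signal passes the test; a steadily ramping signal fails it, since its window means drift apart faster than the within-window scatter explains.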

11.
Privacy issues represent a longstanding problem nowadays. Measures such as k-anonymity, l-diversity and t-closeness are among the most used ways to protect released data. This work proposes to extend these three measures when the data are protected using fuzzy sets instead of intervals or representative elements. The proposed approach is then tested using the Energy Information Authority data set and different fuzzy partition methods. Results show improved data protection when the data are encoded using fuzzy sets.

12.
An effective way to improve the accuracy of process data reconciliation for an atmospheric-vacuum distillation unit is to increase the redundancy of the measured data. After analyzing the factors that affect reconciliation of measured data, this paper builds a time-series mean-value model and proposes a new method that uses time-series analysis to increase the temporal redundancy of the reconciliation process, improving reconciled-data accuracy without adding measurement points. Using actual measurements from an atmospheric-vacuum distillation unit, the study examines the method's gross-error detection capability and how reconciliation accuracy depends on the time-window size and the magnitude of gross errors, and discusses the method's feasibility and practicality. The results show that the time-series method can reconcile real-time measurements for this unit quickly and effectively.

13.
To avoid the overly complex programming and computation of traditional data reconciliation for carbon-one (C1) processes, this paper proposes a new approach based on the MATLAB Optimization Toolbox. A data reconciliation model for the coking C1 process is built from material-balance relations, and the toolbox's powerful nonlinear optimization routines are used to implement and solve it. Application to actual data from a coking C1 process shows that the method is effective, and the modeling and solution approach can also serve as a reference for data reconciliation of similar processes.
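The pattern described, weighted least-squares adjustment of measurements subject to material-balance constraints, solved with an off-the-shelf optimizer, can be sketched in a Python/SciPy analogue of the MATLAB approach. The single balance node, flow values, and standard deviations below are all invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical single node: feed = product + offgas, i.e. A @ x = 0.
A = np.array([[1.0, -1.0, -1.0]])
meas = np.array([100.0, 60.0, 38.0])   # raw flows violate the balance by 2
sigma = np.array([2.0, 1.5, 1.0])      # assumed measurement std deviations

res = minimize(
    lambda x: np.sum(((x - meas) / sigma) ** 2),        # weighted least squares
    meas,                                               # start from raw data
    constraints={"type": "eq", "fun": lambda x: A @ x}, # material balance
)
x_rec = res.x  # reconciled flows now close the balance exactly
```

Measurements with larger assumed error (larger sigma) absorb more of the adjustment, which is the point of the weighting.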

14.
15.
Because traditional grey relational fault-diagnosis algorithms cannot handle large data samples, a QAR (quick access recorder) fault-data detection algorithm based on fuzzy segmentation and grey relational analysis is proposed. Since QAR data are high-dimensional time series with complex structure, fuzzy segmentation is used to divide them into non-overlapping subsequences, from which the fault sequences to be examined are extracted. The grey relational degree between each candidate sequence and the standard sequences is then computed, and the fault pattern is identified, and the diagnosis determined, by the maximum-relational-degree principle. Simulation results show that the algorithm's predictions agree closely with reality and that it is simple, reliable, and highly accurate.
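The grey relational degree used in the matching step has a standard closed form; a sketch follows, using the customary distinguishing coefficient ρ = 0.5 (the function name and the identical-sequence convention are our own):

```python
import numpy as np

def grey_relational_grade(ref, seq, rho=0.5):
    """Mean grey relational coefficient between a reference series and a
    candidate series; larger values indicate greater similarity."""
    delta = np.abs(np.asarray(ref, float) - np.asarray(seq, float))
    d_min, d_max = delta.min(), delta.max()
    if d_max == 0.0:
        return 1.0  # sequences coincide everywhere
    coeff = (d_min + rho * d_max) / (delta + rho * d_max)
    return float(coeff.mean())
```

Diagnosis by the maximum-relational-degree principle then amounts to computing the grade of the observed sequence against each standard fault pattern and picking the pattern with the largest grade.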

16.
The method for obtaining the fuzzy least squares estimators with the help of the extension principle in fuzzy set theory is proposed. The membership functions of fuzzy least squares estimators will be constructed according to the usual least squares estimators. In order to obtain the membership value of any given value taken from the fuzzy least squares estimator, optimization problems have to be solved. We also provide the methodology for evaluating the predicted fuzzy output from the given fuzzy input data.
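A concrete special case of propagating fuzzy data through the least squares estimator: when the inputs are crisp and only the responses are interval-valued (one α-cut of fuzzy responses), the OLS slope is a linear function of y, so its exact range is attained at interval endpoints and no general optimization is needed. The function below is our illustrative sketch of that case, not the paper's general extension-principle construction:

```python
import numpy as np

def slope_interval(x, y_lo, y_hi):
    """Exact interval of the OLS slope when each response y_i may lie
    anywhere in [y_lo[i], y_hi[i]].  The slope equals sum(c_i * y_i)
    with fixed coefficients c_i, so its extremes occur at endpoints."""
    x = np.asarray(x, float)
    c = (x - x.mean()) / np.sum((x - x.mean()) ** 2)
    lo = np.sum(np.where(c >= 0, c * np.asarray(y_lo, float),
                         c * np.asarray(y_hi, float)))
    hi = np.sum(np.where(c >= 0, c * np.asarray(y_hi, float),
                         c * np.asarray(y_lo, float)))
    return float(lo), float(hi)
```

Repeating this over a family of α-cuts of triangular fuzzy responses traces out the membership function of the slope estimator.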

17.
IDEF1X has provided a formal framework for consistent modeling of the data necessary for the integration of various functional areas in computer integrated manufacturing (CIM). The basic idea has been extensively applied in current manufacturing industry. Imprecise and uncertain information, however, is generally involved in many engineering activities. This is especially true for constructing intelligent manufacturing systems. This paper provides extensions to IDEF1X that make it possible to represent fuzzy information.

18.
We address the problem of the representation of resemblances involved in analogical reasoning. We use fuzzy relations to compare situations. We provide constructive methods to adapt the solution of an already solved situation to a similar new situation according to the degree of resemblance between these two situations. We give a general definition of analogical scheme which can be considered from a more or less constrained point of view.

19.
An anomaly detection method based on fuzzy data mining and genetic algorithms   (total citations: 4, self-citations: 0, other citations: 4)
Constructing suitable membership functions is a difficult step when applying fuzzy data mining to intrusion detection. To address this problem, a method is proposed that uses a genetic algorithm to optimize the membership-function parameters for anomaly detection. The parameters are assembled into an ordered parameter set and encoded as genetic individuals; by embedding fuzzy data mining within the evolutionary loop, the optimal parameter set can be found. With this parameter set, normal and anomalous system states can be separated as cleanly as possible during real-time detection, improving the accuracy of anomaly detection. Finally, experiments on anomaly detection in network traffic verify the feasibility of the method.

20.
A partial least squares method suited to processing traditional Chinese medicine fingerprint data   (total citations: 3, self-citations: 3, other citations: 3)
Traditional Chinese medicine fingerprint data are characterized by a very large number of variables and a small number of samples. This paper uses Lagrange-multiplier extremization to derive a new partial least squares algorithm suited to processing such data. The results show that, compared with the traditional partial least squares algorithm, the new algorithm saves storage, requires less computation, and runs faster when processing fingerprint data, and is therefore more computationally efficient.
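For context on the wide-data setting (variables ≫ samples) that motivates the paper, one component of the generic NIPALS-style PLS update for a univariate response can be sketched in a few lines. This is the textbook construction, not the paper's Lagrangian derivation:

```python
import numpy as np

def pls_one_component(X, y):
    """One PLS component for (centred) data X (n samples x p variables)
    and univariate response y: weight w along the covariance direction,
    score t, X-loading p, and y-loading q."""
    w = X.T @ y                 # covariance direction in variable space
    w = w / np.linalg.norm(w)   # unit weight vector
    t = X @ w                   # sample scores
    p = X.T @ t / (t @ t)       # X-loadings
    q = (y @ t) / (t @ t)       # y-loading; one-component fit is q * t
    return w, t, p, q
```

When y lies in a single latent direction of X, the one-component fit q * t reproduces y exactly; further components are fit to the deflated residuals X - t pᵀ.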


Copyright © Beijing Qinyun Technology Development Co., Ltd.    京ICP备09084417号-23

京公网安备 11010802026262号