A neural network study on the dynamic identification of a fermentation system
Authors: Mei-J. Syu, Cheng-L. Hou
Affiliation: Department of Chemical Engineering, National Cheng Kung University, Tainan, Taiwan 70101, R.O.C. syumj@alpha2.dec.ncku.edu.tw, TW
Abstract: The objective of this paper is to propose neural networks for the dynamic identification and prediction of a fermentation system that produces mainly 2,3-butanediol (2,3-BDL). The metabolic products of the fermentation (acetic acid, acetoin, ethanol, and 2,3-BDL) were measured on-line via a mass spectrometer modified by the insertion of a dimethylvinylsilicone membrane probe. The data measured at different sampling times were included as the input and output nodes of the network at different learning batches. A fermentation system is usually nonlinear and dynamic in nature. Measured fermentation data arising from the complex metabolic pathways are often difficult to capture entirely in a static process model; therefore, a dynamic model was suggested instead. In this work, the neural networks were given a dynamic learning and prediction process that moved batchwise along the time sequence. In other words, a two-dimensional moving-window scheme (number of input nodes by number of training data) was proposed for reading in new data while forgetting part of the old data. The proper size of the network, including the proper number of input/output nodes, was determined by training with the real-time fermentation data. Different numbers of hidden nodes were tested, considering both learning performance and computation efficiency. The data size for each learning batch was determined. The effects of the learning factors, such as the learning coefficient η and the momentum term coefficient α, were also discussed. The effect of different dynamic learning intervals, with different starting points and the same ending point, on both the learning and prediction performance was studied. The effect of different dynamic learning intervals, with the same starting point and different ending points, was also investigated. The size of the data sampling interval was also discussed. The performance of four different types of transfer functions, x/(1+|x|), sgn(x)*x^2/(1+x^2), 2/(1+e^(-x))-1, and 1/(1+e^(-x)), was compared. A scaling factor b was added to the transfer function, and the effect of this factor on the learning was also evaluated. The prediction results from time-delayed neural networks were also studied.
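The abstract describes a batchwise moving-window training scheme with a learning coefficient η, a momentum term α, and a scaled transfer function. The following is a minimal illustrative sketch of that general idea, not the authors' code: the network size, window size, series, and all names (e.g. MovingWindowNet, moving_windows, the scaled logistic 1/(1+e^(-b*x))) are assumptions chosen for illustration.

```python
# Illustrative sketch only: batchwise moving-window training of a one-hidden-layer
# network with learning coefficient eta, momentum alpha, and a scaled logistic
# transfer function 1/(1+exp(-b*x)). All names and settings are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def transfer(x, b=1.0):
    """Scaled logistic transfer function 1/(1+e^(-b*x))."""
    return 1.0 / (1.0 + np.exp(-b * x))

def transfer_deriv(y, b=1.0):
    """Derivative written in terms of the output y = transfer(x, b)."""
    return b * y * (1.0 - y)

class MovingWindowNet:
    def __init__(self, n_in, n_hidden, n_out, eta=0.1, alpha=0.9, b=1.0):
        self.eta, self.alpha, self.b = eta, alpha, b
        self.W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
        self.W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))
        self.dW1 = np.zeros_like(self.W1)   # previous updates kept for momentum
        self.dW2 = np.zeros_like(self.W2)

    def forward(self, X):
        self.H = transfer(X @ self.W1, self.b)       # hidden-layer activations
        self.Y = transfer(self.H @ self.W2, self.b)  # network outputs
        return self.Y

    def train_batch(self, X, T, epochs=200):
        """One learning batch: backpropagation with momentum on window (X, T)."""
        for _ in range(epochs):
            Y = self.forward(X)
            dY = (Y - T) * transfer_deriv(Y, self.b)             # output deltas
            dH = (dY @ self.W2.T) * transfer_deriv(self.H, self.b)
            self.dW2 = -self.eta * self.H.T @ dY + self.alpha * self.dW2
            self.dW1 = -self.eta * X.T @ dH + self.alpha * self.dW1
            self.W2 += self.dW2
            self.W1 += self.dW1

def moving_windows(series, n_in, n_out, window):
    """Yield (X, T) training windows that slide along the time sequence,
    reading in new samples while dropping the oldest ones."""
    patterns = [(series[t - n_in:t], series[t:t + n_out])
                for t in range(n_in, len(series) - n_out + 1)]
    for start in range(0, len(patterns) - window + 1):
        batch = patterns[start:start + window]
        X = np.array([p[0] for p in batch])
        T = np.array([p[1] for p in batch])
        yield X, T

# Usage: train on each window in turn, then predict one step beyond the series.
series = np.sin(np.linspace(0, 6 * np.pi, 120)) * 0.5 + 0.5  # stand-in for measured data
net = MovingWindowNet(n_in=4, n_hidden=6, n_out=1, eta=0.2, alpha=0.8, b=1.0)
for X, T in moving_windows(series, n_in=4, n_out=1, window=20):
    net.train_batch(X, T)
pred = net.forward(series[-4:].reshape(1, -1))
print("one-step-ahead prediction:", pred.item())
```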
Keywords:
This document is indexed by SpringerLink and other databases.