Similar literature
20 similar documents retrieved (search time: 640 ms)
1.
Many problems are confronted when characterizing a type 1 diabetic patient, such as model mismatches, noisy inputs, measurement errors and large variability in the glucose profiles. In this work we introduce a new identification method based on interval analysis, where variability and model imprecision are represented by an interval model as parametric uncertainty. We propose the minimization of a composite cost index comprising: (1) the glucose envelope width predicted by the interval model, and (2) a Hausdorff-distance-based prediction error with respect to the envelope. The method is evaluated with clinical data consisting of insulin and blood glucose reference measurements from 12 patients, with four different lunchtime postprandial periods each. Following a “leave-one-day-out” cross-validation study, model prediction capabilities for validation days were encouraging (medians: relative error = 5.45%, samples predicted = 57%, prediction width = 79.1 mg/dL). Using the days with maximum patient variability as identification days resulted in improved prediction capabilities for the identified model (medians: relative error = 0.03%, samples predicted = 96.8%, prediction width = 101.3 mg/dL). The feasibility of interval model identification in the context of type 1 diabetes was demonstrated.
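The Hausdorff-distance-based prediction error with respect to an interval envelope can be illustrated with a simplified sketch. Below, the distance from each measured glucose sample to the predicted interval is taken to be zero when the sample falls inside the envelope, and the error is the worst case over the period; the function name and this point-to-interval simplification are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def envelope_prediction_error(measured, lower, upper):
    """Hausdorff-style prediction error of measurements w.r.t. an
    interval envelope: for each sample, the distance to the predicted
    interval (zero when the sample falls inside it), then the maximum
    over the postprandial period."""
    measured, lower, upper = map(np.asarray, (measured, lower, upper))
    below = np.clip(lower - measured, 0, None)   # distance when under the envelope
    above = np.clip(measured - upper, 0, None)   # distance when over the envelope
    pointwise = below + above                    # at most one term is non-zero
    return pointwise.max()

# Illustrative envelope [100, 140] mg/dL; one sample strays to 150
err = envelope_prediction_error([120, 150, 110], [100, 100, 100], [140, 140, 140])
print(err)  # 10.0
```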

2.
At the Ejby Mølle WWTP in Odense, Denmark, a software sensor predicts the ammonium and nitrite + nitrate concentrations in real time based on ammonium and redox potential measurements. The predicted ammonium concentration is used to control the length of the nitrification phase in a Biodenipho® activated sludge unit, because the software sensor has a shorter response time and better up-time than the ammonium meter. The software sensor simplifies meter service and can reduce maintenance costs. The computed nitrite + nitrate concentration is an added benefit of the software sensor. On 4 different days, series of grab samples of the mixed liquor were collected in the aeration tanks. The average difference between the ammonium concentrations in the grab samples and the predicted ammonium concentration was 0.2 mg N L−1, and the average difference between the predicted and the measured nitrite + nitrate concentration was 0.3 mg N L−1. The agreement between the predicted and the measured ammonium concentration in the grab samples was better than the agreement between the ammonium meter and the grab samples, owing to the shorter response time of the software sensor compared with the ammonium meter.

3.
This study investigated the effects of upstream stations’ flow records on the performance of artificial neural network (ANN) models for predicting daily watershed runoff. As a comparison, a multiple linear regression (MLR) analysis was also examined using various statistical indices. Five streamflow measuring stations on the Cahaba River, Alabama, were selected as case studies. Two different ANN models, a multilayer feed-forward neural network using the Levenberg–Marquardt learning algorithm (LMFF) and a radial basis function (RBF) network, were introduced in this paper. These models were then used to forecast one-day-ahead streamflows. Correlation analysis was applied to determine the architecture of each ANN model in terms of input variables. Several statistical criteria (RMSE, MAE and coefficient of correlation) were used to check the model accuracy against the observed data by means of the K-fold cross-validation method. Additionally, residual analysis was applied to the model results. The comparison revealed that using upstream records could significantly increase the accuracy of the ANN and MLR models in predicting daily streamflows (by around 30%). The comparison of the prediction accuracy of both ANN models (LMFF and RBF) and the linear regression method indicated that the ANN approaches were more accurate than MLR in predicting streamflow dynamics. The LMFF model improved the average root mean square error (RMSEave) and average mean absolute percentage error (MAPEave) of the multiple linear regression forecasts by about 18% and 21%, respectively. Although the RBF model performed better for the highest range of flow rates (flood events, RMSEave/RBF = 26.8 m³/s vs. RMSEave/LMFF = 40.2 m³/s), in general the results suggested that the LMFF method was somewhat superior to the RBF method in predicting watershed runoff (RMSE/LMFF = 18.8 m³/s vs. RMSE/RBF = 19.2 m³/s).
Finally, statistical differences between measured and predicted medians were evaluated using the Mann–Whitney test, and differences in variances were evaluated using Levene's test.
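The comparison criteria named above are straightforward to compute. The numpy-only sketch below (function names are our own) shows RMSE and MAE, plus a minimal K-fold index generator of the kind used for the cross-validation:

```python
import numpy as np

def rmse(obs, sim):
    """Root mean square error between observed and simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def mae(obs, sim):
    """Mean absolute error between observed and simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.mean(np.abs(obs - sim)))

def kfold_indices(n, k):
    """Yield (train, test) index arrays for K-fold cross-validation."""
    idx = np.arange(n)
    for fold in np.array_split(idx, k):
        yield np.setdiff1d(idx, fold), fold

obs = np.array([10., 12., 9., 14., 11., 13.])
sim = np.array([11., 11., 10., 13., 12., 12.])
print(rmse(obs, sim), mae(obs, sim))  # 1.0 1.0
```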

4.
The absolute free energy difference of binding (ΔG) between neuraminidase and its inhibitor was evaluated using the fast pulling of ligand (FPL) method over steered molecular dynamics (SMD) simulations. The metric was computed through the linear interaction approximation. The binding nature was described by the free energy differences of electrostatic and van der Waals (vdW) interactions. The findings indicate that the vdW term dominates over electrostatics in the binding process. The computed values are in good agreement with experimental data, with a correlation coefficient of R = 0.82 and an error of σΔGexp = 2.2 kcal/mol. The results were obtained using the Amber99SB-ILDN force field, in comparison with the CHARMM27 and GROMOS96 43a1 force fields. The obtained results may stimulate the search for an influenza therapy.

5.
6.
This study utilized an external logger system for onsite measurements of the computer activities of two professional groups—twelve university administrators and twelve computer-aided design (CAD) draftsmen. The computer use of each participant was recorded for 10 consecutive days—an average of 7.9 ± 1.8 workdays and 7.8 ± 1.5 workdays for administrators and draftsmen, respectively. Quantitative parameters computed from the recorded data were daily dynamic duration (DD) and static duration, daily keystrokes, mouse clicks, wheel scrolling counts, mouse movement and dragged distance, average typing and clicking rates, and average time holding down keys and mouse buttons. Significant group differences existed in the number of daily keystrokes (p < 0.0005) and mouse clicks (p < 0.0005), mouse distance moved (p < 0.0005), typing rate (p < 0.0001), daily mouse DD (p < 0.0001), and keyboard DD (p < 0.005). Both groups had significantly longer mouse DD than keyboard DD (p < 0.0001). Statistical analysis indicates that the duration of computer use for different computer tasks cannot be represented by a single formula with the same set of quantitative parameters as those associated with mouse and keyboard activities. The results of this study demonstrate that computer exposure during different tasks cannot be estimated solely by computer use duration. Quantification of onsite computer activities is necessary when determining the computer-associated risk of musculoskeletal disorders. Other significant findings are discussed.

7.
This research applies the artificial intelligence (AI) technique of the unsupervised-learning self-organizing map neural network (SOM-NN) to establish a model for selecting superior funds. The research period is from 2000 to 2010, with 100 domestic equity mutual funds as the study objects. The research used each fund's net asset value and the Taiwan Weighted Stock Index (TAIEX) returns over the 30, 60 and 90 days prior to the beginning of each month as the fund's relative performance evaluation indicators, classified by month. Finally, based on the superior rate or the average return rate, the superior funds were selected and investment transactions were simulated according to this model. The empirical results show that using the mutual fund's net asset value and the TAIEX's relative return as SOM-NN input variables not only identifies the superior funds but also has good predictive ability. Applying this model to simulate investment transactions performs better than the random trading model and the market. The experiments also found that the investment simulation with a three-month interval has the highest profitability, suggesting the model is more suitable for short-term and medium-term investment. This research can assist investors in making the right investment decisions while facing rapid changes in the financial environment.
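A self-organizing map of the kind used here can be sketched in a few lines of numpy. The 1-D map below, trained by competitive learning with a shrinking Gaussian neighbourhood, is a generic minimal SOM, not the authors' configuration; all parameter values and the synthetic two-cluster data are illustrative assumptions.

```python
import numpy as np

def train_som(data, n_units=10, epochs=200, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal 1-D self-organizing map: for each sample, find the
    best-matching unit (BMU) and pull it and its map neighbours toward
    the sample, with learning rate and neighbourhood width decaying."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(n_units, data.shape[1]))
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 0.5
        for x in rng.permutation(data):
            bmu = int(np.argmin(np.linalg.norm(w - x, axis=1)))
            dist = np.abs(np.arange(n_units) - bmu)       # distance on the map
            h = np.exp(-(dist ** 2) / (2 * sigma ** 2))   # neighbourhood kernel
            w += lr * h[:, None] * (x - w)
    return w

# Two well-separated synthetic performance clusters
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(-3, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
w = train_som(data)
bmus = [int(np.argmin(np.linalg.norm(w - x, axis=1))) for x in data]
# The two clusters should map onto disjoint sets of SOM units
print(len(set(bmus[:50]) & set(bmus[50:])) == 0)
```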

8.
This paper deals with the development of acoustic source localization algorithms for service robots working in real conditions. One of the main uses of these algorithms in a mobile robot is that the robot can localize a human operator and eventually interact with him or her by means of verbal commands. The location of a speaking operator is detected with a microphone-array-based algorithm; the localization information is passed to a navigation module, which sets up a navigation mission using knowledge of the environment map. In fact, the system we have developed aims at integrating acoustic, odometric and collision sensors with the mobile robot control architecture. Good performance with real acoustic data has been obtained using a neural network approach with spectral subtraction and a noise-robust voice activity detector. The experiments show that the average absolute localization error is about 40 cm at 0 dB and about 10 cm at 10 dB SNR. Experimental results describing mobile robot performance in a talker-following task are reported.

9.
The most practical way to obtain spatially broad and continuous measurements of surface temperature in the data-sparse cryosphere is by satellite remote sensing. The uncertainties in satellite-derived LSTs must be understood in order to develop the internally consistent, decade-scale land surface temperature (LST) records needed for climate studies. In this work we assess satellite-derived “clear-sky” LST products from the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), and LSTs derived from the Enhanced Thematic Mapper Plus (ETM+), over snow and ice on Greenland. Where possible, we compare satellite-derived LSTs with in-situ air temperature observations from Greenland Climate Network (GC-Net) automatic weather stations (AWS). We find that MODIS, ASTER and ETM+ provide reliable and consistent LSTs under clear-sky conditions and relatively flat terrain over snow and ice targets, over a range of temperatures from −40 to 0 °C. The satellite-derived LSTs agree within a relative RMS uncertainty of ~0.5 °C. The good agreement among the LSTs derived from the various satellite instruments is especially notable since different spectral channels and different retrieval algorithms are used to calculate LST from the raw satellite data. The AWS record in-situ data at a “point”, while the satellite instruments record data over an area varying in size from 57 × 57 m (ETM+) and 90 × 90 m (ASTER) to 1 × 1 km (MODIS). Surface topography and other factors contribute to the variability of LST within a pixel, so the AWS measurements may not be representative of the LST of the pixel. Without more information on the local spatial patterns of LST, the AWS LST cannot be considered valid ground truth for the satellite measurements, with an RMS uncertainty of ~2 °C. Despite the relatively large AWS-derived uncertainty, we find the LST data are characterized by high precision but uncertain absolute accuracy.

10.
The various sensory and control signals in a Heating, Ventilation and Air Conditioning (HVAC) system are closely interrelated, which gives rise to severe redundancies between the original signals. These redundancies may cripple the generalization capability of an automatic fault detection and diagnosis (AFDD) algorithm. This paper proposes an unsupervised feature selection approach and its application to AFDD in an HVAC system. Using Ensemble Rapid Centroid Estimation (ERCE), the important features are automatically selected from the original measurements based on the relative entropy between the low- and high-frequency features. The material used is the experimental HVAC fault data from the ASHRAE-1312-RP datasets, containing a total of 49 days of faults of various types and severities. The features selected using ERCE (median normalized mutual information (NMI) = 0.019) achieved the least redundancy compared to those selected using manual selection (median NMI = 0.0199), Complete Linkage (median NMI = 0.1305), Evidence Accumulation K-means (median NMI = 0.04) and Weighted Evidence Accumulation K-means (median NMI = 0.048). The effectiveness of the feature selection method is further investigated using two well-established time-sequence classification algorithms: (a) the Nonlinear Auto-Regressive Neural Network with eXogenous inputs and distributed time delays (NARX-TDNN); and (b) Hidden Markov Models (HMM). Weighted average sensitivities and specificities higher than 99% and 96% for NARX-TDNN, and higher than 98% and 86% for HMM, were observed. The proposed feature selection algorithm could potentially be applied to other model-based systems to improve fault detection performance.
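The redundancy measure used for the comparison above, normalized mutual information, can be estimated from two signals by histogram discretization. The sketch below uses a common normalization, I(X;Y)/√(H(X)·H(Y)); the bin count and the choice of normalization are our assumptions, as the abstract does not specify them.

```python
import numpy as np

def normalized_mutual_info(x, y, bins=8):
    """NMI between two signals after histogram discretization:
    I(X;Y) / sqrt(H(X) * H(Y)), in [0, 1]; 0 means no redundancy."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    mi = np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return mi / np.sqrt(hx * hy) if hx > 0 and hy > 0 else 0.0

rng = np.random.default_rng(0)
a = rng.normal(size=2000)
b = rng.normal(size=2000)          # independent of a → NMI near 0
print(round(normalized_mutual_info(a, a), 3))  # 1.0 (a signal is fully redundant with itself)
```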

11.
Most remote sensing algorithms for phytoplankton in inland waters aim at the retrieval of the pigment chlorophyll a (Chl a), as this pigment is a useful proxy for phytoplankton biomass. More recently, algorithms have been developed to quantify the pigment phycocyanin (PC), which is characteristic of cyanobacteria, a phytoplankton group of particular importance to inland water management due to its negative impact on water quality in response to eutrophication. We evaluated the accuracy of three published algorithms for the remote sensing of PC in inland waters, using an extensive database of field radiometric and pigment data obtained in the Netherlands and Spain in the period 2001–2005. The three algorithms (a baseline, a single band ratio, and a nested band ratio approach) all target the PC absorption effect observed in reflectance spectra in the 620 nm region. We evaluated the sensitivity of the algorithms to errors in reflectance measurements and investigated their performance in cyanobacteria-dominated water bodies as well as in the presence of other phytoplankton pigments. All algorithms performed best at moderate to high PC concentrations (50–200 mg m−3) and showed the most linear response to increasing PC in cyanobacteria-dominated waters. The highest errors occurred at PC < 50 mg m−3. In eutrophic waters, the presence of other pigments explained a tendency to overestimate the PC concentration. In oligotrophic waters, negative PC predictions were observed. At very high concentrations (PC > 200 mg m−3), PC underestimation by the baseline and single band ratio algorithms was attributed to a non-linear relationship between PC and absorption in the 620 nm region. The nested band ratio gave the overall best fit between predicted and measured PC. For the Spanish dataset, a stable ratio of PC to cyanobacterial Chl a was observed, suggesting that PC is indeed a good proxy for cyanobacterial biomass.
The single reflectance ratio was the only algorithm insensitive to changes in the amplitude of the reflectance spectra, which were observed as a result of different measurement methodologies.

12.
Parallel Computing, 2014, 40(5–6): 144–158
One of the main difficulties in using multi-point statistical (MPS) simulation based on annealing techniques or genetic algorithms is the excessive amount of time and memory that must be spent in order to achieve convergence. In this work we propose code optimizations and parallelization schemes for a genetic-based MPS code with the aim of speeding up the execution time. The code optimizations involve reducing cache misses in array accesses, avoiding branching instructions, and increasing the locality of the accessed data. The hybrid parallelization scheme involves a fine-grain parallelization of loops using a shared-memory programming model (OpenMP) and a coarse-grain distribution of load among several computational nodes using a distributed-memory programming model (MPI). Convergence, execution time and speed-up results are presented using 2D training images of sizes 100 × 100 × 1 and 1000 × 1000 × 1 on a distributed-shared memory supercomputing facility.

13.
Adaptive neuro-fuzzy inference system (ANFIS) models are proposed as an alternative approach to evaporation estimation for Yuvacik Dam. This study has three objectives: (1) to develop ANFIS models to estimate daily pan evaporation from measured meteorological data; (2) to compare the ANFIS model to a multiple linear regression (MLR) model; and (3) to evaluate the potential of the ANFIS model. Various combinations of daily meteorological data, namely air temperature, relative humidity, solar radiation and wind speed, are used as inputs to the ANFIS so as to evaluate the degree of effect of each of these variables on daily pan evaporation. The results of the ANFIS model are compared with the MLR model. Mean square error, average absolute relative error and coefficient of determination statistics are used as comparison criteria for the evaluation of model performance. The ANFIS model whose inputs are solar radiation, air temperature, relative humidity and wind speed gives a mean square error of 0.181 mm, an average absolute relative error of 9.590%, and a determination coefficient of 0.958 for the Yuvacik Dam station. Based on the comparisons, it was found that the ANFIS technique could be employed successfully to model the evaporation process from the available climatic data.

14.
Multi-temporal C-band SAR data (C-HH and C-VV), collected by the ERS-2 and ENVISAT satellite systems, are compared with field observations of hydrology (i.e., inundation and soil moisture) and National Wetland Inventory maps (U.S. Fish and Wildlife Service) of a large forested wetland complex adjacent to the Patuxent and Middle Patuxent Rivers, tributaries of the Chesapeake Bay. Multi-temporal C-band SAR data were shown to be capable of mapping forested wetlands and monitoring hydroperiod (i.e., temporal fluctuations in inundation and soil moisture) at the study site, and the discrimination of wetland from upland was improved with 10 m digital elevation data. Principal component analysis was used to summarize the multi-temporal SAR data sets and to isolate the dominant temporal trend in inundation and soil moisture (i.e., relative hydroperiod). Significant positive, linear correlations were found between the first principal component and percent area flooded and soil moisture. The correlation (r2) between the first principal component (PC1) of multi-temporal C-HH SAR data and average soil moisture was 0.88 (p < 0.0001) during the leaf-off season and 0.87 (p < 0.0001) during the leaf-on season, while the correlation between PC1 and average percent area inundated was 0.82 (p < 0.0001) and 0.47 (p = 0.0016) during the leaf-off and leaf-on seasons, respectively. When compared to field data, the SAR forested wetland maps identified areas that were flooded 25% of the time with 63–96% agreement and areas flooded 5% of the time with 44–89% agreement, depending on polarization and time of year. The results are encouraging and justify further studies to attempt to quantify the relative SAR-derived hydroperiod classes in terms of physical variables and also to test the application of SAR data to more diverse landscapes at a broader scale. The present evidence suggests that SAR data will significantly improve routine wooded wetland mapping.
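Extracting the dominant temporal trend from a multi-temporal stack with principal component analysis can be sketched as follows. The synthetic "wetness" signal, the array shapes and the function name are illustrative assumptions, not the SAR data themselves; the point is that PC1 of the mean-centred pixel-by-date matrix recovers the shared trend.

```python
import numpy as np

def first_principal_component(stack):
    """PC1 scores for a multi-temporal image stack.
    stack: array (n_dates, n_pixels) of backscatter values.
    Returns the per-pixel projection onto the dominant temporal component."""
    X = stack.T - stack.T.mean(axis=0)            # pixels x dates, mean-centred per date
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[0]                              # projection onto PC1

# Synthetic check: all dates share one hidden per-pixel 'wetness' signal
rng = np.random.default_rng(1)
wetness = rng.normal(size=200)
dates = np.outer([1.0, 0.8, 1.2], wetness) + 0.05 * rng.normal(size=(3, 200))
pc1 = first_principal_component(dates)
r = abs(np.corrcoef(pc1, wetness)[0, 1])
print(r > 0.95)  # PC1 recovers the dominant shared trend
```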

15.
Patient readmissions to intensive care units (ICUs) are associated with increased mortality, morbidity and costs. Current models for predicting ICU readmissions have moderate predictive value, and can utilize up to twelve variables that may be assessed at various points of the ICU inpatient stay. We postulate that greater predictive value can be achieved with fewer physiological variables, some of which can be assessed in the 24 h before discharge. A data mining approach combining fuzzy modeling with tree-search feature selection was applied to a large retrospectively collected ICU database (MIMIC II), representing data from four different ICUs at Beth Israel Deaconess Medical Center, Boston. The goal was to predict ICU readmission between 24 and 72 h after ICU discharge. Fuzzy modeling combined with sequential forward selection was able to predict readmissions with an area under the receiver operating characteristic curve (AUC) of 0.72 ± 0.04, a sensitivity of 0.68 ± 0.02 and a specificity of 0.73 ± 0.03. Variables selected as having the highest predictive power include mean heart rate, mean temperature, mean platelets, mean non-invasive arterial blood pressure (mean), mean SpO2, and mean lactic acid during the last 24 h before discharge. Collection of the six selected predictive variables is not complex in modern ICUs, and their assessment may help support the development of clinical management plans that potentially mitigate the risk of readmission.
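Sequential forward selection, the tree-search wrapper mentioned above, greedily grows the feature subset one variable at a time, keeping whichever addition scores best. The sketch below scores subsets with an ordinary least-squares R² rather than the paper's fuzzy-model AUC; that scoring function, the synthetic data and all names are stand-ins for illustration.

```python
import numpy as np

def sequential_forward_selection(X, y, score, k):
    """Greedy SFS: at each step add the feature whose inclusion
    maximizes score(X[:, subset], y), until k features are chosen."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k and remaining:
        best = max(remaining, key=lambda j: score(X[:, selected + [j]], y))
        selected.append(best)
        remaining.remove(best)
    return selected

def r2_score_linear(Xs, y):
    """Score a candidate subset by the R^2 of an ordinary least-squares fit."""
    A = np.column_stack([Xs, np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 6))
y = 2 * X[:, 1] - 3 * X[:, 4] + 0.1 * rng.normal(size=300)  # only features 1 and 4 matter
print(sequential_forward_selection(X, y, r2_score_linear, 2))  # [4, 1]
```

The strongest predictor (feature 4) is picked first because it explains the most variance on its own; the wrapper then finds that adding feature 1 improves the fit far more than any other candidate.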

16.
The implicit Colebrook–White equation has been widely used to estimate the friction factor for turbulent fluid flow in rough pipes. In this paper, a state-of-the-art review of the most currently available explicit alternatives to the Colebrook–White equation is presented. An extensive comparison test was established on a 20 × 500 grid for a wide range of relative roughness (ε/D) and Reynolds number (R) values (1 × 10⁻⁶ ≤ ε/D ≤ 5 × 10⁻²; 4 × 10³ ≤ R ≤ 10⁸), covering a large portion of the turbulent flow zone in Moody’s diagram. Based on a comprehensive error analysis, the points at which the maximum absolute and the maximum relative errors occur are identified for each pair of ε/D and R values. The best of these approximations provided friction factor estimates characterized by a mean absolute error of 5 × 10⁻⁴, a maximum absolute error of 4 × 10⁻³, a mean relative error of 1.3% and a maximum relative error of 5.8% over the entire range of ε/D and R values. For practical purposes, the complete results for the maximum and the mean relative errors versus the 20 sets of ε/D values are also presented in two comparative figures. The examination of the error properties of these approximations gives one the opportunity to evaluate, in practice, the most accurate formula among all the previous explicit models, showing its great flexibility for estimating the turbulent flow friction factor. Comparative analysis of the mean relative error profile revealed that the classification of the six best-fitted equations examined was in good agreement with the best model selection criterion claimed in the recent literature, for all performed simulations.
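The implicit Colebrook–White equation, 1/√f = −2 log₁₀(ε/(3.7D) + 2.51/(R√f)), can be solved by fixed-point iteration on x = 1/√f, and an explicit alternative can then be checked against it. The sketch below uses the well-known Swamee–Jain formula as the explicit example; that choice is ours, not necessarily one of the six best-fitted equations the paper ranks.

```python
import math

def colebrook(eps_rel, re, tol=1e-12):
    """Friction factor from the implicit Colebrook-White equation,
    solved by fixed-point iteration on x = 1/sqrt(f):
    x = -2*log10(eps_rel/3.7 + 2.51*x/re)."""
    x = 7.0  # initial guess for 1/sqrt(f)
    for _ in range(100):
        x_new = -2.0 * math.log10(eps_rel / 3.7 + 2.51 * x / re)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return 1.0 / x ** 2

def swamee_jain(eps_rel, re):
    """A well-known explicit approximation to the Colebrook-White equation."""
    return 0.25 / math.log10(eps_rel / 3.7 + 5.74 / re ** 0.9) ** 2

f_implicit = colebrook(1e-4, 1e6)
f_explicit = swamee_jain(1e-4, 1e6)
rel_err = abs(f_explicit - f_implicit) / f_implicit
print(rel_err < 0.02)  # explicit formula agrees within ~2% at this point
```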

17.
A cobaloxime ([chlorobis(dimethylglyoximeato)(triphenylphosphine)]cobalt(III), [Co(dmgH)2pph3Cl]) incorporated in a plasticized poly(vinyl chloride) membrane was used to develop a perchlorate-selective electrode. The influence of membrane composition on the electrode response was studied. The electrode exhibits a Nernstian response over the perchlorate concentration range 1.0 × 10−6 to 1 × 10−1 mol l−1, with a slope of −56.8 ± 0.7 mV per decade of concentration, a detection limit of 8.3 × 10−7 mol l−1, a wide working pH range (3–10) and a fast response time (<15 s). The electrode shows excellent selectivity towards perchlorate with respect to many common anions. The electrode was used to determine perchlorate in water and human urine.
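A slope such as the reported −56.8 mV per decade is simply the least-squares slope of electrode potential against log₁₀ of concentration over the linear range. A sketch with synthetic calibration data (the EMF values below are illustrative, constructed to give that slope, and are not the paper's measurements):

```python
import numpy as np

def nernstian_slope(conc, emf):
    """Electrode slope in mV per decade: least-squares slope of
    potential versus log10(concentration). An ideal monovalent anion
    electrode gives about -59.2 mV/decade at 25 degrees C."""
    logc = np.log10(conc)
    slope, _ = np.polyfit(logc, emf, 1)
    return slope

conc = np.array([1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1])       # mol/l
emf = np.array([340.0, 283.2, 226.4, 169.6, 112.8, 56.0])   # mV, synthetic
print(round(nernstian_slope(conc, emf), 1))  # -56.8
```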

18.
The utilization of mathematical and computational tools in pollutant assessment frameworks has become increasingly valuable owing to their capability to interpret integrated variable measurements. Artificial neural networks (ANNs) are considered dependable and inexpensive techniques for data interpretation and prediction. The self-organizing map (SOM) is an unsupervised ANN trained to classify data and effectively recognize patterns embedded in the input data space. Application of the SOM-ANN is useful for recognizing spatial patterns in contaminated zones by integrating chemical, physical, ecotoxicological and toxicokinetic variables in the identification of pollution sources and similarities in sample quality. Water (n = 11), soil (n = 38) and sediment (n = 54) samples from four areas in the Niger Delta (Nigeria) were classified based on their chemical, toxicological and physical variables using the SOM. The results obtained in this study provided a valuable assessment using the SOM's visualization capabilities, highlighted priority zones that might require additional investigation, and provide a productive pathway for effective decision making and remedial actions.

19.
Applied Ergonomics, 2011, 42(1): 71–75
The amount of sleep obtained between shifts is influenced by numerous factors, including the length of work and rest periods, the timing of the rest period relative to the endogenous circadian cycle, and personal choices about the use of non-work time. The current study utilised a real-world live-in mining environment to examine the amount of sleep obtained when access to normal domestic, family and social activities was restricted. Participants were 29 mining operators (26 male, average age 37.4 ± 6.8 years) who recorded sleep, work and fatigue information and wore an activity monitor for a cycle of seven day shifts and seven night shifts (both 12 h) followed by either seven or fourteen days off. During the two weeks of work, participants lived on-site. Total sleep time was significantly less (p < 0.01) while on-site on both day shifts (6.1 ± 1.0 h) and night shifts (5.7 ± 1.5 h) than on days off (7.4 ± 1.4 h). Further, night-shift sleep was significantly shorter than day-shift sleep (p < 0.01). Assessment of subjective fatigue ratings showed that the sleep associated with both days off and night shifts had a greater recovery value than sleep associated with day shifts (p < 0.01). While on-site, participants obtained only about 6 h of sleep, indicating that the absence of competing domestic, family and social activities did not convert to more sleep. Factors including shift start times and circadian influences appear to have been more important.

20.
Applied Ergonomics, 2011, 42(1): 91–97
The purpose of this study was to assess the sleep quality and comfort of participants diagnosed with low back pain and stiffness following sleep on individually prescribed mattresses based on dominant sleeping positions. Subjects consisted of 27 patients (females, n = 14; males, n = 13; age 44.8 ± 14.6 years, weight 174 ± 39.6 lb, height 68.3 ± 3.7 in.) referred by chiropractic physicians for the study. For the baseline (pretest) data, subjects recorded back and shoulder discomfort, sleep quality and comfort on visual analog scales (VAS) for 21 days while sleeping in their own beds. Subsequently, participants’ beds were replaced by medium-firm mattresses specifically layered with foam and latex based on the participants’ reported dominant sleeping position, and they again rated their sleep comfort and quality daily for the following 12 weeks. Analysis yielded significant differences between pre- and post-test means for all variables, and for back pain we found significant (p < 0.01) differences between the first posttest mean and week 4 and weeks 8–12, indicating progressive improvement in both back pain and stiffness while sleeping on the new mattresses. Additionally, the number of days per week of experiencing poor sleep and physical discomfort decreased significantly. It was concluded that sleep surfaces are related to sleep discomfort and that it is indeed possible to reduce pain and discomfort and to increase sleep quality in those with chronic back pain by replacing mattresses based on sleeping position.


Copyright © Beijing Qinyun Technology Development Co., Ltd.    京ICP备09084417号-23

京公网安备 11010802026262号