Similar Literature
1.
We applied our recently developed kinetic computational mutagenesis (KCM) approach [L.T. Chong, W.C. Swope, J.W. Pitera, V.S. Pande, Kinetic computational alanine scanning: application to p53 oligomerization, J. Mol. Biol. 357 (3) (2006) 1039-1049] along with the MM-GBSA approach [J. Srinivasan, T.E. Cheatham 3rd, P. Cieplak, P.A. Kollman, D.A. Case, Continuum solvent studies of the stability of DNA, RNA, and phosphoramidate-DNA helices, J. Am. Chem. Soc. 120 (37) (1998) 9401-9409; P.A. Kollman, I. Massova, C.M. Reyes, B. Kuhn, S. Huo, L.T. Chong, M. Lee, T. Lee, Y. Duan, W. Wang, O. Donini, P. Cieplak, J. Srinivasan, D.A. Case, T.E. Cheatham 3rd, Calculating structures and free energies of complex molecules: combining molecular mechanics and continuum models, Acc. Chem. Res. 33 (12) (2000) 889-897] to evaluate the effects of all possible missense mutations on dimerization of the oligomerization domain (residues 326-355) of the tumor suppressor p53. The true positive and true negative rates for KCM are comparable (within 5%) to those of MM-GBSA, although MM-GBSA is much less computationally intensive when it is applied to a single energy-minimized configuration per mutant dimer. The potential advantage of KCM is that it can be used to directly examine the kinetic effects of mutations.

2.
In silico models that predict the rate of human renal clearance for a diverse set of drugs exhibiting both active secretion and net re-absorption have been produced using three statistical approaches. Partial Least Squares (PLS) and Random Forests (RF) were used to produce continuous models, whereas Classification And Regression Trees (CART) was used only for a classification model. The best PLS and RF models can predict acids/zwitterions, bases and neutrals with approximate average fold errors of 3, 3 and 4, respectively, for an independent test set that covers oral drug-like property space. These models contain information beyond any influence of plasma protein binding on the rate of renal clearance. CART was used to generate a classification tree leading to a simple set of Renal Clearance Rules (RCR) that can be applied to man. The rules are influenced by lipophilicity and ion class and correctly predict 60% of an independent test set; this rises to 71% and 79% for drugs with renal clearances of < 0.1 ml/min/kg and > 1 ml/min/kg, respectively. As far as the authors are aware, these are the first models in the literature that predict the rate of human renal clearance and can be used to manipulate molecular properties, leading to new drugs that are less likely to fail due to renal clearance.
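The abstract states only that the Renal Clearance Rules depend on lipophilicity and ion class, so the minimal Python sketch below shows the general shape of such a rule set; the `logd` descriptor and the cutoff values are hypothetical placeholders, not the published rules.

```python
def renal_clearance_class(logd: float, ion_class: str) -> str:
    """Toy RCR-style decision rules keyed on ion class and lipophilicity.
    Thresholds are illustrative only; the published tree's cutoffs are
    not given in the abstract."""
    if ion_class in ("acid", "zwitterion"):
        return "low"          # acids/zwitterions: assumed low renal clearance rate
    if logd > 1.5:            # hypothetical lipophilicity cutoff
        return "low"          # lipophilic drugs assumed cleared mainly hepatically
    return "high" if ion_class == "base" else "moderate"

print(renal_clearance_class(logd=0.2, ion_class="base"))
```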

3.
Predictive modeling in medicine involves the development of computational models capable of analysing large amounts of data in order to predict healthcare outcomes for individual patients. Computational intelligence approaches are suitable when the data to be modelled are too complex for conventional statistical techniques to process quickly and efficiently. These advanced approaches are based on mathematical models developed especially for dealing with the uncertainty and imprecision typically found in clinical and biological datasets. This paper provides a survey of recent work on computational intelligence approaches applied to prostate cancer predictive modeling and considers the challenges that need to be addressed. In particular, the paper adopts a broad definition of computational intelligence which includes metaheuristic optimisation algorithms (also known as nature-inspired algorithms), Artificial Neural Networks, Deep Learning, fuzzy-based approaches, and hybrids of these, as well as Bayesian approaches and Markov models. Metaheuristic optimisation approaches, such as Ant Colony Optimisation, Particle Swarm Optimisation, and Artificial Immune Networks, have been utilised for optimising the performance of prostate cancer predictive models, and the suitability of these approaches is discussed.

4.
In recent years, the so-called resource pooling principle in data networks has been studied more carefully. For example, recent work on multipath Internet routing and Multipath TCP both seek to make the best possible use of multiple connecting paths between two end points. In deployments where multiple users share multiple paths, one of the first questions that comes to mind is whether to schedule packets from the users on a per-flow or a per-packet basis. In this paper we study networking scenarios in which several networks are connected to each other via multiple paths. We seek to understand how a multi-homed router should schedule packets and packet flows out towards other networks. Our primary interests are to study path utilization and to analyze the bandwidth fairness of various approaches under different traffic loads.
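To make the per-flow versus per-packet distinction concrete, here is a minimal Python sketch of the two scheduling disciplines a multi-homed router could apply; the path names and the CRC32 flow hash are illustrative assumptions, not the paper's implementation.

```python
import itertools
import zlib

PATHS = ["path_a", "path_b", "path_c"]   # hypothetical outgoing paths

def per_flow_path(src: str, dst: str, sport: int, dport: int) -> str:
    """Per-flow: hash the flow 4-tuple so every packet of a flow takes the
    same path (no reordering, but coarse-grained load balancing)."""
    key = f"{src}:{sport}->{dst}:{dport}".encode()
    return PATHS[zlib.crc32(key) % len(PATHS)]

_rr = itertools.cycle(range(len(PATHS)))

def per_packet_path() -> str:
    """Per-packet: round-robin every packet across paths, maximising
    utilisation at the cost of possible reordering."""
    return PATHS[next(_rr)]

print(per_flow_path("10.0.0.1", "10.0.1.1", 5000, 80), per_packet_path())
```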

5.
Neural Computing and Applications - The piano key weir (PK-Weir) is a hydraulic structure used in irrigation systems, built on artificial or natural channels such as rivers or...

6.
An electrochemical sensor for detecting the p53 tumor suppressor gene was constructed based on electrochemical impedance spectroscopy. A thiol-terminated probe for the p53 gene (p53-DNA) was first immobilized on a gold electrode surface by self-assembly, and the target p53-DNA was determined from the change in electron-transfer resistance before and after binding. Using a 5 mmol/L K3[Fe(CN)6]/5 mmol/L K4[Fe(CN)6] redox couple solution at pH 7.4 as the detection medium, the sensor responded linearly to target p53-DNA over the concentration range 1.0×10-8 to 1.0×10-6 mol/L, with a detection limit of 3.0×10-9 mol/L (S/N = 3). Eleven replicate measurements of 5.0×10-8 mol/L target p53-DNA gave an RSD of 3.4%. The sensor is simple to fabricate, highly sensitive, selective, label-free, and easy to operate.
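A detection limit quoted at S/N = 3, like the 3.0×10-9 mol/L above, is conventionally computed as three times the blank standard deviation divided by the calibration slope. The Python sketch below illustrates that calculation with made-up calibration numbers, not the paper's data.

```python
import numpy as np

# Hypothetical calibration: concentrations (mol/L) vs. impedance-change signal.
conc = np.array([1e-8, 5e-8, 1e-7, 5e-7, 1e-6])
signal = np.array([12.0, 55.0, 110.0, 540.0, 1080.0])   # illustrative values

slope, intercept = np.polyfit(conc, signal, 1)   # linear calibration fit
sd_blank = 4.0                                   # std. dev. of blank replicates (assumed)

lod = 3 * sd_blank / slope                       # 3-sigma (S/N = 3) detection limit
print(f"LOD ~ {lod:.1e} mol/L")
```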

7.
Binary tomography represents a special category of tomographic problems in which only two values are possible for the sought image pixels. The binary nature of the problem can potentially lead to a significant reduction in the number of view angles required for a satisfactory reconstruction, thus enabling many interesting applications. However, the limited view angles result in a severely underdetermined system of equations, which is challenging to solve. Various approaches have been proposed to address this challenge; two categories are those based on optimization and those based on algebraic iteration. However, the relative strengths, limitations, and applicable ranges of these approaches have not been clearly defined in the past. The main objective of this work is therefore to conduct a systematic comparison of approaches from each category. The comparison suggests that the approaches based on algebraic iteration offer both superior reconstruction fidelity and computational efficiency at low (two or three) view angles, and that these advantages diminish at high view angles. This work also investigates the application of regularization techniques, the selection of the optimal regularization parameter, and the use of a local search technique for binary problems. We expect the results and conclusions reported here to provide valuable guidance for the design and development of algorithms for binary tomography problems.
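For readers unfamiliar with the algebraic-iteration family, the sketch below shows a generic Kaczmarz-style ART sweep with a final binarisation step; it is a minimal illustration of the approach category, not the specific algorithms compared in the paper.

```python
import numpy as np

def binary_art(A, b, n_iter=50):
    """ART (Kaczmarz) relaxation sweeps followed by thresholding to {0,1}.
    A is the (rays x pixels) projection matrix, b the measured sums."""
    x = np.full(A.shape[1], 0.5)            # start every pixel 'undecided'
    for _ in range(n_iter):
        for i in range(A.shape[0]):         # one relaxation step per ray
            ai = A[i]
            x += (b[i] - ai @ x) / (ai @ ai) * ai
        np.clip(x, 0.0, 1.0, out=x)         # stay inside the feasible interval
    return (x > 0.5).astype(int)            # enforce the binary constraint

# Toy 2x2 image from two view angles (row sums and column sums).
A = np.array([[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1]], float)
x_true = np.array([1, 0, 0, 1])
print(binary_art(A, A @ x_true))
```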

8.
Zoran Bosnić, Igor Kononenko 《Data & Knowledge Engineering》 2008, 67(3): 504-516
The paper compares different approaches to estimate the reliability of individual predictions in regression. We compare the sensitivity-based reliability estimates developed in our previous work with four approaches found in the literature: variance of bagged models, local cross-validation, density estimation, and local modeling. By combining pairs of individual estimates, we compose a combined estimate that performs better than the individual estimates. We tested the estimates by running data from 28 domains through eight regression models: regression trees, linear regression, neural networks, bagging, support vector machines, locally weighted regression, random forests, and generalized additive model. The results demonstrate the potential of a sensitivity-based estimate, as well as the local modeling of prediction error with regression trees. Among the tested approaches, the best average performance was achieved by estimation using the bagging variance approach, which achieved the best performance with neural networks, bagging and locally weighted regression.
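The bagging-variance estimate singled out above is straightforward to reproduce: the spread of the bagged members' predictions at a query point serves as the reliability score. A minimal scikit-learn sketch, on synthetic data standing in for one of the 28 domains:

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=200)

model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=30).fit(X, y)

x_new = rng.normal(size=(1, 5))
per_model = np.array([est.predict(x_new)[0] for est in model.estimators_])

prediction = per_model.mean()
reliability = per_model.var()    # bagging variance: larger = less reliable
print(prediction, reliability)
```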

9.
Cross-sensitivity in gas sensor arrays severely affects the measurement of gas mixtures. Using the neural network toolbox of the Matlab platform, BP, radial basis function (RBF), and fuzzy (FNN) neural networks were constructed, and an array of four SnO2 gas sensors doped with different materials was used to predict the volume fractions of formaldehyde, toluene, acetone, and ethanol in mixed gases. The results show that the FNN network predicts the volume fractions of the gas mixture more accurately than the other two networks. Moreover, preprocessing the data samples with PCA and ICA helps improve the prediction accuracy of the neural networks.
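The paper's pipeline is decorrelating preprocessing followed by a trained network. As a sketch of that idea in Python (scikit-learn standing in for the Matlab toolbox, an MLP standing in for the BP network, and a synthetic cross-sensitivity matrix standing in for real sensor data):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
mix = rng.dirichlet(np.ones(4), size=300)          # volume fractions of 4 gases
W = rng.uniform(0.5, 2.0, size=(4, 4))             # synthetic cross-sensitivity matrix
readings = mix @ W + rng.normal(scale=0.01, size=(300, 4))   # 4-sensor responses

# PCA decorrelates the cross-sensitive channels before the BP-style network.
model = make_pipeline(PCA(n_components=4),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000))
model.fit(readings[:250], mix[:250])
print(model.score(readings[250:], mix[250:]))      # R^2 on held-out mixtures
```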

10.
11.
Neural networks are an effective method for solving the job-shop scheduling problem. This paper studies a neural network method for job-shop scheduling that can obtain globally optimal or near-globally-optimal feasible solutions. A new computational energy function expression incorporating all the constraints of job-shop scheduling is given, chaotic dynamics is applied to a discrete Hopfield neural network for job-shop scheduling, and an improved transiently chaotic discrete neural network scheduling method is proposed. Simulation results show that the method not only has global search capability and fast convergence but, importantly, guarantees that the steady-state output of the network is a globally optimal or near-globally-optimal feasible job-shop schedule.
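The abstract does not give the paper's energy function or update equations; for orientation, the sketch below shows the standard transiently chaotic neuron update (Chen-Aihara form) on which transiently chaotic Hopfield methods build, with typical literature parameter values rather than the paper's.

```python
import numpy as np

def tcnn_step(y, x, z, W, I, k=0.9, alpha=0.015, beta=0.01, I0=0.65, eps=0.004):
    """One transiently chaotic neural network update. The self-feedback z
    decays each step, so the dynamics anneal from chaotic search toward
    convergent Hopfield behaviour."""
    y = k * y + alpha * (W @ x + I) - z * (x - I0)   # internal state update
    x = 1.0 / (1.0 + np.exp(-y / eps))               # sigmoid output
    z = (1.0 - beta) * z                             # decaying chaotic self-feedback
    return y, x, z

n = 4                                  # toy network size
W = -np.ones((n, n)) + np.eye(n)       # illustrative constraint weights
y, x, z = np.zeros(n), np.full(n, 0.5), np.full(n, 0.08)
for _ in range(100):
    y, x, z = tcnn_step(y, x, z, W, I=np.full(n, 0.5))
print(np.round(x, 3))
```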

12.
Dimensional scaling approaches are widely used to develop multi-body human models in injury biomechanics research. Given the limited experimental data for any particular anthropometry, a validated model can be scaled to different sizes to reflect the biological variance of the population and used to characterize the human response. This paper compares two scaling approaches at the whole-body level: one is the conventional mass-based scaling approach, which assumes geometric similarity; the other is the structure-based approach, which assumes additional structural similarity by using idealized mechanical models to account for the specific anatomy and expected loading conditions. Given the use of exterior body dimensions and a uniform Young's modulus, the two approaches produced close values of the scaling factors for most body regions, with average differences of 1.5% in the force scaling factors and 13.5% in the moment scaling factors. One exception was the thoracic model, with a 19.3% difference in the deflection scaling factor. Two 6-year-old child models were generated from a baseline adult model as an application example and were evaluated using recent biomechanical data from cadaveric pediatric experiments. The scaled models predicted similar impact responses of the thorax and lower extremity, which were within the experimental corridors, and suggested further consideration of age-specific structural change of the pelvis. Towards improved scaling methods for developing biofidelic human models, this comparative analysis suggests further investigation of interior anatomical geometry and detailed biological material properties associated with the demographic range of the population.
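For reference, under the mass-based approach all scale factors follow from a single length scale; with uniform density and Young's modulus the textbook relations are as below (these are the conventional geometric-similarity formulas, not the paper's structure-based factors, which deviate from them, notably for the thorax):

```latex
\lambda_L = \left(\frac{m_{\mathrm{subject}}}{m_{\mathrm{baseline}}}\right)^{1/3},
\qquad \lambda_F = \lambda_L^{2},
\qquad \lambda_M = \lambda_L^{3},
\qquad \lambda_\delta = \lambda_L
```

Here λ_L, λ_F, λ_M, and λ_δ scale length, force, moment, and deflection respectively: with E held constant, force scales as E·L² and moment as E·L³.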

13.
Ion channels play a crucial role in the cardiovascular system, and our understanding of cardiac ion channel function has improved since their first discoveries. The flow of potassium, sodium and calcium ions across cardiomyocytes is vital for regular cardiac rhythm. Blockage of these channels delays cardiac repolarization, or tends to shorten it, and may induce arrhythmia. Detection of drug risk from channel blockade is considered essential by drug regulators. Advanced computational models can be used as an early screen for torsadogenic potential in drug candidates: candidates determined not to cause blockage are more likely to pass preclinical trials successfully and not be withdrawn later from the marketplace by the manufacturer. Several approved drugs, however, can cause a distinctive polymorphic ventricular arrhythmia known as torsade de pointes (TdP), which may lead to sudden death. The objective of the present study is to review the mechanisms and computational models used to assess the risk that a drug may cause TdP. Key points: there is strong evidence from multiple studies that blockage of the L-type calcium current reduces the risk of TdP. Blockage of sodium channels slows cardiac action potential conduction, although not all sodium-channel-blocking antiarrhythmic drugs produce a significant effect, while late sodium channel block reduces TdP. Interestingly, some drugs block the hERG potassium channel and therefore cause QT prolongation, yet are not associated with TdP. Recent studies confirm the need to study the multiple distinct ion channels responsible for cardiac disease and TdP, both to improve clinical TdP risk prediction for compound interactions and to guide drug design.

14.
The reflected gradient method and the Newton trajectory method are approaches to compute the closest unstable equilibrium point (UEP) for stability region estimation. We address the computational issues involved in these methods. We first suggest a dynamic gradient approach as a unified and extended version of these methods. Then, we show that computing the closest UEP using the dynamic gradient approach can be infeasible.

15.
This paper focuses on the generation of a three-dimensional (3D) mesh sizing function for geometry-adaptive finite element (FE) meshing. The mesh size at a point in the domain of a solid depends on the geometric complexity of the solid. This paper proposes a set of tools that are sufficient to measure the geometric complexity of a solid. Discrete skeletons of the input solid and its surfaces are generated, which are used as tools to measure the proximity between geometric entities and the feature size. The discrete skeleton and the other complexity-measuring tools generate source points that determine the size and local sizing function at certain points in the domain of the solid. An octree lattice is used to store the sizing function, as it reduces the meshing time. The size at every lattice node is calculated by interpolating the sizes of the source points. The algorithm has been tested on many industrial models, and it can be extended to consider other non-geometric factors that influence mesh size, such as physics, boundary conditions, etc.

Sandia National Laboratory is a multiprogram laboratory operated by the Sandia Corporation, a Lockheed Martin Company, for the US Department of Energy under contract DE-AC04-94AL85000.
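The abstract does not specify the interpolation scheme used at the lattice nodes; the Python sketch below illustrates the general idea with inverse-distance weighting over the source points, as one plausible choice.

```python
import numpy as np

def size_at(p, src_pts, src_sizes, power=2.0):
    """Inverse-distance interpolation of mesh size at a lattice node p from
    source points -- an illustrative weighting, not necessarily the paper's."""
    d = np.linalg.norm(src_pts - p, axis=1)
    if d.min() < 1e-12:                      # node coincides with a source point
        return float(src_sizes[d.argmin()])
    w = 1.0 / d ** power
    return float(w @ src_sizes / w.sum())

# Example: three source points with locally required element sizes.
sources = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
sizes = np.array([0.05, 0.20, 0.10])
print(size_at(np.array([0.4, 0.3, 0.0]), sources, sizes))
```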

16.
The C4 composition of Canadian mixed-grass communities is more sensitive to environmental change than that of other grasslands. Reliable methods of detecting such changes are necessary if these landscapes are to be properly managed. One approach is to use satellite remote sensing systems. Various studies have shown that the asynchronous seasonality of C3 and C4 species allows the relative abundance of each photosynthetic type to be estimated using temporal trajectory indices (TTIs) of sensor-derived normalized difference vegetation index (NDVI). In this study, we compared three approaches for predicting C4 species cover at Grasslands National Park (GNP) (Saskatchewan, Canada). TTIs for Approach I were calculated from plots of NDVI vs. day-of-year (DOY); TTIs for Approach II from plots of normalized cumulative NDVI vs. growing degree day (GDD); and TTIs for Approach III as ratios of early-season NDVI to late-season NDVI. Our analyses were conducted at two separate ecological scales. A within-community analysis used field-sampled data from upland grassland to compare techniques at sampling resolutions of 0.5, 2.5, 10, and 50 m. An across-community analysis compared techniques using a vegetation survey of the GNP region and TTIs calculated from Advanced Very High Resolution Radiometer (AVHRR) data (1 km). At both scales, TTIs related to the timing of specific phenological events were the best predictors of C4 species cover. While all techniques performed well in the within-community study, Approach III performed best; here, the predictive ability of each approach was weak at a resolution of 0.5 m but stronger at 2.5, 10, and 50 m. We also found that the optimal sampling dates for Approach III fell within a certain GDD range, which is encouraging for the a priori selection of sample dates and would remove the need for full seasonal time series. In the across-community analysis, the AVHRR-derived Approach II TTIs discriminated among grasslands of different C4 composition better than any other technique (overall accuracy = 74%), although for some C4 cover classes the predictive accuracy of this approach was low. While these results are encouraging for the use of spectral data in monitoring the C4 cover of northern prairie, various research issues remain. At the within-community level, these include (a) further attempts to define objective criteria for the a priori identification of sampling dates for Approach III, and (b) the extension of such studies to other growing seasons and community types/grassland regions. At the across-community level, they include the expansion of such techniques to a larger geographical region containing a wider range of C4 cover values and land use types (e.g. ungrazed vs. grazed grasslands).
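Approach III is simple enough to state directly; the Python sketch below computes the early-to-late NDVI ratio on a synthetic trajectory. Fixed DOY sampling dates are an illustrative simplification of the GDD-based selection discussed above.

```python
import numpy as np

def approach_iii_tti(ndvi, doy, early_doy, late_doy):
    """Approach III TTI: ratio of early-season to late-season NDVI.
    A low ratio indicates late (C4-dominated) green-up."""
    early = ndvi[np.argmin(np.abs(doy - early_doy))]
    late = ndvi[np.argmin(np.abs(doy - late_doy))]
    return early / late

doy = np.arange(120, 280, 10)                        # one season of composites
ndvi = 0.3 + 0.4 * np.exp(-((doy - 200) / 40) ** 2)  # synthetic NDVI trajectory
print(approach_iii_tti(ndvi, doy, early_doy=150, late_doy=230))
```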

17.
This paper discusses several methods of using PHP (the hypertext preprocessor) on the LAMP (Linux + Apache + MySQL + PHP) platform to encrypt files before download and decrypt them afterwards, and introduces PHP and the LAMP platform. It studies and describes methods of encrypting and decrypting data on LAMP by calling PHP built-in functions, extensions/class libraries, and the GnuPG (GNU Privacy Guard) software; it compares the advantages and disadvantages of the different methods, gives corresponding example code, and presents comparative analyses and experimental results for each method. Finally, it discusses how to choose among the encryption methods for specific environments and points out the limitations of PHP encryption.
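The paper's examples use PHP built-ins and GnuPG; as a language-swapped sketch of the same encrypt-before-download / decrypt-after-download pattern, here is a Python version using the third-party `cryptography` package's Fernet symmetric scheme.

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # keep server-side; never ship with the file
f = Fernet(key)

plaintext = b"contents of the file offered for download"   # stand-in payload
token = f.encrypt(plaintext)     # ciphertext actually sent to the user

assert f.decrypt(token) == plaintext   # post-download decryption round-trip
print(len(token), "bytes of ciphertext")
```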

18.
Presents a training algorithm for probabilistic neural networks (PNN) using the minimum classification error (MCE) criterion. A comparison is made between the MCE training scheme and the widely used maximum likelihood (ML) learning on a cloud classification problem using satellite imagery data.

19.
Following the computing-design philosophy that the best way to reduce power consumption and increase energy efficiency is to reduce waste, we propose an architecture with a very simple, ready implementation using an NComputing device that allows multiple users to share a single computer. This intuitively saves energy and space as well as cost. In this paper, we propose a simple and realistic NComputing architecture to study the energy- and power-efficient consumption of desktop computer systems using the NComputing device. We also propose new approaches to estimate the reliability of k-out-of-n systems based on the delta method. A k-out-of-n system consisting of n subsystems works if and only if at least k of the n subsystems work. More specifically, we develop approaches to obtain the reliability estimate for k-out-of-n systems composed of n independent and identically distributed subsystems, where each subsystem (or energy-efficient usage application) can be assumed to follow a two-parameter exponential lifetime distribution. The detailed derivations of the reliability estimate of k-out-of-n systems based on the bias-corrected estimator (the delta method), the uniformly minimum variance unbiased estimator (UMVUE) and the maximum likelihood estimator (MLE) are discussed. An energy-management NComputing application illustrates the reliability results in terms of the energy consumption of a computer system with a quad-core CPU, 8 GB of RAM, and a GeForce 9800GX-2 graphics card performing various complex applications. The estimated reliability values based on the UMVUE and the delta method differ only slightly. The UMVUE of reliability for a complex system is often much more difficult to obtain, if not impossible, so the delta method appears to be a simple and better approach for estimating the reliability of complex systems. The results also show that, in practice, the NComputing architecture improves both energy cost savings and energy-efficient living spaces.
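The k-out-of-n structure function described above has a closed binomial form that is easy to evaluate; the Python sketch below plugs in a two-parameter exponential subsystem reliability. Note this is a plain plug-in of assumed parameter values, not the paper's delta-method or UMVUE derivations.

```python
from math import comb, exp

def k_out_of_n_reliability(n: int, k: int, p: float) -> float:
    """System works iff at least k of n i.i.d. subsystems work:
    R = sum_{i=k}^{n} C(n,i) * p^i * (1-p)^(n-i)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Subsystem reliability at mission time t under a two-parameter exponential
# lifetime with location mu and scale theta (illustrative values only).
mu, theta, t = 100.0, 5000.0, 2000.0
p = exp(-(t - mu) / theta) if t > mu else 1.0

print(k_out_of_n_reliability(n=5, k=3, p=p))
```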

20.
The worst value of the quantile of the distribution of the linear loss function which depends on the uncertain stochastic parameters was compared with the maximum value of this function. The stochastic uncertainty is modelled by distributions from the Barmish class.
