Similar Documents
20 similar documents found (search time: 31 ms).
1.
The reliability and limits of solutions in static structural analysis depend on the accuracy of the curvature and deflection calculations. Even if the material model is close to the actual material behavior, physically unrealistic deflections or divergence problems are unavoidable if an appropriate fundamental kinematic theory is not chosen. Moreover, accurate deflection calculation plays an important role in ultimate strength analysis, where in-plane stresses are considered. A more powerful method is therefore needed for reliable deflection calculation and modeling. For this purpose, a new approach was developed by coupling elasto-plastic material behavior with an exact general planar kinematic analysis. The deflection is computed precisely without geometric assumptions and without solving the differential equation of the deflection curve. An analytical finite-strain solution was derived for an elasto-plastic prismatic or non-prismatic beam with a rectangular cross-section under a uniform moment distribution. A comparison of the analytical results with those from the Abaqus FEM package shows close agreement.
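The abstract does not reproduce the paper's finite-strain solution; as a point of reference, the sketch below evaluates the standard textbook moment-curvature relation for an elastic-perfectly-plastic rectangular cross-section under pure bending (small-strain Euler-Bernoulli kinematics), which is the baseline such advanced formulations are usually compared against. The section dimensions and material constants are generic placeholders, not values from the paper.

```python
import numpy as np

def moment_curvature_rect(kappa, b, h, E, fy):
    """Moment for an elastic-perfectly-plastic rectangular section at curvature kappa.

    Classic small-strain result: linear up to the yield curvature kappa_y, then
    M = 1.5 * My * (1 - (kappa_y / kappa)**2 / 3), approaching the plastic
    moment Mp = 1.5 * My as kappa grows large.
    """
    I = b * h**3 / 12.0            # second moment of area
    My = fy * I / (h / 2.0)        # first-yield moment
    kappa_y = fy / (E * h / 2.0)   # curvature at first yield
    kappa = np.asarray(kappa, dtype=float)
    elastic = E * I * kappa
    plastic = 1.5 * My * (1.0 - (kappa_y / np.maximum(kappa, kappa_y))**2 / 3.0)
    return np.where(kappa <= kappa_y, elastic, plastic)

# Example: 100 mm x 200 mm section, S235-like steel (units: N, mm)
b, h, E, fy = 100.0, 200.0, 210e3, 235.0
kappas = np.linspace(1e-7, 1e-4, 5)
print(np.round(moment_curvature_rect(kappas, b, h, E, fy) / 1e6, 2))  # kN*m
```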

2.
This article presents the finding that when the components of a system follow a Weibull or inverse Weibull distribution with a common shape parameter, the system itself can be represented by a Weibull or inverse Weibull mixture model that allows negative weights. An example illustrates that the proposed mixture model can also be used to approximate the reliability behaviour of consecutive-k-out-of-n systems, and shows the data analysis procedure when the parameters of the component life distributions are either known or unknown.
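As a hedged illustration of the mixture representation (not the article's full derivation), the sketch below checks numerically that the survival function of a two-component parallel system of Weibull components with a common shape parameter equals a Weibull "mixture" with weights (1, 1, -1), since the cross term is itself Weibull with the same shape. The parameter values are arbitrary.

```python
import numpy as np

def weibull_sf(t, shape, scale):
    """Weibull survival (reliability) function."""
    return np.exp(-(t / scale) ** shape)

beta = 2.0                 # common shape parameter
eta1, eta2 = 100.0, 150.0  # component scale parameters
t = np.linspace(0.0, 400.0, 9)

# Parallel (1-out-of-2) system: R_sys = R1 + R2 - R1*R2
r_direct = (weibull_sf(t, beta, eta1) + weibull_sf(t, beta, eta2)
            - weibull_sf(t, beta, eta1) * weibull_sf(t, beta, eta2))

# Mixture form with a negative weight: the product R1*R2 is again Weibull
# with shape beta and scale eta12 = (eta1**-beta + eta2**-beta) ** (-1/beta).
eta12 = (eta1 ** (-beta) + eta2 ** (-beta)) ** (-1.0 / beta)
r_mixture = (1.0 * weibull_sf(t, beta, eta1)
             + 1.0 * weibull_sf(t, beta, eta2)
             - 1.0 * weibull_sf(t, beta, eta12))

print(np.allclose(r_direct, r_mixture))  # True
```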

3.
Software reliability modeling is of great significance for improving software quality and managing the software development process. However, existing methods cannot accurately model software reliability improvement behavior: single-model methods rely on restrictive assumptions, and combination models cannot adequately handle model uncertainty. In this article, we propose a Bayesian model averaging (BMA) method for software reliability modeling. First, existing reliability models are selected as candidates, and Bayesian theory is used to obtain the posterior probability of each model. The posterior probabilities are then used as weights to average the candidate models. Both the Markov chain Monte Carlo (MCMC) algorithm and the expectation-maximization (EM) algorithm are used to evaluate the candidate models' posterior probabilities and are compared. The results show that the BMA method has superior performance in software reliability modeling, and that the MCMC algorithm outperforms the EM algorithm when estimating the parameters of the BMA method.
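The abstract gives no formulas, so the sketch below only illustrates the generic BMA step it describes: turning per-model evidence into posterior weights and averaging the candidate predictions. Here the marginal likelihood of each candidate is approximated by BIC (a common shortcut, not necessarily the authors' MCMC/EM procedure), and the candidate model names and all numbers are hypothetical.

```python
import numpy as np

def bma_weights_from_bic(log_likelihoods, n_params, n_obs, prior=None):
    """Approximate posterior model probabilities via BIC with equal (or given) priors."""
    log_lik = np.asarray(log_likelihoods, dtype=float)
    k = np.asarray(n_params, dtype=float)
    bic = -2.0 * log_lik + k * np.log(n_obs)
    log_evidence = -0.5 * bic
    if prior is None:
        prior = np.full(log_lik.shape, 1.0 / log_lik.size)
    log_post = log_evidence + np.log(prior)
    log_post -= log_post.max()              # stabilise the exponentials
    w = np.exp(log_post)
    return w / w.sum()

# Hypothetical candidates (e.g. Goel-Okumoto, delayed S-shaped, Musa-Okumoto)
log_lik = [-120.3, -118.9, -121.5]          # maximised log-likelihoods
n_params = [2, 2, 2]
n_obs = 60                                  # number of failure observations
weights = bma_weights_from_bic(log_lik, n_params, n_obs)

# Per-model predictions of cumulative failures at a future time, averaged by weight
preds = np.array([185.0, 192.0, 180.0])
print(np.round(weights, 3), round(float(weights @ preds), 1))
```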

4.
1. Introduction. Let the initial number of defects of a software system under testing be N, let the ith defect be found at time Ti, and let testing stop when the nth defect is found at Tn. The basic function of a software reliability model is to estimate or predict software reliability from the data set of survival times (the times at which defects are found during testing) {Ti} = T1, T2, …, Tn. Up to now, many parametric models have been developed. Generally speaking, these models assume that the failure data come…

5.
A probabilistic maintenance and repair analysis of tanker deck plates subjected to general corrosion is presented. The decisions about when to perform maintenance and repair on the structure are studied, different practical scenarios are analyzed, and optimum repair times are proposed. The optimum repair age and intervals are defined based on a statistical analysis of operational data using the Weibull model, together with some assumptions about inspection and the time needed for repair. The total cost is calculated in normalized form.
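A minimal sketch of how an optimum repair age can be obtained from a Weibull model: the classic age-based replacement formulation, minimising the long-run cost rate C(T) = (cp·R(T) + cf·F(T)) / ∫₀ᵀ R(t) dt. The cost figures and Weibull parameters are placeholders, and the formulation is the textbook one rather than the paper's exact maintenance model.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

beta, eta = 2.5, 12.0      # Weibull shape and scale (years), placeholder values
c_prev, c_fail = 1.0, 8.0  # normalized cost of preventive vs. corrective repair

def reliability(t):
    return np.exp(-(t / eta) ** beta)

def cost_rate(T):
    """Long-run expected cost per unit time for age-based repair at age T."""
    R_T = reliability(T)
    expected_cycle_cost = c_prev * R_T + c_fail * (1.0 - R_T)
    expected_cycle_length, _ = quad(reliability, 0.0, T)
    return expected_cycle_cost / expected_cycle_length

res = minimize_scalar(cost_rate, bounds=(0.5, 30.0), method="bounded")
print(f"optimum repair age ~ {res.x:.2f} years, cost rate ~ {res.fun:.3f}")
```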

6.
Software metrics should be used to improve the productivity and quality of software, because they provide critical information about the reliability and maintainability of a system. In this paper, we propose a cognitive complexity metric for evaluating the design of object-oriented (OO) code. The proposed metric is based on an important feature of OO systems: inheritance. It calculates complexity at the method level, taking the internal structure of methods into account, and uses inheritance to calculate the complexity of class hierarchies. The proposed metric is validated both theoretically and empirically. For theoretical validation, principles of measurement theory are applied, since measurement theory has been proposed and extensively used in the literature as a means of evaluating software engineering metrics. For empirical validation, we applied the metric to a real project and compared it with the Chidamber and Kemerer (CK) metrics suite. The theoretical, practical and empirical validations and the comparative study demonstrate the robustness of the measure.

7.
Degradation data provide a useful source of reliability information for highly reliable products and systems. In addition to product/system degradation measurements, it is now common to dynamically record product/system usage as well as other life-affecting environmental variables, such as load, amount of use, temperature, and humidity. We refer to these variables as dynamic covariate information. In this article, we introduce a class of models for analyzing degradation data with dynamic covariate information. We use a general path model with individual random effects to describe degradation paths and a vector time series model to describe the covariate process. Shape-restricted splines are used to estimate the effects of dynamic covariates on the degradation process. The unknown parameters in the degradation data model and the covariate process model are estimated by maximum likelihood. We also describe algorithms for computing an estimate of the lifetime distribution induced by the proposed degradation path model. The proposed methods are illustrated with an application to predicting the life of an organic coating in a complicated dynamic environment (i.e., changing UV spectrum and intensity, temperature, and humidity). This article has supplementary material online.
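As a hedged sketch of the last step described (computing the lifetime distribution induced by a degradation path model), the code below uses a deliberately simplified path: a linear degradation rate with a unit-to-unit random effect and a temperature covariate effect, with failure defined as the path crossing a threshold. The functional form, parameters, and covariate are illustrative only, not the article's shape-restricted spline model.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_lifetimes(n_units=10_000, threshold=10.0, horizon=200.0, dt=1.0):
    """Monte Carlo lifetimes for a toy degradation path whose increments are
    rate_i * exp(gamma * (temp(t) - temp_ref)) * dt, with a lognormal
    unit-to-unit random effect on the base rate."""
    times = np.arange(dt, horizon + dt, dt)
    temp = 25.0 + 10.0 * np.sin(2 * np.pi * times / 50.0)   # dynamic covariate (deg C)
    gamma, temp_ref = 0.05, 25.0
    base_rate = rng.lognormal(mean=np.log(0.05), sigma=0.3, size=(n_units, 1))
    increments = base_rate * np.exp(gamma * (temp - temp_ref)) * dt
    paths = np.cumsum(increments, axis=1)
    crossed = paths >= threshold
    first = np.argmax(crossed, axis=1)                       # first crossing index
    lifetimes = np.where(crossed.any(axis=1), times[first], np.inf)
    return lifetimes

life = simulate_lifetimes()
for t in (100.0, 150.0, 200.0):
    print(f"F({t:.0f}) ~ {np.mean(life <= t):.3f}")          # estimated lifetime CDF
```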

8.
One general goal of sensitivity or uncertainty analysis of a computer model is determining which inputs most influence the outputs of interest. Simple methodologies based on randomly sampled input values are attractive because they require few assumptions about the nature of the model. However, when the number of inputs is large and the computational effort per model evaluation is significant, methods based on more complex assumptions, analysis techniques, and/or sampling plans may be preferable. This paper reviews approaches that have been proposed for input screening, with an emphasis on the balance between assumptions and the number of model evaluations required.
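As one concrete example of the kind of screening design such reviews typically cover, the sketch below implements a basic Morris elementary-effects computation (one-at-a-time perturbations along random trajectories) for a hypothetical test function, with the usual ranking statistics: the mean of the absolute effects and their standard deviation. This is a generic illustration, not a method attributed to the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Hypothetical computer model with 4 inputs on [0, 1]; x[3] is nearly inert."""
    return 5.0 * x[0] + 2.0 * x[1] ** 2 + x[0] * x[1] + 0.1 * np.sin(x[2]) + 0.01 * x[3]

def morris_screening(f, n_inputs, n_trajectories=30, delta=0.25):
    """Elementary effects: mu_star (mean |EE|) and sigma (std of EE) per input."""
    effects = np.zeros((n_trajectories, n_inputs))
    for r in range(n_trajectories):
        x = rng.uniform(0.0, 1.0 - delta, size=n_inputs)  # base point of the trajectory
        y0 = f(x)
        for i in rng.permutation(n_inputs):               # one-at-a-time steps
            x_new = x.copy()
            x_new[i] += delta
            y1 = f(x_new)
            effects[r, i] = (y1 - y0) / delta
            x, y0 = x_new, y1
    return np.abs(effects).mean(axis=0), effects.std(axis=0)

mu_star, sigma = morris_screening(model, n_inputs=4)
print("mu*  :", np.round(mu_star, 3))   # large => influential input
print("sigma:", np.round(sigma, 3))     # large => nonlinear or interaction effects
```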

9.
It is widely accepted that more widespread use of object-oriented techniques can only come about when there are techniques and tool systems that provide design support beyond visualizing code. Specific software metrics are considered able to support design by indicating components that are critical with respect to quality factors such as maintainability and reliability. Unfortunately, many object-oriented metrics have been defined and applied only to classroom projects, with no evidence that they are useful and applicable, from either an experience or a tools viewpoint, to industrial object-oriented development. We have developed several complexity metrics and integrated them into a Smalltalk development support system called SmallMetric, providing a basis for software analysis (metrics) and development support (critique) of Smalltalk systems. The main concepts of the environment, including the underlying metrics, are explained; its use and operation are discussed; and results of the implementation and its application to several industrial projects are given with examples. Copyright © 1999 John Wiley & Sons, Ltd.

10.
Assessing the quality of scientific conferences is an important and useful service that can be provided by digital libraries and similar systems. This is especially true for fields such as Computer Science and Electrical Engineering, where conference publications are crucial. However, the majority of existing quality metrics, particularly those relying on bibliographic citations, have been proposed for measuring the quality of journals. In this article we study the relative performance of existing journal metrics in assessing the quality of scientific conferences. More importantly, building on an analysis of the deficiencies of these metrics, we propose a new set of quality metrics specifically designed to capture intrinsic and important aspects of conferences, such as longevity, popularity, prestige, and periodicity. To demonstrate the effectiveness of the proposed metrics, we conducted two sets of experiments that contrast their results against a "gold standard" produced by a large group of specialists. Our metrics obtained gains of more than 12% when compared to the most consistent journal quality metric, and up to 58% when compared to standard metrics such as Thomson's Impact Factor.

11.
K. K. Aggarwal, Sadhana, 1987, 11(1-2): 155-165
The complexity of computer communication networks has increased dramatically following significant developments in electronic technology, such as medium- and large-scale integrated circuits and microprocessors. Although the components of a computer communication network are broadly classified into software, hardware and communications, the most important problem is ensuring the reliable flow of information from source to destination. An important parameter in the analysis of these networks is the probability that each node in the network can communicate with all other communication centres (nodes). This probability, termed the overall reliability, can be determined using the concept of spanning trees. As exact reliability evaluation becomes unmanageable even for a reasonably sized system, we present an approximate technique using clustering methods. It is shown that when component reliability is ⩾ 0.9, the suggested technique gives results quite close to those obtained by exact methods, with an enormous saving in computation time and memory usage. For still quicker reliability analysis while designing the topological configuration of real-time computer systems, an empirical reliability index is proposed that serves as a fairly good indicator of overall reliability and can easily be incorporated in a design procedure, such as local search, to design a maximally reliable computer communication network.
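A minimal sketch of the "overall reliability" quantity described (the probability that all nodes can communicate), computed here by brute-force enumeration of edge states for a small example network rather than by the paper's spanning-tree or clustering techniques; the exponential cost of this exact calculation is precisely what motivates the approximation. The example topology and edge reliabilities are arbitrary.

```python
from itertools import product

def all_terminal_reliability(n_nodes, edges):
    """Probability that the random subgraph of working edges connects all nodes.

    edges: list of (u, v, reliability). Exact enumeration is O(2^m), so it is only
    practical for small networks -- which is why approximate methods are needed.
    """
    def connected(up_edges):
        parent = list(range(n_nodes))
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]   # path halving
                a = parent[a]
            return a
        for u, v in up_edges:
            parent[find(u)] = find(v)
        return len({find(i) for i in range(n_nodes)}) == 1

    total = 0.0
    for states in product([0, 1], repeat=len(edges)):
        prob = 1.0
        up = []
        for (u, v, p), s in zip(edges, states):
            prob *= p if s else (1.0 - p)
            if s:
                up.append((u, v))
        if connected(up):
            total += prob
    return total

# 4-node ring with one chord, all edge reliabilities 0.9
edges = [(0, 1, 0.9), (1, 2, 0.9), (2, 3, 0.9), (3, 0, 0.9), (0, 2, 0.9)]
print(round(all_terminal_reliability(4, edges), 5))
```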

12.
For costly and dangerous experiments, growing attention has been paid to the reliability analysis of zero-failure data, with many new findings worldwide, especially in China. The existing reliability theory relies on a known lifetime distribution, such as the Weibull distribution or the gamma distribution, and is therefore ineffective when the lifetime probability distribution is unknown. To this end, this article proposes the grey bootstrap method from information-poor theory for the reliability analysis of zero-failure data under a known or unknown lifetime probability distribution. The grey bootstrap method generates many simulated zero-failure data from a few observed zero-failure data and estimates the lifetime probability distribution by means of an empirical failure probability function defined in this article. The experimental investigation shows that the grey bootstrap method is effective for reliability analysis with only a few zero-failure data and without any prior information on the lifetime probability distribution. Copyright © 2011 John Wiley & Sons, Ltd.

13.
Ran Cao, Wei Hou, Yanying Gao, Engineering Optimization, 2018, 50(9): 1453-1469
This article presents a three-stage approach for solving multi-objective system reliability optimization problems under uncertainty. The reliability of each component enters the formulation as an estimate in the form of an interval value and discrete values, since component reliability may vary with the usage scenario; uncertainty is described by defining a set of usage scenarios. In the first stage, an entropy-based approach to the redundancy allocation problem is proposed to identify the deterministic reliability of each component. In the second stage, a multi-objective evolutionary algorithm (MOEA) is applied to produce a Pareto-optimal solution set. In the third stage, a hybrid algorithm based on k-means and silhouettes is applied to select representative solutions. Finally, a numerical example is presented to illustrate the performance of the proposed approach.
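The third stage described (picking representative solutions from a Pareto front with k-means and silhouettes) can be sketched with scikit-learn as below; the Pareto front here is synthetic, and the selection rule (the solution closest to each cluster centre) is one reasonable reading of the abstract rather than the authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(3)

# Synthetic bi-objective Pareto front: maximise reliability, minimise cost
reliability = np.sort(rng.uniform(0.80, 0.99, size=60))
cost = 1.0 / (1.0 - reliability) + rng.normal(0.0, 0.5, size=60)
front = np.column_stack([reliability, cost])

# Normalise objectives so neither dominates the distance metric
z = (front - front.mean(axis=0)) / front.std(axis=0)

# Choose k by the silhouette score, then keep the point nearest each centroid
best_k, best_score, best_model = None, -1.0, None
for k in range(2, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(z)
    score = silhouette_score(z, km.labels_)
    if score > best_score:
        best_k, best_score, best_model = k, score, km

representatives = []
for c in range(best_k):
    members = np.where(best_model.labels_ == c)[0]
    d = np.linalg.norm(z[members] - best_model.cluster_centers_[c], axis=1)
    representatives.append(members[np.argmin(d)])

print(f"k = {best_k}, silhouette = {best_score:.2f}")
print(np.round(front[representatives], 3))   # representative (reliability, cost) pairs
```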

14.
Stochastic simulation is a tool commonly used by practitioners to evaluate the performance of inventory policies. A typical inventory simulation starts with determining the best-fit input models (e.g. the probability distribution of the demand random variable) and then obtains a performance measure estimate under these input models. However, this sequential approach ignores the uncertainty around the input models, leading to inaccurate performance measures, especially when historical input data are limited. In this paper, we take an alternative approach and propose a simulation replication algorithm that jointly estimates the input models and the performance measure, yielding a credible interval for the performance measure under input-model uncertainty. Our approach builds on a nonparametric Bayesian input model and frees the inventory manager from making restrictive assumptions about the functional form of the input models. Focusing on a single-product inventory simulation, we show that the proposed method improves the estimation of service levels compared with the traditional practice of using the best-fit or the empirical distribution as the unknown demand distribution.
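A minimal sketch of the general idea (propagating input-model uncertainty through an inventory simulation rather than fixing a best-fit demand distribution): here a Bayesian bootstrap over the observed demand history stands in for the article's nonparametric Bayesian model, and a simple base-stock policy is simulated to produce a credible interval on the service level. The data and policy parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

demand_history = rng.poisson(20, size=40)          # limited historical demand data
base_stock = 25                                    # order-up-to level of the policy

def simulate_service_level(demand_sample, n_periods=2000):
    """Fraction of periods in which demand is fully met from stock (periodic review,
    zero lead time) when per-period demand is drawn from the given sample."""
    demands = rng.choice(demand_sample, size=n_periods, replace=True)
    return np.mean(demands <= base_stock)

# Bayesian bootstrap: Dirichlet(1,...,1) weights over the observed demands give one
# resampled "input model" per replication, followed by one simulation run each.
n_reps = 500
service_levels = np.empty(n_reps)
for b in range(n_reps):
    w = rng.dirichlet(np.ones(len(demand_history)))
    resample = rng.choice(demand_history, size=len(demand_history), p=w)
    service_levels[b] = simulate_service_level(resample)

lo, hi = np.percentile(service_levels, [2.5, 97.5])
print(f"service level: mean {service_levels.mean():.3f}, "
      f"95% credible interval [{lo:.3f}, {hi:.3f}]")
```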

15.
Light-emitting diode (LED) lamps have received great attention as a potential replacement for more established lighting technologies, such as incandescent and fluorescent lamps. The LED, which is the main component of an LED lamp, has a very long lifetime, which means that no or very few failures are expected during LED lamp testing; degradation testing and modelling are therefore needed. Because the complexity of modern lighting systems is increasing, more than one degradation failure mode may dominate the system reliability. If the degradation paths of the system's performance characteristics (PCs) tend to be comonotone, the PCs are likely dependent because of the system's common usage history. In this paper, a bivariate constant-stress degradation data model is proposed. The model accommodates dependency between the PCs and allows the use of different marginal degradation distribution functions. Consequently, a better system reliability estimate can be expected from this model than from a model assuming independent PCs. The proposed model is applied to data from an actual LED lamp experiment. Copyright © 2009 John Wiley & Sons, Ltd.
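One standard way to realise the "dependence with different marginals" idea described here is a copula construction; the sketch below joins two performance characteristics with a Gaussian copula, using lognormal and gamma marginals for their degradation levels at a fixed inspection time, and compares the estimated system reliability with and without the independence assumption. The marginals, correlation, and thresholds are illustrative and are not taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

def system_reliability(rho, n=200_000):
    """P(both PCs stay below their failure thresholds at a fixed inspection time),
    with a Gaussian copula of correlation rho joining two different marginals:
    PC1 degradation ~ lognormal, PC2 degradation ~ gamma."""
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u = stats.norm.cdf(z)                            # copula step: uniform marginals
    pc1 = stats.lognorm(s=0.4, scale=5.0).ppf(u[:, 0])
    pc2 = stats.gamma(a=4.0, scale=1.5).ppf(u[:, 1])
    return np.mean((pc1 < 8.0) & (pc2 < 9.0))        # both below their thresholds

print("independent PCs:", round(system_reliability(0.0), 3))
print("dependent PCs  :", round(system_reliability(0.7), 3))
```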

16.
Many manufacturing processes involve various factors that affect product quality, and the analysis and optimisation of these factors are critical activities for engineers. Although much research has been done on statistical methods for investigating the effects of these factors on quality metrics, these methods are not always applied in real-world situations because of problems involving data integrity, lack of control or measurement capability, or technical and administrative constraints. On the other hand, conventional heuristic methods for selecting critical quality factors are mostly devoid of metrics that can be examined objectively. This study therefore applies the analytic hierarchy process (AHP) for the quantitative prioritisation of the control factors involved in a flat end milling process. To validate the metrics synthesised from the experience of skilled workers, the decision making is followed by a multivariate analysis of variance based on the general linear model (GLM). The results show that AHP can provide fairly reliable metrics of the contribution of process parameters, and that the group-wise judgment of qualified experts can improve the consistency of the prioritisation.
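A minimal sketch of the AHP step described: deriving priority weights for control factors from a pairwise comparison matrix via the principal eigenvector and checking Saaty's consistency ratio. The factor names and judgments below are hypothetical, not the study's actual data.

```python
import numpy as np

# Hypothetical pairwise comparisons (Saaty 1-9 scale) for four milling factors
factors = ["spindle speed", "feed rate", "depth of cut", "tool wear"]
A = np.array([
    [1.0, 3.0, 5.0, 7.0],
    [1/3, 1.0, 3.0, 5.0],
    [1/5, 1/3, 1.0, 3.0],
    [1/7, 1/5, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                       # principal-eigenvector priorities

n = A.shape[0]
lam_max = eigvals.real[k]
ci = (lam_max - n) / (n - 1)                   # consistency index
ri = 0.90                                      # Saaty's random index for n = 4
cr = ci / ri                                   # consistency ratio (< 0.10 is acceptable)

for f, w in zip(factors, weights):
    print(f"{f:13s} {w:.3f}")
print(f"CR = {cr:.3f}")
```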

17.
Early prediction of software reliability provides a basis for evaluating potential reliability during the early stages of a project. It also assists in evaluating the feasibility of proposed reliability requirements and provides a rational basis for design and allocation decisions. Many researchers have proposed approaches for predicting software reliability based on a Markov model, in which the transition probabilities between the states are input parameters. In existing approaches, these probabilities are either assumed from prior knowledge or computed analytically, and hence do not give an accurate predicted reliability figure. Some authors compute them using operational profile data, but that is possible only after deployment of the software and is therefore not early prediction. The work in this paper demonstrates the computation of the transition probabilities in the Markov reliability model using a case study. The proposed approach has been validated on 47 sets of real data. Copyright © 2015 John Wiley & Sons, Ltd.
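A hedged sketch of the underlying computation (not the paper's estimation of the transition probabilities themselves): given a module-level Markov model with an absorbing "correct output" state and an absorbing "failure" state, the reliability is the probability of absorption in the correct state, obtained from the fundamental-matrix relation B = (I − Q)⁻¹ R. The transition matrix below is invented for illustration.

```python
import numpy as np

# Transient states: modules M1, M2, M3. Absorbing states: C (correct exit), F (failure).
# Q[i, j] = P(transfer from module i to module j); R[i, :] = P(module i -> C or F).
Q = np.array([
    [0.00, 0.60, 0.30],
    [0.10, 0.00, 0.75],
    [0.05, 0.10, 0.00],
])
R = np.array([
    [0.09, 0.01],   # M1 -> C, M1 -> F
    [0.13, 0.02],
    [0.82, 0.03],
])

# Absorption probabilities: row i of B gives P(C) and P(F) when execution starts in module i.
B = np.linalg.solve(np.eye(3) - Q, R)
reliability = B[0, 0]          # execution is assumed to start in module M1
print(f"P(correct termination) = {reliability:.4f}, P(failure) = {B[0, 1]:.4f}")
```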

18.
This paper discusses the results of a pilot project investigating Russian scholarly publications using the altmetric indicators "Usage Count Last 180 days" (U1) and "Usage Count Since 2013" (U2) introduced by Web of Science. We explored the relationship between citation impact and both types of usage counts. The data set consisted of 37,281 records (publications) indexed in SCI-E in 2015. Seven broad research areas were selected to observe citation patterns and usage counts. A significant difference was found between mean citations and mean usage counts (U2) in a few research areas. We found a significant Kendall rank correlation between the citation metrics and the usage metrics at the article level; this correlation is particularly strong for the longer-period usage metric (U2). We also analyzed the relationship between usage metrics and traditional journal-level citation metrics, and observed only a very weak correlation.
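The article-level analysis described can be illustrated with a Kendall rank correlation between usage counts and citations; the minimal sketch below uses made-up counts, not the study's Web of Science data.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(5)

# Hypothetical per-article counts: citations and the two usage metrics (U1, U2)
citations = rng.negative_binomial(2, 0.15, size=500)
u2 = np.round(citations * 1.8 + rng.negative_binomial(2, 0.3, size=500)).astype(int)
u1 = rng.negative_binomial(1, 0.4, size=500)      # short-window usage, weakly related

for name, usage in [("U1 (180 days)", u1), ("U2 (since 2013)", u2)]:
    tau, p = kendalltau(citations, usage)
    print(f"{name}: tau = {tau:.2f}, p = {p:.1e}")
```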

19.
It is important to monitor manufacturing processes in order to improve product quality and reduce production cost. Statistical Process Control (SPC) is the most commonly used method for process monitoring, in particular for distinguishing variation attributable to normal process variability from variation caused by 'special causes'. Most SPC and multivariate SPC (MSPC) methods are parametric in that they make assumptions about the distributional properties and autocorrelation structure of in-control process parameters and, when these assumptions hold, are effective in managing false positives and false negatives. When processes do not satisfy these assumptions, however, the effectiveness of SPC methods is compromised. Several non-parametric control charts based on sequential ranks of data-depth measures have been proposed in the literature, but their development and implementation in industrial process control have been rather slow. Non-parametric control charts based on machine learning principles have also been proposed to overcome some of these limitations, but unlike conventional SPC methods they require event data from each out-of-control process state for effective model building. This paper presents a new non-parametric multivariate control chart based on kernel distance that overcomes these limitations by employing one-class classification based on support vector principles. The chart is non-parametric in that it makes no assumptions about the data probability density and requires only 'normal' or in-control data to represent an in-control process, while making explicit provision to incorporate any available data from out-of-control process states. Experimental evaluation on a variety of benchmark datasets suggests that the proposed chart is effective for process monitoring.
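A minimal sketch of the one-class-classification idea behind such a chart, using scikit-learn's OneClassSVM: the model is trained on in-control data only, the kernel decision score serves as the monitoring statistic, and the control limit is set from a low quantile of the in-control scores. The data, kernel settings, and limit choice are illustrative, not the paper's exact design.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)

# Phase I: in-control multivariate observations (3 correlated quality variables)
cov = [[1.0, 0.6, 0.3], [0.6, 1.0, 0.4], [0.3, 0.4, 1.0]]
in_control = rng.multivariate_normal([0, 0, 0], cov, size=500)

# Train only on in-control data; no distributional assumptions are made.
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.01).fit(in_control)

# Control limit: a low quantile of the in-control decision scores
scores_ic = model.decision_function(in_control)
limit = np.quantile(scores_ic, 0.01)

# Phase II: new observations, the last 20 with a shifted mean (out of control)
new = np.vstack([
    rng.multivariate_normal([0, 0, 0], cov, size=30),
    rng.multivariate_normal([1.5, 1.5, 0], cov, size=20),
])
alarms = model.decision_function(new) < limit
print(f"alarms raised: {alarms.sum()} of {len(new)} (last 20 points are shifted)")
```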

20.
Research on LCA Data Inventory Analysis   (Total citations: 10; self-citations: 4; citations by others: 6)
Based on an analysis of the technical framework of life cycle assessment (LCA), this paper discusses several issues in data collection and the compilation of the data inventory within LCA.
