Similar Literature
20 similar documents found (search time: 46 ms)
1.
A new unstructured mesh coarsening algorithm has been developed for use in conjunction with multilevel methods. The algorithm preserves geometrical and topological features of the domain, and retains a maximal independent set of interior vertices to produce good coarse mesh quality. In anisotropic meshes, vertex selection is designed to retain the structure of the anisotropic mesh while reducing cell aspect ratio. Vertices are removed incrementally by contracting edges to zero length. Each vertex is removed by contracting the edge that maximizes the minimum sine of the dihedral angles of cells affected by the edge contraction. Rarely, a vertex slated for removal from the mesh cannot be removed; the success rate for vertex removal is typically 99.9% or more. For two-dimensional meshes, both isotropic and anisotropic, the new approach is an unqualified success, removing every vertex slated for removal and producing output meshes of high quality; mesh quality degrades only when most vertices lie on the boundary. Three-dimensional isotropic meshes are also coarsened successfully, provided that there is no difficulty distinguishing corners in the geometry from coarsely-resolved curved surfaces; sophisticated discrete computational geometry techniques appear necessary to make that distinction. Three-dimensional anisotropic cases are still problematic because of tight constraints on legal mesh connectivity. More work is required either to improve edge contraction choices or to develop an alternative mesh coarsening strategy for three-dimensional anisotropic meshes. Copyright © 2003 John Wiley & Sons, Ltd.
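To make the quality criterion concrete, the sketch below scores candidate edge contractions in 2D, where the dihedral angles of a cell reduce to a triangle's interior angles; the function names and sample triangles are illustrative, not from the paper.

```python
import math

def min_sine(tri):
    """Minimum sine over the interior angles of a 2D triangle (a quality measure)."""
    sines = []
    for i in range(3):
        a, b, c = tri[i], tri[(i + 1) % 3], tri[(i + 2) % 3]
        u = (b[0] - a[0], b[1] - a[1])
        v = (c[0] - a[0], c[1] - a[1])
        cross = u[0] * v[1] - u[1] * v[0]
        sines.append(abs(cross) / (math.hypot(*u) * math.hypot(*v)))
    return min(sines)

def contraction_quality(triangles):
    """Quality of a candidate contraction: the worst min_sine over the cells
    that survive it; the algorithm would pick the candidate maximizing this."""
    return min(min_sine(t) for t in triangles)

equilateral = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
sliver = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.05)]
print(contraction_quality([equilateral]))  # ~0.866 (sin 60°): a good cell
print(contraction_quality([sliver]))       # small value: a poor-quality cell
```

Maximizing the minimum sine penalizes both very small and very flat (near-180°) angles, which is why it is a reasonable per-contraction score.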

2.
The theory of network reliability has been applied to many complicated network structures, such as computer and communication networks, piping systems, electricity networks, and traffic networks. The theory is used to evaluate the operational performance of networks that can be modeled by probabilistic graphs. Although evaluating network reliability is an NP-hard (non-deterministic polynomial-time hard) problem, numerous solutions have been proposed. However, most of them are based on sequential computing, which under-utilizes the benefits of multi-core processor architectures. This paper addresses this limitation by proposing an efficient strategy for calculating the two-terminal (terminal-pair) reliability of a binary-state network using parallel computing. Existing methods are analyzed. Then, an efficient method for calculating terminal-pair reliability based on logical-probabilistic calculus is proposed. Finally, a parallel version of the proposed algorithm is developed. This is the first study to implement an algorithm for estimating terminal-pair reliability in parallel on multi-core processor architectures. The experimental results show that the proposed algorithm and its parallel version outperform an existing sequential algorithm in terms of execution time. Copyright © 2015 John Wiley & Sons, Ltd.
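As a point of reference for what is being computed (not the paper's logical-probabilistic method), a brute-force baseline evaluates two-terminal reliability by enumerating all edge states of a small network; all names here are illustrative.

```python
from itertools import product

def two_terminal_reliability(nodes, edges, s, t):
    """edges: list of (u, v, p) with p the probability the edge works.
    Exponential in the number of edges; feasible only for small networks."""
    total = 0.0
    for states in product([True, False], repeat=len(edges)):
        prob = 1.0
        adj = {n: [] for n in nodes}
        for (u, v, p), up in zip(edges, states):
            prob *= p if up else 1.0 - p
            if up:
                adj[u].append(v)
                adj[v].append(u)
        # Search from s; add this state's probability if t is reachable.
        seen, stack = {s}, [s]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        if t in seen:
            total += prob
    return total

# Two parallel links, each working with probability 0.9:
# R = 1 - 0.1 * 0.1 = 0.99
print(two_terminal_reliability([1, 2], [(1, 2, 0.9), (1, 2, 0.9)], 1, 2))
```

The per-state terms are independent, which is also why this computation parallelizes naturally across state subsets.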

3.
A simple method is proposed to search for all minimal cutsets (MCs) of an imperfect network, i.e., a network subject to both arc and node failures, under the condition that all of the MCs of the corresponding network with perfect nodes are given in advance. The proposed method does not require re-enumerating the MCs when node failures are added to the model. All MC candidates found by the proposed algorithm are actual MCs, with no need for further verification; this makes it more effective than the existing algorithm, in which each candidate must still be verified as an MC. The proposed algorithm generates no duplicate MCs and is therefore more efficient than existing methods. It is built from simple concepts only, which makes it easy to understand and implement. By accounting for unreliable nodes, the proposed method is also more realistic and valuable for reliability analysis of real networks. The correctness of the proposed algorithm is analyzed and proven, and one example is used to illustrate how all MCs are generated in a network with arc and node failures.
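For contrast with the no-verification property claimed above, the following generic sketch checks the definition directly: an edge set is a minimal cutset iff its removal disconnects the terminal pair while the removal of any proper subset does not. This is a definition check, not the proposed algorithm.

```python
from itertools import combinations

def disconnects(nodes, edges, cut, s, t):
    """True if removing the edge set `cut` leaves t unreachable from s."""
    adj = {n: set() for n in nodes}
    for e in edges:
        if e not in cut:
            u, v = e
            adj[u].add(v)
            adj[v].add(u)
    seen, stack = {s}, [s]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return t not in seen

def is_minimal_cut(nodes, edges, cut, s, t):
    if not disconnects(nodes, edges, cut, s, t):
        return False
    # Minimality: every proper subset must leave s and t connected.
    return all(not disconnects(nodes, edges, set(sub), s, t)
               for r in range(len(cut))
               for sub in combinations(cut, r))

nodes = [1, 2, 3]
edges = [(1, 2), (2, 3), (1, 3)]
print(is_minimal_cut(nodes, edges, {(1, 2), (1, 3)}, 1, 3))  # True
print(is_minimal_cut(nodes, edges, {(1, 2)}, 1, 3))          # False: not a cut
```

The subset loop is exponential in the cut size, which is exactly the verification cost the abstract's method avoids.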

4.
Communication reliability of wireless sensor networks (WSNs) is essential to ensure the correct and reliable operation of the network. Two distinct communication paradigms exist in WSNs: infrastructure communication and application communication, and a practical communication task typically involves both. To the best of our knowledge, no reliability studies on WSNs have been dedicated to combining the two communication paradigms. In this paper, we advance the state-of-the-art by proposing a phased-mission framework to analyze the communication reliability of WSNs considering both infrastructure communication and application communication, as well as K-coverage requirements. WSNs containing two types of sensor nodes (energy harvesting sensor nodes and battery-powered sensor nodes) are modeled. Corresponding to the two types of sensor nodes, two different link reliability models are first presented. Binary decision diagram (BDD) based algorithms are then developed for the phased-mission communication reliability analysis of WSNs. Case studies are given to illustrate the application of the proposed algorithms. Copyright © 2016 John Wiley & Sons, Ltd.

5.
A minimal path (MP)/minimal cut (MC) is a path/cut set such that if any edge is removed from the set, the remaining set is no longer a path/cut set. An intuitive method is proposed to evaluate reliability in terms of MCs in a stochastic-flow network subject to both edge and node failures, under the condition that all of the MCs are given in advance. This extends the best-known algorithms for solving the d-MC problem (a d-MC is a special MC formatted as a system-state vector, where d is a lower bound on the system capacity level) from stochastic-flow networks with perfect nodes to networks with unreliable nodes by introducing some simple concepts. These concepts, first developed in the literature, are used in the proposed algorithm to reduce the number of d-MC candidates. The method is more efficient than the best-known existing algorithms regardless of whether the network contains unreliable nodes. Two examples illustrate how reliability is determined using the proposed algorithm in networks with and without unreliable nodes. The computational complexity of the proposed algorithm is analyzed and compared with that of existing methods.

6.
Identification of interaction patterns in complex networks via community structures has attracted much attention in recent research. Local community structures provide a better measure for understanding and visualising the nature of interaction when global knowledge of the network is unavailable. Recent research on local community structures, however, lacks the ability to adjust to dynamic networks and depends heavily on the position of the source vertex. In this study the authors propose a novel approach to identify local communities based on iterative agglomeration and local optimisation. The proposed solution offers two significant improvements: (i) in each iteration, agglomeration strengthens the local community measure by selecting the best possible set of vertices, and (ii) the proposed vertex and community rank criteria suit dynamic networks in which the interactions among vertices may change over time. To evaluate the proposed algorithm, extensive experiments and benchmarking have been conducted on computer-generated networks as well as real-world social and biological networks. The results show that the proposed algorithm can identify local communities, irrespective of the source vertex position, with more than 92% accuracy in both synthetic and real-world networks.
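A minimal sketch of the general idea, assuming a simple local-modularity score (internal edges over internal-plus-boundary edges) and plain greedy agglomeration; the authors' vertex and community rank criteria are not reproduced here.

```python
def local_community(adj, source):
    """Greedily grow a community from `source`, adding the neighbor that most
    improves R = internal_edges / (internal_edges + boundary_edges)."""
    community = {source}

    def score(c):
        internal = sum(1 for u in c for v in adj[u] if v in c) // 2
        boundary = sum(1 for u in c for v in adj[u] if v not in c)
        return internal / (internal + boundary) if internal + boundary else 0.0

    while True:
        frontier = {v for u in community for v in adj[u]} - community
        best, best_score = None, score(community)
        for v in frontier:
            s = score(community | {v})
            if s > best_score:
                best, best_score = v, s
        if best is None:  # no neighbor improves the local measure: stop
            return community
        community.add(best)

# Two triangles bridged by a single edge (2-3): the community grown from
# vertex 0 should be its own triangle, without any global knowledge.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(local_community(adj, 0))  # {0, 1, 2}
```

Because only the current community and its frontier are inspected, the method needs no global view of the network, which is the point of local community detection.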

7.
Computer networks and power transmission networks can be treated as capacitated flow networks. A capacitated flow network may partially fail due to maintenance. Therefore, the capacity of each edge should be optimally assigned to face critical situations, i.e., to keep the network functioning normally in the case of failure at one or more edges. The robust design problem (RDP) in a capacitated flow network is to search for the minimum capacity assignment for each edge such that the network still survives edge failures. The RDP is known to be NP-hard. Thus, the capacity assignment problem subject to system reliability and total capacity constraints is studied in this paper. The problem is formulated mathematically, and a genetic algorithm is proposed to determine the optimal solution. The optimal solution found by the proposed algorithm is characterized by maximum reliability and minimum total capacity. Some numerical examples are presented to illustrate the efficiency of the proposed approach.
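On a toy instance the RDP can be solved by exhaustive search rather than a genetic algorithm, which makes the problem statement concrete; the probabilities, demand, and target below are hypothetical.

```python
from itertools import product

# Two parallel s-t links, each operational with probability 0.9. Find the
# cheapest capacity assignment (c1, c2) whose reliability -- the probability
# that surviving capacity meets demand d -- reaches a target.
P_WORK, DEMAND, TARGET = 0.9, 2, 0.95

def reliability(c1, c2):
    r = 0.0
    for up1, up2 in product([True, False], repeat=2):
        p = (P_WORK if up1 else 1 - P_WORK) * (P_WORK if up2 else 1 - P_WORK)
        cap = (c1 if up1 else 0) + (c2 if up2 else 0)
        if cap >= DEMAND:
            r += p
    return r

best = min((c1 + c2, (c1, c2))
           for c1 in range(4) for c2 in range(4)
           if reliability(c1, c2) >= TARGET)
print(best)  # (4, (2, 2)): total capacity 4 with assignment (2, 2)
```

A genetic algorithm replaces the exhaustive loop with a population search over capacity vectors, which is what makes realistic instance sizes tractable.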

8.
Because of the environments in which they will operate, future autonomous systems must be capable of reconfiguring quickly and safely following faults or environmental changes. Past research has shown how, by considering autonomous systems to perform phased missions, reliability analysis can support decision making by allowing comparison of the probability of success of different missions following reconfiguration. Binary decision diagrams (BDDs) offer fast, accurate reliability analysis that could contribute to real-time decision making. However, phased mission analysis using existing BDD models is too slow to contribute to the instant decisions needed in time-critical situations. This paper investigates two aspects of BDD models that affect analysis speed: variable ordering and quantification efficiency. Variable ordering affects BDD size, which directly affects analysis speed. Here, a new ordering scheme is proposed for use in the context of a decision-making process: variables are ordered before a mission, and reordering is unnecessary no matter how the mission configuration changes. Three BDD models are proposed to address the efficiency and accuracy of existing models. The advantages of the developed ordering scheme and BDD models are demonstrated through their application within a reliability analysis methodology used to support decision making in an unmanned aerial vehicle.

9.
The ordering of basic events is critical to fault tree analysis based on binary decision diagrams (BDDs). Many attempts have been made to find an efficient ordering with the aim of reducing the complexity of the BDD. In this article, a new ordering method, the priority ordering method, is proposed. The new method takes into account not only the layers of the fault tree but also repeated events, neighboring events, and the number of events under the same gate. According to these four effects, priorities that sort the basic events of the fault tree are defined. The new method inherits the merits of both structure-based and weight-based methods: it evaluates the basic events as a structure-based method does, and the size of the subtree as a weight-based method does. As demonstrated by the examples, the proposed priority ordering method is superior to existing ordering methods in reducing the number of nodes in the BDD and improving the efficiency of transforming a fault tree into a BDD. Copyright © 2011 John Wiley & Sons, Ltd.

10.
The $h$-restricted edge connectivity is an important parameter for measuring the reliability and fault tolerance of large interconnection networks. Let $G$ be a connected graph and $h$ a non-negative integer. If there exist edge subsets of $G$ whose deletion disconnects $G$ while leaving every vertex in each component with degree at least $h$, then the minimum cardinality among all such edge subsets is called the $h$-restricted edge connectivity of $G$. The $n$-dimensional folded crossed cube is obtained from the $n$-dimensional crossed cube by adding complementary edges. This paper first analyzes the reliability of the folded crossed cube network using the $2$-restricted edge connectivity as the reliability measure, then derives the $2$-restricted edge connectivity of the folded crossed cube, and finally proves that the $2$-restricted edge connectivity of the $n$-dimensional folded crossed cube equals $4n-4$ for $n\geq 4$. This result means that, in order to disconnect the $n$-dimensional folded crossed cube while leaving no vertex of degree less than 2 in any component, at least $4n-4$ edges must fail simultaneously.
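The folded crossed cube requires its specific edge construction, but the definition of restricted edge connectivity can be illustrated by brute force on the ordinary 3-dimensional hypercube Q3, whose 2-restricted edge connectivity is 4 (cutting the four edges between two opposite faces leaves two 4-cycles).

```python
from itertools import combinations

# Vertices of Q3 are 3-bit labels; edges join labels differing in one bit.
nodes = list(range(8))
edges = [(u, u ^ (1 << b)) for u in nodes for b in range(3) if u < u ^ (1 << b)]

def components(removed):
    adj = {n: [] for n in nodes}
    for u, v in edges:
        if (u, v) not in removed:
            adj[u].append(v)
            adj[v].append(u)
    seen, comps = set(), []
    for n in nodes:
        if n not in seen:
            comp, stack = {n}, [n]
            seen.add(n)
            while stack:
                for w in adj[stack.pop()]:
                    if w not in seen:
                        seen.add(w)
                        comp.add(w)
                        stack.append(w)
            comps.append(comp)
    return adj, comps

def restricted_edge_connectivity(h):
    """Smallest edge cut leaving >= 2 components, all with min degree >= h."""
    for k in range(1, len(edges) + 1):
        for cut in combinations(edges, k):
            adj, comps = components(set(cut))
            if len(comps) > 1 and all(
                    min(len(adj[v]) for v in c) >= h for c in comps):
                return k

print(restricted_edge_connectivity(2))  # 4 for Q3
```

This exhaustive check is exponential in the number of edges; results such as $4n-4$ for the folded crossed cube are exactly what replaces such brute force for large $n$.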

11.
We propose an algorithm for optimization under uncertainty with joint reliability constraints. Most existing research formulates constraints on random variables/parameters in probabilistic form, such that the probability of satisfying each individual constraint must exceed a reliability target. However, engineering problems generally define reliability as the probability of satisfying the constraints 'jointly' rather than individually. Calculating a joint reliability value requires a multiple integration over the feasible domain; in most realistic cases this calculation is impractical with exact methods and has become the major challenge in optimization. In this research we propose a filter-based sequential quadratic programming algorithm in which joint reliability constraints are approximated by their upper bounds. This upper bound can be obtained analytically by considering the current design location, the reliability of satisfying each constraint, and the angles between every constraint pair. To further improve the efficiency of the algorithm, active-set strategies are implemented so that intensive reliability calculations are only required for constraints that are likely to be active. The remaining constraints are approximated only to the level needed to determine whether they might become active in the next iteration. The efficiency of the proposed method enables its application to general complex engineering problems. Copyright © 2010 John Wiley & Sons, Ltd.
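The gap between individual and joint constraint satisfaction, and the idea of bounding the joint failure probability from above, can be seen in a small Monte Carlo sketch using the elementary union bound (the paper's analytical bound, which uses constraint-pair angles, is tighter); the performance functions here are hypothetical.

```python
import random

random.seed(0)

def sample():
    # Two made-up performance functions of independent standard normals x, y;
    # constraint i is satisfied when g_i >= 0.
    x, y = random.gauss(0, 1), random.gauss(0, 1)
    return (2.0 - x >= 0, 2.5 - 0.5 * x - y >= 0)

N = 100_000
fails1 = fails2 = joint_fails = 0
for _ in range(N):
    ok1, ok2 = sample()
    fails1 += not ok1
    fails2 += not ok2
    joint_fails += (not ok1) or (not ok2)

p1, p2, p_joint = fails1 / N, fails2 / N, joint_fails / N
print(p1, p2, p_joint)
# Union bound, which holds sample-by-sample: P(any constraint violated)
# <= p1 + p2, so joint reliability is bounded below by 1 - (p1 + p2).
```

Requiring each constraint individually to meet a target therefore over-estimates joint reliability; a usable upper bound on joint failure restores a conservative (safe) approximation.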

12.
The objective of this paper is to present an efficient computational methodology for the reliability optimization of electronic devices under cost constraints. The system modeling for calculating the reliability indices of the electronic devices is based on Bayesian networks using the fault tree approach, in order to overcome the limitations of the series-parallel topology of reliability block diagrams. Furthermore, Bayesian network modeling for reliability analysis provides greater flexibility for representing multiple failure modes and dependent failure events, and simplifies fault diagnosis and reliability allocation. The optimal selection of components is obtained using the simulated annealing algorithm, which has proved to be highly efficient in complex optimization problems where gradient-based methods cannot be applied. The reliability modeling and optimization methodology was implemented in a computer program in Matlab using a Bayesian network toolbox, and was applied to the optimal selection of components for an electrical switch of power installations under reliability and cost constraints. The full enumeration of the solution space was calculated in order to demonstrate the efficiency of the proposed optimization algorithm. The results are excellent: a near-optimum solution was found in a small fraction (3%) of the time needed for the complete enumeration, and all the optimum solutions found during consecutive runs of the optimization algorithm lay in the top 0.3% of the solutions that satisfy the reliability and cost constraints. Copyright © 2007 John Wiley & Sons, Ltd.
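A minimal sketch of simulated annealing for component selection in a series system under a reliability target, assuming toy per-component (reliability, cost) options rather than the paper's Bayesian-network model; all numbers are hypothetical.

```python
import math
import random

random.seed(1)

# Per-subsystem options: (reliability, cost); system reliability is the product.
OPTIONS = [[(0.90, 1), (0.95, 2), (0.99, 4)]] * 3
TARGET = 0.85

def cost(x):
    c = sum(OPTIONS[i][g][1] for i, g in enumerate(x))
    r = math.prod(OPTIONS[i][g][0] for i, g in enumerate(x))
    return c + (1000 if r < TARGET else 0)  # heavy penalty for infeasibility

def anneal(steps=2000, t0=5.0):
    x = [len(o) - 1 for o in OPTIONS]  # start from the most reliable design
    best = x[:]
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9  # linear cooling schedule
        y = x[:]
        i = random.randrange(len(y))
        y[i] = random.randrange(len(OPTIONS[i]))  # perturb one subsystem
        # Metropolis acceptance: always take improvements, sometimes worse moves.
        if cost(y) <= cost(x) or random.random() < math.exp((cost(x) - cost(y)) / t):
            x = y
        if cost(x) < cost(best):
            best = x[:]
    return best

best = anneal()
r = math.prod(OPTIONS[i][g][0] for i, g in enumerate(best))
print(best, r)  # a feasible design; the true optimum here costs 6 (all mid-grade)
```

In the paper the `cost` evaluation would instead query the Bayesian-network reliability model; the annealing loop itself is unchanged.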

13.
We propose a general method to predict functions of vertices where (i) the wiring of the network is somehow related to the vertex functionality and (ii) a fraction of the vertices are functionally classified. The method is influenced by role-similarity measures from social network analysis. The two versions of our prediction scheme are tested on model networks where the functions of the vertices are designed to match their network surroundings. We also apply these methods to the proteome of the yeast Saccharomyces cerevisiae and find the results compatible with those of more specialized methods.

14.
This paper studies the isomorphism identification of kinematic chains in the innovative design of mechanisms. Based on graph theory and mechanism topology, the concepts of sub-block, sum-of-squares degree, and sub-block incidence degree are introduced. Using the adjacency relations among the vertices of the topological graph, vertex classification sets are constructed, and an algorithm that partitions all vertices one by one and maps them pairwise is proposed to identify isomorphism. Experimental results show that this algorithm is more efficient than existing algorithms. Applying this foundational theory of algorithm design to mechanism topology provides a new approach for research in this field.
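The vertex-classification idea can be illustrated with generic color refinement, which partitions vertices by iteratively combining degree information with neighbors' classes; this gives a necessary condition for isomorphism and is not the authors' sub-block algorithm.

```python
def refine_colors(adj, rounds=None):
    """Iteratively refine vertex colors: a vertex's new color combines its old
    color with the multiset of its neighbors' colors (Weisfeiler-Leman style).
    Returns the sorted color histogram, an isomorphism invariant."""
    colors = {v: len(nbrs) for v, nbrs in adj.items()}  # start from degrees
    for _ in range(rounds or len(adj)):
        signatures = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                      for v in adj}
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adj}
    return sorted(colors.values())

# Isomorphic graphs must produce identical refined color histograms.
c5a = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}   # cycle C5
c5b = {i: [(i - 2) % 5, (i + 2) % 5] for i in range(5)}   # C5, relabeled
p5 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}    # path P5
print(refine_colors(c5a) == refine_colors(c5b))  # True: consistent invariant
print(refine_colors(c5a) == refine_colors(p5))   # False: distinguished
```

Matching histograms do not prove isomorphism; a full test, like the abstract's pairwise vertex mapping, must still construct an explicit bijection within each vertex class.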

15.
For a given vertex sequence of a simple planar polygon, the traditional method for determining the polygon's orientation and the convexity/concavity of its vertices is to first compute the cross product of adjacent edge vectors, or the signed area of the triangle determined by three consecutive vertices, and then decide each vertex's convexity from the sign of the cross product or signed area; processing one vertex therefore requires at least two multiplications. By computing and comparing the slopes of the edge vectors, this paper relates vertex convexity to edge-vector slopes and, using a 'hypothesize-and-test' approach, proposes a fast new algorithm for determining the orientation of a simple polygon and the convexity of its vertices. Its time complexity is O(n), and determining the convexity of any vertex requires on average no more than one multiplication. The algorithm is intuitive, simple, and easy to implement; practical runs show that it is fast and stable.
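For reference, the traditional cross-product baseline described above can be sketched as follows (orientation from the signed shoelace area, convexity from the sign of the cross product at each vertex); the polygon is a made-up example, and the paper's slope-comparison optimization is not reproduced.

```python
def orientation_and_convexity(poly):
    """poly: list of (x, y) vertices of a simple polygon, in order.
    Returns (orientation, convex_flags): orientation is +1 for counter-clockwise,
    -1 for clockwise (sign of the shoelace area); vertex i is convex when the
    cross product of its adjacent edges agrees in sign with the orientation."""
    n = len(poly)
    area2 = sum(poly[i][0] * poly[(i + 1) % n][1] -
                poly[(i + 1) % n][0] * poly[i][1] for i in range(n))
    orient = 1 if area2 > 0 else -1
    convex = []
    for i in range(n):
        (ax, ay), (bx, by), (cx, cy) = poly[i - 1], poly[i], poly[(i + 1) % n]
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)  # 2 multiplications
        convex.append(cross * orient > 0)
    return orient, convex

# A counter-clockwise "arrowhead": vertex (0.4, 0.4) is reflex (concave).
poly = [(0.0, 0.0), (1.0, 0.0), (0.4, 0.4), (0.0, 1.0)]
print(orientation_and_convexity(poly))  # (1, [True, True, False, True])
```

Each convexity test above costs two multiplications, which is exactly the per-vertex cost the slope-based algorithm halves on average.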

16.
A simple algorithm for evaluating the k-out-of-n network reliability
Evaluating network reliability is an important topic in the planning, designing, and control of systems. The minimal cut set (MC, an edge set) is one of the major and fundamental tools for evaluating network reliability. A k-out-of-n MC is a special MC in a k-out-of-n network, in which some nodes must receive at least k flows from their n input edges, where k is an integer between 1 and n. In this study, an alternative method is first given to define an MC using a node set (called an MCN) in k-out-of-n networks. A very simple algorithm, based on intuitive theorems that characterize the structure of the MCN and the relationship between MC and MCN, is developed to solve the k-out-of-n network reliability problem by finding the k-out-of-n MCs between two special nodes. The proposed algorithm is not only easier to understand and implement but also outperforms the existing algorithm. The correctness of the proposed algorithm is analyzed and proven. Two examples illustrate how all k-out-of-n MCs are generated and verified in a k-out-of-n network using the proposed algorithm; the reliability is then computed for one of the examples.

17.
We describe an algorithm which generates a tetrahedral decomposition of a general solid body whose surface is given as a collection of triangular facets. The principal idea is to modify the constraints in such a way as to make them appear in an unconstrained triangulation of the vertex set a priori. The vertex positions are randomized to guarantee the existence of a unique triangulation which satisfies the Delaunay empty-sphere property. (Algorithms for robust, parallelized construction of such triangulations are available.) In order to make the boundary of the solid appear as a collection of tetrahedral faces, we iterate two operations, edge flip and edge split with the insertion of an additional vertex, until all of the boundary facets are present in the tetrahedral mesh. The outcome of the vertex insertion is another triangulation of the input surfaces, but one which is represented as a subset of the tetrahedral faces. To determine whether a constraining facet is present in the unconstrained Delaunay triangulation of the current vertex set, we use the results of Rajan, which re-formulate Delaunay triangulation as a linear programming problem. Copyright © 2001 John Wiley & Sons, Ltd.
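The empty-sphere property rests on an incircle/insphere predicate; the 2D version below (one dimension lower than the tetrahedral case) tests whether a point lies inside a triangle's circumcircle via the standard 3×3 determinant.

```python
def in_circumcircle(a, b, c, d):
    """For triangle (a, b, c) listed counter-clockwise, returns a positive value
    when d lies strictly inside the circumcircle, negative outside, ~0 on it."""
    # Translate so d is the origin and lift onto the paraboloid z = x^2 + y^2;
    # the sign of the 3x3 determinant gives the incircle result.
    rows = [(p[0] - d[0], p[1] - d[1],
             (p[0] - d[0]) ** 2 + (p[1] - d[1]) ** 2) for p in (a, b, c)]
    (ax, ay, aw), (bx, by, bw), (cx, cy, cw) = rows
    return (ax * (by * cw - bw * cy)
            - ay * (bx * cw - bw * cx)
            + aw * (bx * cy - by * cx))

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))  # counter-clockwise
print(in_circumcircle(*tri, (0.5, 0.5)) > 0)  # True: the circumcenter is inside
print(in_circumcircle(*tri, (2.0, 2.0)) < 0)  # True: well outside
```

With floating-point inputs the sign can be wrong near degeneracy, which is why the randomized ("symbolically perturbed") vertex positions mentioned in the abstract matter: they keep the predicate away from the zero set.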

18.
A multistate-node acyclic network (MNAN) is a generalization of the tree-structured multistate-node system that does not satisfy the flow conservation law. The known existing methods for evaluating MNAN reliability are based on the minimal tree (MT) set. Instead of using MTs, an intuitive algorithm was developed in this study to find the minimal cut (MC) set; the MNAN reliability can then be computed in terms of MCs. The proposed algorithm is simpler and more efficient than the best-known existing methods. Its computational complexity is analyzed and compared with the best-known existing methods, and one example is used to show how all MCs are generated using the proposed algorithm and how the corresponding reliabilities are computed.

19.
A fast BDD algorithm for large coherent fault tree analysis
Although binary decision diagram (BDD) algorithms have been applied to large fault trees until quite recently, such trees are not solved efficiently, since the size of a BDD structure grows exponentially with the number of variables. Furthermore, the truncation of If–Then–Else (ITE) connectives by a probability or size limit, and the subsuming used to delete subsets, could not be applied directly to the intermediate BDD structure under construction. This is the motivation for this work. This paper presents an efficient BDD algorithm for large coherent systems (the coherent BDD algorithm), in which truncation and subsuming are performed during the construction of the BDD structure. A set of new formulae developed in this study for the AND and OR operations between two ITE connectives of a coherent system makes it possible to delete subsets and truncate ITE connectives with a probability or size limit in the intermediate BDD structure under construction. By means of truncation and subsuming at every step of the calculation, large coherent fault trees are solved efficiently in a short time using less memory. Furthermore, with respect to the size of the BDD structure, the coherent BDD algorithm is much less sensitive to variable ordering than the conventional BDD algorithm.
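The core of any BDD-based evaluation is Shannon decomposition on one variable at a time, which handles repeated events exactly; the minimal sketch below omits the paper's ITE truncation, subsuming, and node sharing, and its names are illustrative.

```python
def substitute(tree, var, val):
    """Replace basic event `var` by the boolean `val` throughout the tree."""
    if isinstance(tree, str):
        return val if tree == var else tree
    if isinstance(tree, bool):
        return tree
    op, *args = tree
    return (op, *(substitute(a, var, val) for a in args))

def first_var(tree):
    if isinstance(tree, str):
        return tree
    if isinstance(tree, bool):
        return None
    for a in tree[1:]:
        v = first_var(a)
        if v is not None:
            return v
    return None

def evaluate(tree):
    if isinstance(tree, bool):
        return tree
    op, *args = tree
    vals = [evaluate(a) for a in args]
    return all(vals) if op == 'and' else any(vals)

def top_event_probability(tree, p):
    """Exact probability via Shannon decomposition:
    P(f) = p_v * P(f | v=1) + (1 - p_v) * P(f | v=0).
    (A real BDD engine shares and memoizes subtrees; omitted here.)"""
    v = first_var(tree)
    if v is None:
        return 1.0 if evaluate(tree) else 0.0
    return (p[v] * top_event_probability(substitute(tree, v, True), p)
            + (1 - p[v]) * top_event_probability(substitute(tree, v, False), p))

# Top = (A AND B) OR (A AND C) with repeated event A; equals A AND (B OR C),
# so P = 0.5 * (1 - 0.5 * 0.5) = 0.375 -- naive per-gate arithmetic gets this wrong.
fault_tree = ('or', ('and', 'A', 'B'), ('and', 'A', 'C'))
print(top_event_probability(fault_tree, {'A': 0.5, 'B': 0.5, 'C': 0.5}))
```

Truncating low-probability ITE branches during this recursion, as the paper proposes, trades a small accuracy loss for a large reduction in the structure that must be kept.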

20.
Equality constraints have been well studied and widely used in deterministic optimization, but they have rarely been addressed in reliability-based design optimization (RBDO). The inclusion of an equality constraint in RBDO results in dependency among random variables. Theoretically, given an equality constraint, one random variable can be expressed in terms of the remaining random variables, and the equality constraint can then be eliminated. In practice, however, eliminating an equality constraint may be difficult or impossible because of complexities such as coupling, recursion, high dimensionality, non-linearity, implicit formats, and high computational costs. The objective of this work is to develop a methodology to model equality constraints and a numerical procedure to solve an RBDO problem with equality constraints. Equality constraints are classified into demand-based and physics-based types. A sequential optimization and reliability analysis strategy is used to solve RBDO with physics-based equality constraints, and the first-order reliability method is employed for reliability analysis. The proposed method is illustrated by a mathematical example and a two-member frame design problem. Copyright © 2007 John Wiley & Sons, Ltd.
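For a linear limit state with independent normal variables, the first-order reliability method mentioned above reduces to a closed form; the capacity/demand numbers below are hypothetical.

```python
import math

def form_linear(mu_r, sigma_r, mu_s, sigma_s):
    """FORM for g = R - S with independent normal R (capacity) and S (demand):
    g is itself normal, so the reliability index is beta = mu_g / sigma_g and
    the failure probability is Phi(-beta)."""
    beta = (mu_r - mu_s) / math.hypot(sigma_r, sigma_s)
    pf = 0.5 * math.erfc(beta / math.sqrt(2))  # standard normal Phi(-beta)
    return beta, pf

beta, pf = form_linear(mu_r=200.0, sigma_r=20.0, mu_s=150.0, sigma_s=15.0)
print(beta, pf)  # beta = 2.0, pf = Phi(-2) ≈ 0.0228
```

For nonlinear limit states FORM instead searches for the most probable failure point in standard normal space; the linear case above is the step it linearizes around.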
