Similar Documents
20 similar documents found (search time: 31 ms)
1.
The weighted maximum cut problem is studied using parameterized computational complexity theory and techniques. The problem and its related concepts are first given parameterized definitions, and a randomized algorithm based on a random partition technique is then proposed for the parameterized weighted max-cut problem. The randomized algorithm performs ⌈ln(1/ε)⌉ × 2^k (0 < ε < 1) random partitions of the vertices of the instance graph and selects the maximum-weight partition among them as the output solution, and it can therefore find a target solution with probability at least 1 − ε in time O*(ln(1/ε)·2^k). On this basis, using the recently improved (n, k)-universal set partition technique, a deterministic algorithm with time complexity O*(2^(2k+12log²(2k))) is proposed for the parameterized weighted max-cut problem, showing that the weighted maximum cut problem is fixed-parameter tractable.
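For illustration only (this is not the paper's parameterized algorithm), the following Python sketch shows the basic random-partition idea: draw uniform random bipartitions of a weighted graph roughly ⌈ln(1/ε)⌉ × 2^k times and keep the heaviest cut. The graph, ε and k values are hypothetical.

```python
import math
import random

def random_partition_maxcut(edges, n_vertices, eps=0.1, k=8):
    """Repeat uniform random bipartitions and keep the heaviest cut.

    edges: list of (u, v, weight) tuples; vertices are 0..n_vertices-1.
    The repetition count ceil(ln(1/eps)) * 2**k mirrors the bound quoted
    in the abstract; eps and k here are illustrative values only.
    """
    trials = math.ceil(math.log(1.0 / eps)) * (2 ** k)
    best_cut, best_weight = None, -1.0
    for _ in range(trials):
        side = [random.randint(0, 1) for _ in range(n_vertices)]
        weight = sum(w for u, v, w in edges if side[u] != side[v])
        if weight > best_weight:
            best_weight, best_cut = weight, side
    return best_cut, best_weight

# Toy weighted graph (hypothetical data).
edges = [(0, 1, 3.0), (1, 2, 1.0), (2, 3, 2.5), (3, 0, 4.0), (0, 2, 1.5)]
part, w = random_partition_maxcut(edges, n_vertices=4, eps=0.1, k=8)
print("cut weight:", w, "partition:", part)
```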

2.
The multi-mode resource-constrained project scheduling problem with minimum and maximum time lags (MRCPSP/max) is a very general project scheduling problem with multiple execution modes per activity, renewable and non-renewable resources, and minimum and maximum time lags between activities. In this paper, we describe SA-EVA, an algorithm for the problem. SA-EVA first searches for the best mode for each activity, without considering renewable resources. In this phase a simulated annealing is applied. Once a mode vector has been chosen, the problem reduces to the RCPSP/max, which SA-EVA solves with EVA, an algorithm designed in Ballestín et al. [2009. An evolutionary algorithm for the resource-constrained project scheduling problem with minimum and maximum time-lags. Journal of Scheduling, 14 (4), online]. Computational results show that SA-EVA outperforms the state-of-the-art algorithms on medium and large instances.

3.
This paper describes a Monte-Carlo (MC) simulation methodology for estimating the reliability of a multi-state network. The problem under consideration involves multi-state two-terminal reliability (M2TR) computation. Previous approaches have relied on enumeration or on the computation of multi-state minimal cut vectors (MMCV) and the application of inclusion/exclusion formulae. This paper discusses issues related to the reliability calculation process based on MMCV. For large systems with even a relatively small number of component states, reliability computation can become prohibitive or inaccurate using current methods. The major focus of this paper is to present and compare a new MC simulation approach that obtains accurate approximations to the actual M2TR. The methodology uses MC to generate system state vectors. Once a vector is obtained, it is compared to the set of MMCV to determine whether the capacity of the vector satisfies the required demand. Examples are used to illustrate and validate the methodology. The estimates of the simulation approach are compared to exact and approximation procedures from solution quality and computational effort perspectives. Results obtained from the simulation approach show that for relatively large networks, the maximum absolute relative error between the simulation and the actual M2TR is less than 0.9%, whereas with approximation formulae this error can be as large as 18.97%. Finally, the paper shows that the MC approach consistently yields accurate results, while the accuracy of the bounding methodologies can be dependent on components that have considerable impact on the system design.
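As a rough illustration of Monte-Carlo estimation of multi-state two-terminal reliability (M2TR), the sketch below samples a system state vector from per-edge capacity distributions and then checks demand satisfaction directly with a max-flow computation (using networkx) rather than by comparison against MMCV as in the paper; the network and probabilities are hypothetical.

```python
import random
import networkx as nx

# Per-edge discrete capacity distributions: state -> probability (hypothetical).
edge_states = {
    ("s", "a"): {0: 0.05, 2: 0.25, 4: 0.70},
    ("s", "b"): {0: 0.10, 1: 0.30, 3: 0.60},
    ("a", "t"): {0: 0.05, 2: 0.35, 4: 0.60},
    ("b", "t"): {0: 0.10, 2: 0.40, 3: 0.50},
    ("a", "b"): {0: 0.10, 1: 0.90},
}

def sample_state(dist):
    r, acc = random.random(), 0.0
    for state, p in dist.items():
        acc += p
        if r <= acc:
            return state
    return state  # numerical fallback

def estimate_m2tr(demand, n_samples=10000):
    """Monte-Carlo estimate of P(max s-t flow >= demand)."""
    successes = 0
    for _ in range(n_samples):
        g = nx.DiGraph()
        for (u, v), dist in edge_states.items():
            g.add_edge(u, v, capacity=sample_state(dist))
        flow_value, _ = nx.maximum_flow(g, "s", "t")
        if flow_value >= demand:
            successes += 1
    return successes / n_samples

print("estimated reliability for demand 4:", estimate_m2tr(demand=4))
```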

4.
Computer networks and power transmission networks are treated as capacitated flow networks. A capacitated flow network may partially fail due to maintenance. Therefore, the capacity of each edge should be optimally assigned to face critical situations, i.e., to keep the network functioning normally when one or more edges fail. The robust design problem (RDP) in a capacitated flow network is to search for the minimum capacity assignment of each edge such that the network still survives even when edges fail. The RDP is known to be NP-hard. Thus, the capacity assignment problem subject to system reliability and total capacity constraints is studied in this paper. The problem is formulated mathematically, and a genetic algorithm is proposed to determine the optimal solution. The optimal solution found by the proposed algorithm is characterized by maximum reliability and minimum total capacity. Some numerical examples are presented to illustrate the efficiency of the proposed approach.

5.
A minimal path (MP)/minimal cutset (MC) is a path/cut set such that if any edge is removed from it, the remaining set is no longer a path/cut set. An intuitive method is proposed to evaluate the reliability in terms of MCs in a stochastic-flow network subject to both edge and node failures, under the condition that all of the MCs are given in advance. This is an extension of the best known algorithms for solving the d-MC problem (a special MC formatted as a system-state vector, where d is a lower bound on the system capacity level) from stochastic-flow networks without unreliable nodes to networks with unreliable nodes, achieved by introducing some simple concepts. These concepts, first developed in the literature, are used in the proposed algorithm to reduce the number of d-MC candidates. The method is more efficient than the best known existing algorithms regardless of whether the network has unreliable nodes. Two examples illustrate how the reliability is determined using the proposed algorithm in networks with and without unreliable nodes. The computational complexity of the proposed algorithm is analyzed and compared with existing methods.

6.
The main focus of this study was to develop an inverse model that can determine the changes in sectional area-moment-of-inertia of a helicopter rotor blade from knowledge of a shifting point-load and the end-slope data. In this investigation, the cross-sectional area-moment-of-inertias of a rotor blade model with n segments are reconstructed using a shifting point-load and the corresponding end-slope data. The end-slope data used in this inverse problem were numerically generated using the finite element method. The end-slope data and the loading were then used in the inverse problem to reconstruct the cross-sectional area-moment-of-inertias for the model. To solve the inverse problem, the solution domain was discretized into a finite number of segments, and a shifting point-load was applied to the mid-point of each segment. The beam equation was then integrated to create a set of linear equations in terms of loading and end-slopes. Next, the resulting set of equations was solved simultaneously to recover the area-moment-of-inertia for each segment. A series of numerical experiments were performed to check the validity and sensitivity of the inverse model. Comparison of the inverse solutions with the direct solutions confirms that the variations in the area-moment-of-inertia for a helicopter rotor blade can be reconstructed with good accuracy from knowledge of the shifting point-load and end-slopes.
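A minimal sketch of the "shifting point-load, end-slope" reconstruction idea is given below. It assumes a cantilever Euler-Bernoulli beam with piecewise-constant inertia and generates the end-slope data analytically instead of by the finite element method; all dimensions and material values are hypothetical.

```python
import numpy as np

E = 70e9          # Young's modulus [Pa] (hypothetical)
L = 2.0           # beam length [m]
P = 1000.0        # shifting point load [N]
n = 8             # number of segments

x = np.linspace(0.0, L, n + 1)          # segment boundaries
mid = 0.5 * (x[:-1] + x[1:])            # load positions (segment midpoints)
I_true = 4e-6 * (1.0 - 0.4 * mid / L)   # "unknown" tapering inertia profile

def influence(a, s1, s2):
    """P * integral of (a - s) ds over [s1, min(s2, a)]; zero if s1 >= a."""
    s2 = min(s2, a)
    if s2 <= s1:
        return 0.0
    return P * (a * (s2 - s1) - 0.5 * (s2**2 - s1**2))

# Forward problem: end-slope of a cantilever for a load at each midpoint,
# theta_i = sum_j A[i, j] * (1 / (E * I_j)).
A = np.array([[influence(a, x[j], x[j + 1]) for j in range(n)] for a in mid])
theta = A @ (1.0 / (E * I_true))        # synthetic "measured" end-slopes

# Inverse problem: recover the per-segment area-moment-of-inertia.
u = np.linalg.solve(A, theta)           # u_j = 1 / (E * I_j)
I_recovered = 1.0 / (E * u)

print(np.allclose(I_recovered, I_true))  # True for noise-free data
```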

7.
An inverse problem in practical scientific investigations is the process of computing unknown parameters from a set of observations that are recorded only indirectly, as in monitoring and controlling quality in industrial process control. Linear regression can be thought of as a linear inverse problem; in other words, the procedure of estimating unknown parameters can be expressed as an inverse problem. However, maximum likelihood provides an unstable solution, and the problem becomes more complicated if unknown parameters are estimated from different samples. Hence, researchers search for better estimates. We study two joint censoring schemes for lifetime products in industrial process monitoring. In practice, this type of data can be collected in fields such as the medical industry and industrial engineering. In this study, statistical inference for Chen lifetime products is considered and analyzed to estimate the underlying parameters. Maximum likelihood and Bayes' rule are both studied for the model parameters. The asymptotic distribution of the maximum likelihood estimators and the empirical distributions obtained with Markov chain Monte Carlo algorithms are utilized to build interval estimators. Theoretical results are examined through simulation studies, presented in tables and figures, and verified in an analysis of lifetime data. We briefly describe the performance of the developed methods.
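As a simplified illustration (complete samples rather than the joint censoring schemes studied in the paper), the sketch below fits the two-parameter Chen lifetime distribution, assumed here to have survival function S(t) = exp{λ(1 − e^{t^β})}, by maximizing the log-likelihood with scipy; the true parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def chen_sample(beta, lam, size):
    """Inverse-CDF sampling, assuming F(t) = 1 - exp{lam * (1 - e^{t^beta})}."""
    u = rng.uniform(size=size)
    return (np.log(1.0 - np.log1p(-u) / lam)) ** (1.0 / beta)

def neg_log_lik(params, t):
    beta, lam = params
    if beta <= 0 or lam <= 0:
        return np.inf
    tb = t ** beta
    ll = np.sum(np.log(lam) + np.log(beta) + (beta - 1) * np.log(t)
                + tb + lam * (1.0 - np.exp(tb)))
    return -ll

# Synthetic lifetimes with "true" parameters (hypothetical values).
data = chen_sample(beta=0.8, lam=0.5, size=300)

res = minimize(neg_log_lik, x0=[1.0, 1.0], args=(data,), method="Nelder-Mead")
beta_hat, lam_hat = res.x
print("MLE: beta =", round(beta_hat, 3), "lambda =", round(lam_hat, 3))
```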

8.
We consider the torsional deformation of a non-homogeneous infinite elastic cylinder slackened by an external circular cut. The shear modulus of the material of the cylinder is assumed to vary with the radial coordinate according to a power law. It is assumed that the lateral surface of the cylinder as well as the surface of the cut are free of stress. The main object of this study is to establish the effect of the non-homogeneity on the stress intensity factor at the tip of the cut. The problem leads to a pair of dual series relations, the solution of which is governed by a Fredholm integral equation of the second kind with a symmetric kernel. This equation is solved numerically by reducing it to an algebraic system. It is concluded that, for any degree of non-homogeneity, the cylinder may be replaced by a half-space when D, the relative depth of the cut, exceeds 0.6. However, as the non-homogeneity increases, this value of D decreases.
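A minimal sketch of the final numerical step, reducing a Fredholm integral equation of the second kind to an algebraic system (the Nyström method), is shown below with a hypothetical symmetric kernel and right-hand side rather than the kernel arising from the dual series relations of the crack problem.

```python
import numpy as np

# Generic Fredholm equation of the second kind on [0, 1]:
#   phi(x) - integral_0^1 K(x, t) phi(t) dt = g(x)
# with a hypothetical symmetric kernel and right-hand side.
K = lambda x, t: 0.5 * np.exp(-np.abs(x - t))   # symmetric kernel (illustrative)
g = lambda x: np.sin(np.pi * x)

n = 40
nodes, weights = np.polynomial.legendre.leggauss(n)
t = 0.5 * (nodes + 1.0)          # map Gauss-Legendre nodes from [-1, 1] to [0, 1]
w = 0.5 * weights

# Nystrom discretization: (I - K_h) phi = g, with (K_h)_{ij} = w_j * K(t_i, t_j).
A = np.eye(n) - K(t[:, None], t[None, :]) * w[None, :]
phi = np.linalg.solve(A, g(t))

# phi[i] approximates the solution at the quadrature node t[i].
print(phi[:5])
```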

9.
A procedure is described for designing symmetrically laminated plates under buckling loads for minimum weight, with manufacturing uncertainty (tolerance) in the ply angle and plate thickness, which are the design variables. A minimum buckling load capacity is the design constraint. Any tolerance value within the tolerance band is assumed to be as likely as any other, so the approach is a worst-case-scenario approach. The effects of bending-twisting coupling are neglected in implementing the procedure, and the Downhill Simplex method is used as the search technique, but the methodology is flexible and allows any appropriate problem formulation and search algorithm to be substituted. Two different tolerance scenarios are used to illustrate the methodology, and plates with varying aspect ratios and loading ratios are optimally designed and compared. The results demonstrate the importance of carrying out design optimisation of composite structures with the effects of manufacturing tolerances included.

10.
This paper proposes a performance index to measure the quality level of a stochastic-flow network in which each node has a designated capacity that can take different lower levels due to various partial and complete failures. The performance index is the probability that the maximum flow of the network equals the demand d without exceeding the budget b. A simple algorithm in terms of minimal cuts is first proposed to generate all upper boundary points for (d, b), and the probability that the maximum flow is less than or equal to d can then be calculated in terms of such points. The upper boundary point for (d, b) is a maximal vector representing the capacity of each arc such that the maximum flow of the network under the budget b is d. The performance index can be calculated by repeating the proposed algorithm to obtain all upper boundary points for (d−1, b). A benchmark example is shown to illustrate the solution procedure.

11.
The goal in inverse electrocardiography (ECG) is to reconstruct cardiac electrical sources from body surface measurements and a mathematical model of torso-heart geometry that relates the sources to the measurements. This problem is ill-posed due to the attenuation and smoothing that occur inside the thorax, and small errors in the measurements yield large reconstruction errors. To overcome this ill-posedness, traditional regularization methods such as Tikhonov regularization and truncated singular value decomposition, as well as statistical approaches such as Bayesian maximum a posteriori estimation and the Kalman filter, have been applied. Statistical methods have yielded accurate inverse solutions; however, they require knowledge of a good a priori probability density function or state transition definition. Minimum relative entropy (MRE) is an approach for inferring a probability density function from a set of constraints and prior information, and it may be an alternative to those statistical methods since it operates with simpler prior-information definitions. However, the success of the MRE method also depends on a good choice of prior parameters in the form of upper and lower bound values, the expected uncertainty in the model, and the prior mean. In this paper, we explore the effects of each of these parameters on the solution of the inverse ECG problem and discuss the limitations of the method. Our results show that the prior expected value is the most influential of the three MRE parameters.
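The sketch below illustrates only the "traditional" regularization methods mentioned above (Tikhonov and truncated SVD), not the MRE approach of the paper, on a synthetic ill-conditioned forward matrix standing in for the torso-heart transfer model; all values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ill-conditioned forward model y = A x + noise (stand-in for the
# torso-heart transfer matrix; all values hypothetical).
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** np.linspace(0, -8, n)              # rapidly decaying singular values
A = U @ np.diag(s) @ V.T
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
y = A @ x_true + 1e-6 * rng.standard_normal(n)

# Tikhonov regularization: minimize ||Ax - y||^2 + lam * ||x||^2.
lam = 1e-6
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Truncated SVD: keep only the k largest singular values.
k = 10
Uk, sk, Vk = U[:, :k], s[:k], V[:, :k]
x_tsvd = Vk @ ((Uk.T @ y) / sk)

print("Tikhonov error:", np.linalg.norm(x_tik - x_true))
print("TSVD error:    ", np.linalg.norm(x_tsvd - x_true))
```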

12.
This paper deals with a stochastic-flow network in which each node and arc has a designated capacity that can take different lower levels due to various partial and complete failures. We evaluate the system reliability, defined as the probability that the maximum flow of the network is not less than a demand (d+1). A simple algorithm in terms of minimal cuts is first proposed to generate all upper boundary points for d, and the system reliability can then be calculated in terms of such points. The upper boundary point for d is a maximal vector, representing the capacity of each component (arc or node), such that the maximum flow of the network is d. A computer example is shown to illustrate the solution procedure.
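For a tiny example network, the brute-force sketch below follows the definition quoted above: it enumerates all state vectors, keeps those whose maximum s-t flow equals d, and filters out dominated vectors to obtain the upper boundary points for d. It is not the paper's minimal-cut-based algorithm, considers edge capacities only, and uses hypothetical data.

```python
from itertools import product
import networkx as nx

# Tiny network: each arc's possible capacity states (hypothetical); node
# capacities are omitted here for brevity.
arcs = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t")]
states = {("s", "a"): [0, 1, 2], ("s", "b"): [0, 1, 2],
          ("a", "t"): [0, 1, 2], ("b", "t"): [0, 1, 2]}

def max_flow(vector):
    g = nx.DiGraph()
    for arc, cap in zip(arcs, vector):
        g.add_edge(*arc, capacity=cap)
    return nx.maximum_flow_value(g, "s", "t")

def upper_boundary_points(d):
    """Maximal state vectors whose maximum s-t flow equals d (brute force)."""
    candidates = [v for v in product(*(states[a] for a in arcs))
                  if max_flow(v) == d]
    return [v for v in candidates
            if not any(all(o[i] >= v[i] for i in range(len(arcs))) and o != v
                       for o in candidates)]

print(upper_boundary_points(d=2))
```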

13.
本文对坡流挤压涂布工艺中几个普遍问题如多层挤压涂布所必须的条件,挤压涂布最高车速可有多高?最薄可涂多薄?最厚可涂多厚?挤压涂布中常见弊病等普遍关注的问题方面进行的研究进展作简要的介绍。  相似文献   

14.
In this article a method is presented for including a priori preferences of decision makers in multicriteria optimization problems. A set of Pareto-optimal solutions is determined via desirability functions of the objectives, which reflect experts' preferences regarding different objective regions. An application to noisy objective functions is not straightforward but is very relevant for practical applications. Two approaches are introduced to handle the respective uncertainties by means of the proposed preference-based Pareto optimization. Applied to the original and uncertain Binh problem and to a noisy single cut turning cost optimization problem, these approaches prove very effective in focusing on different parts of the Pareto front of the original problem in both certain and noisy environments.
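A minimal sketch of desirability-based preference handling on the (deterministic, box-constrained) Binh and Korn test problem is given below; the desirability bounds are hypothetical and the paper's treatment of noisy objectives is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

# Binh and Korn bi-objective test problem (box-constrained variant).
def objectives(x, y):
    f1 = 4 * x**2 + 4 * y**2
    f2 = (x - 5) ** 2 + (y - 5) ** 2
    return f1, f2

def desirability(value, lo, hi, r=1.0):
    """Derringer-Suich 'smaller-is-better' desirability on [lo, hi]."""
    if value <= lo:
        return 1.0
    if value >= hi:
        return 0.0
    return ((hi - value) / (hi - lo)) ** r

# Random candidate designs in the feasible box 0 <= x <= 5, 0 <= y <= 3.
X = rng.uniform(0, 5, 2000)
Y = rng.uniform(0, 3, 2000)

# Preference regions (hypothetical): prefer f1 below 40, f2 below 30.
scores = []
for x, y in zip(X, Y):
    f1, f2 = objectives(x, y)
    d1 = desirability(f1, lo=0.0, hi=40.0)
    d2 = desirability(f2, lo=0.0, hi=30.0)
    scores.append((np.sqrt(d1 * d2), f1, f2, x, y))   # geometric mean

best = max(scores)  # design with the highest overall desirability
print("overall desirability %.3f at f1=%.2f, f2=%.2f" % best[:3])
```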

15.
An improved initial-value selection algorithm for k-means document clustering    Cited: 9 times (self-citations: 0, citations by others: 9)
An improved algorithm for selecting initial values in k-means document clustering, based on the min-max principle, is proposed. The method first constructs a similarity matrix and then analyzes it using the min-max principle to select the initial cluster centers and automatically determine the number of clusters k. Experimental results show that the value of k found by this method is close to the true value.
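A rough sketch of the min-max seeding idea is shown below: on a cosine-similarity matrix, the next initial center is the document least similar to the centers chosen so far, and selection stops (fixing k) once every document is sufficiently similar to some center. The document vectors and the stopping threshold are hypothetical, and the original paper's rule for determining k may differ.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical document vectors (rows), e.g. TF-IDF features.
docs = rng.random((60, 20))
norms = np.linalg.norm(docs, axis=1, keepdims=True)
sim = (docs / norms) @ (docs / norms).T        # cosine similarity matrix

def minmax_seeds(sim, threshold=0.85):
    """Pick initial centers by the min-max principle on a similarity matrix.

    The next center is the document whose highest similarity to the
    already-chosen centers is lowest; selection stops (fixing k) once every
    remaining document is at least `threshold`-similar to some center.
    """
    centers = [int(np.argmax(sim.sum(axis=1)))]    # most "central" document
    while True:
        max_sim_to_centers = sim[:, centers].max(axis=1)
        candidate = int(np.argmin(max_sim_to_centers))
        if max_sim_to_centers[candidate] >= threshold:
            break
        centers.append(candidate)
    return centers

seeds = minmax_seeds(sim)
print("k =", len(seeds), "initial centers:", seeds)
```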

16.
We study the best OSPF-style routing problem in telecommunication networks, where weight management is employed to obtain a routing configuration with the minimum oblivious ratio. We consider polyhedral demand uncertainty: the set of traffic matrices is a polyhedron defined by a set of linear constraints, and a routing is sought with fair performance for any feasible traffic matrix in the polyhedron. The problem accurately reflects real-world networks, where demands can only be estimated, and models one of the main traffic-forwarding technologies, Open Shortest Path First (OSPF) routing with equal load sharing. This is an NP-hard problem, as it generalizes the problem with a fixed demand matrix, which is itself NP-hard.

17.
This study concerns the development of a numerical methodology for initializing immersed interface-based CFD solvers for use with complex computer-aided design (CAD) geometry. CFD solvers with higher-order discretization stencils require larger stencil widths, which become problematic in regions of space where insufficient mesh resolution is available. This problem becomes especially challenging when convoluted triangulated surface meshes generated from complex solid models are used to initialize the cut-cells. A pragmatic balance between the desired local geometry resolution and numerical accuracy is often required to find a practical solution. Here, a robust iterative fill algorithm is presented that balances geometry resolution with numerical accuracy (via stencil size). Several examples are presented to illustrate the use of this initialization procedure, which employs both the original CAD-generated triangulated surface mesh and a level set representation of the surface to initialize cut-cells and boundary proximity measures for the creation of CFD stencils. A convergence error analysis of surface area and enclosed volumes is first presented to show the effects of fill on the geometry as a function of the desired stencil size and grid resolution. The algorithm is then applied to geometrically complex problems using large eddy simulation. Two problems are considered: the first is flow around the Eiffel Tower; the second is a combustion swirler in the context of a design problem.

18.
廖晔  王顺意 《工业工程》2020,23(5):96-102
Based on the theory of maximum network flow in graph theory, an improved maximum-flow model of the road network is established. First, using the basic maximum-flow model, the Ford-Fulkerson algorithm gives a theoretical maximum throughput of 46 persons/s. Next, to account for route choice, a shortest-path model is built: Dijkstra's algorithm computes the shortest path from each source to each sink, and the A* algorithm discards paths that deviate substantially from the shortest distance, leaving the effective paths. The shortest-path results are then used to tighten the constraints of the original model, and the simplex method gives an actual maximum throughput of 23 persons/s. Finally, a linear programming model whose objective is to minimize the road-widening cost is established to optimize the roads. The results show that the existing road design meets traffic demand; if throughput must be increased with minimal road modification, key roads in the network can be widened appropriately.
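The sketch below illustrates the two basic building blocks described above, maximum flow for throughput and Dijkstra shortest paths for route selection, on a hypothetical road network using networkx; the A* path screening and simplex refinement steps are not reproduced.

```python
import networkx as nx

# Hypothetical road network: capacity in persons/s, length in metres.
g = nx.DiGraph()
roads = [("S", "A", 20, 100), ("S", "B", 15, 120), ("A", "C", 18, 80),
         ("B", "C", 12, 90), ("A", "B", 6, 40), ("C", "T", 30, 60),
         ("B", "T", 10, 150)]
for u, v, cap, length in roads:
    g.add_edge(u, v, capacity=cap, length=length)

# Step 1: theoretical maximum throughput from source S to sink T
# (the paper uses Ford-Fulkerson; networkx applies a comparable
# max-flow algorithm internally).
flow_value, _ = nx.maximum_flow(g, "S", "T")
print("max throughput:", flow_value, "persons/s")

# Step 2: route selection via Dijkstra shortest paths on road length.
path = nx.dijkstra_path(g, "S", "T", weight="length")
dist = nx.dijkstra_path_length(g, "S", "T", weight="length")
print("shortest route:", path, "length:", dist, "m")
```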

19.
Transportation problems are usually solved with the transportation tableau method. For a class of transportation problems with distribution centers, however, the tableau method makes the problem unwieldy. This paper instead constructs an auxiliary network: each distribution center in the transportation network is split into two nodes connected by a new arc, parameters are assigned to every arc, and the resulting problem is converted into a minimum-cost flow model, which simplifies both the model and the computation. On this basis, capacity expansion of the distribution centers and edges in the transportation network is considered.
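The node-splitting construction can be sketched as follows: each distribution center c becomes an arc c_in → c_out carrying the center's throughput capacity and handling cost, and the whole problem is solved as a minimum-cost flow with networkx. The supplies, demands, capacities and costs below are hypothetical.

```python
import networkx as nx

g = nx.DiGraph()

# Supplies (negative demand) and demands (hypothetical units).
g.add_node("plant1", demand=-30)
g.add_node("plant2", demand=-20)
g.add_node("cust1", demand=25)
g.add_node("cust2", demand=25)

# Each distribution center c is split into c_in -> c_out; the connecting arc
# carries the center's throughput capacity and per-unit handling cost.
for dc, cap, handling in [("dc1", 35, 2), ("dc2", 30, 3)]:
    g.add_edge(dc + "_in", dc + "_out", capacity=cap, weight=handling)

# Transportation arcs (capacity, per-unit cost) -- hypothetical data.
arcs = [("plant1", "dc1_in", 30, 4), ("plant1", "dc2_in", 30, 6),
        ("plant2", "dc1_in", 20, 5), ("plant2", "dc2_in", 20, 3),
        ("dc1_out", "cust1", 35, 2), ("dc1_out", "cust2", 35, 5),
        ("dc2_out", "cust1", 30, 4), ("dc2_out", "cust2", 30, 2)]
for u, v, cap, cost in arcs:
    g.add_edge(u, v, capacity=cap, weight=cost)

flow = nx.min_cost_flow(g)
print("total cost:", nx.cost_of_flow(g, flow))
print("through dc1:", sum(flow["dc1_in"].values()),
      "through dc2:", sum(flow["dc2_in"].values()))
```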

20.
The computation of the reliability of two-terminal networks is a classical reliability problem. For these types of problems, one is interested, from a general perspective, in obtaining the probability that two specific nodes can communicate. This paper presents a holistic algorithm for the analysis of general networks that follow a two-terminal rationale. The algorithm is based on a set-replacement approach and an element-inheritance strategy that effectively obtains the minimal cut sets associated with a given network. The vast majority of methods available for obtaining two-terminal reliability are based on assumptions about the performance of the network. Some methods assume network components can be in one of two states: (i) completely failed or (ii) perfectly functioning. Others assume that nodes are perfectly reliable, so these methods must be complemented or transformed to account for node failure. The remaining methods assume minimal cut sets can be readily computed in order to analyze more complex network and component behavior. The algorithm presented in this paper differs significantly from previous approaches in the literature in that it is based on a predecessor matrix and an element substitution technique that allow for the exact computation of minimal cut sets and the immediate inclusion of node failure without any changes to the pseudo-code. Several case networks are used to validate and illustrate the algorithms.
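For a tiny two-terminal network, the brute-force sketch below enumerates minimal edge cut sets directly (a set of edges whose removal disconnects the terminals and no proper subset of which does so); it is meant only to make the concept concrete and is not the predecessor-matrix algorithm of the paper.

```python
from itertools import combinations
import networkx as nx

# Tiny two-terminal network (hypothetical edges).
edges = [("s", "a"), ("s", "b"), ("a", "b"), ("a", "t"), ("b", "t")]

def disconnects(cut):
    g = nx.Graph([e for e in edges if e not in cut])
    g.add_nodes_from(["s", "t"])          # keep terminals even if isolated
    return not nx.has_path(g, "s", "t")

def minimal_cut_sets():
    """All minimal edge cut sets separating s and t (brute force)."""
    cuts = []
    for r in range(1, len(edges) + 1):
        for cut in combinations(edges, r):
            if disconnects(cut) and not any(set(c) < set(cut) for c in cuts):
                cuts.append(cut)
    return cuts

for c in minimal_cut_sets():
    print(c)
```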
