Similar Literature
20 similar documents found (search time: 16 ms)
1.
In the multi-objective optimization literature, different algorithms address different classes of problems. This paper presents a min–max multi-objective procedure that simultaneously handles two objectives, the makespan and the sum of the earliness and tardiness of jobs, in due-window machine scheduling problems. When the min–max formulation is combined with the weighting method, the decision maker gains the flexibility of mixing weights and a distance parameter to generate a set of Pareto-efficient solutions. This research extends a new hybrid metaheuristic (HMH) to parallel-machine scheduling problems with sequence-dependent setup times. The HMH comprises three components: an initial population generation method based on ant colony optimization (ACO); simulated annealing (SA), an evolutionary component that accepts worse moves with a certain probability to avoid becoming trapped in a local optimum; and a variable neighborhood search (VNS) with three local search procedures to improve the population. In addition, two VNS-based HMHs combining two of these methods, SA/VNS and ACO/VNS, are proposed for the same scheduling problems. A design-of-experiments approach is employed to calibrate the parameters. The non-dominated sets obtained by the HMH and by the two best existing bi-criteria scheduling algorithms are compared in terms of various indices, and the computational results show that the proposed algorithm is capable of producing a number of high-quality Pareto-optimal scheduling plans. In addition, extensive computational experiments are carried out to analyze the different parameters of the algorithm.
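As a minimal illustration of the min–max idea combined with the weighting method, the sketch below scalarizes the two objectives (makespan and total earliness–tardiness) into a single score; the objective values, weights, and ideal point are hypothetical, and none of the HMH components (ACO, SA, VNS) are reproduced here.

```python
def chebyshev_score(makespan, et_sum, w=(0.5, 0.5), ideal=(0.0, 0.0), p=float("inf")):
    """p = inf gives the min-max (Chebyshev) metric; a finite p gives the
    usual weighted L_p distance to the ideal point."""
    d1 = w[0] * (makespan - ideal[0])
    d2 = w[1] * (et_sum - ideal[1])
    if p == float("inf"):
        return max(d1, d2)
    return (d1 ** p + d2 ** p) ** (1.0 / p)

# Varying the weights and the distance parameter p sweeps out different
# Pareto-efficient schedules: keep the candidate with the smallest score.
candidates = [(40, 12), (35, 20), (50, 5)]   # hypothetical (makespan, sum E/T)
best = min(candidates, key=lambda c: chebyshev_score(*c, w=(0.7, 0.3)))
print(best)
```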

2.
In this paper the one-machine scheduling problem with linear earliness and tardiness costs is considered. The cost functions are job-dependent and asymmetric. The problem consists of two sub-problems: the first is to find a sequence of jobs, and the second is to find the job completion times that are optimal for the given sequence. We consider the second sub-problem and propose an algorithm solving it in O(n log n) time.
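For concreteness, a minimal sketch of the cost that the timing sub-problem minimizes, with hypothetical job data; the O(n log n) algorithm itself is not reproduced, and the timing shown (each job as early as the sequence allows) is just one feasible candidate, not the optimum.

```python
# For a fixed job sequence, the timing sub-problem chooses completion times C
# (non-overlapping, in sequence order) minimizing job-dependent asymmetric
# linear earliness/tardiness costs.

def et_cost(C, d, alpha, beta):
    """alpha[j]: per-unit earliness cost, beta[j]: per-unit tardiness cost."""
    return sum(alpha[j] * max(0, d[j] - C[j]) + beta[j] * max(0, C[j] - d[j])
               for j in range(len(C)))

p     = [3, 2, 4]        # processing times, in sequence order (assumed data)
d     = [5, 6, 12]       # due dates
alpha = [1.0, 2.0, 0.5]  # earliness penalties (asymmetric, job-dependent)
beta  = [3.0, 1.0, 2.0]  # tardiness penalties

# One feasible timing: schedule each job as early as the sequence allows.
C, t = [], 0
for j in range(len(p)):
    t += p[j]
    C.append(t)
print(et_cost(C, d, alpha, beta))  # the O(n log n) algorithm finds the optimal C
```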

3.
We study scheduling problems with two competing agents sharing the same machines. All the jobs of both agents have identical processing times and a common due date. Each agent needs to process a set of jobs and has his own objective function. The objective of the first agent is total weighted earliness–tardiness, whereas the objective of the second agent is maximum weighted deviation from the common due date. Our goal is to minimize the objective of the first agent, subject to an upper bound on the objective value of the second agent. We consider a single machine and parallel (both identical and uniform) machine settings. In all cases an optimal solution is shown to be obtainable in polynomial time by solving a number of linear assignment problems. We show that the running times of the single-machine and parallel-identical-machine algorithms are O(n^(m+3)), where n is the number of jobs and m is the number of machines. The algorithm for parallel uniform machines requires O(n^(m+3) m^3) time and, under very reasonable assumptions on the machine speeds, reduces to O(n^(m+3)). Since the number of machines is given, these running times are polynomial in the number of jobs.
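A minimal sketch of the assignment-problem building block: with identical processing times, deciding which job occupies which (machine, position) slot is a linear assignment problem. The cost matrix below is a hypothetical stand-in for the weighted earliness–tardiness cost of placing job j in slot k, and SciPy's solver stands in for any polynomial linear-assignment algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n = 6                                # jobs == slots (assumed square instance)
cost = rng.integers(1, 10, (n, n))   # cost[j, k]: job j in slot k (assumed data)

rows, cols = linear_sum_assignment(cost)   # polynomial-time optimal assignment
print(list(zip(rows, cols)), cost[rows, cols].sum())
```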

4.
In this paper, we consider a class of single-machine job scheduling problems where the objective is to minimize the weighted sum of earliness–tardiness penalties of jobs. The weights are job-independent, but they depend on whether a job is early or tardy. The restricted version of the problem, where the common due date is smaller than a critical value, is known to be NP-complete. While the dynamic programming formulation runs out of memory for large problem instances, the depth-first branch-and-bound formulation runs slowly for large problems since it uses a tree search space. In this paper, we suggest an algorithm to optimally solve large instances of the restricted version of the problem. The algorithm uses a graph search space. Unlike dynamic programming, the algorithm can output optimal solutions even when available memory is limited. It has been found to run faster than the dynamic programming and depth-first branch-and-bound formulations and can solve much larger instances of the problem in reasonable time. New upper and lower bounds are proposed and used. Experimental findings are given in detail.

Scope and purpose: A class of single-machine problems arising from scheduling jobs in a JIT environment is considered in this paper. The objective is to minimize the total weighted earliness–tardiness penalties of jobs. We present a new algorithm and conduct extensive empirical runs to show that it performs much better than existing approaches in solving large instances of the problem.

5.
In this note, we point out two major errors in the paper “Minimizing total tardiness on parallel machines with preemptions” by Kravchenko and Werner (2012). More precisely, they claimed to have proved that both problems P|pmtn|∑T_j and P|r_j, p_j=p, pmtn|∑T_j are NP-hard. We give a counter-example to their proofs, leaving the complexity of these two problems open.

6.
In this paper we study the problem of scheduling n jobs with release dates, due dates, weights, and equal processing times on a single machine. The objective is to minimize total weighted tardiness. We formulate the problem as a time-indexed ILP, after which we solve the LP-relaxation. We show that for certain special cases (namely when all due dates, all weights, or all release dates are equal, or when all due dates and release dates are equally ordered), the solution of the LP-relaxation is either integral or can be adjusted in polynomial time into an integral one. For the general case we present a branching rule that performs well. Furthermore, we show that the same approach holds for the variant with m identical, parallel machines. Finally, we show that with a minor modification the same approach also holds for the single-machine problems of minimizing the weighted number of late jobs (1|r_j, p_j=p|∑w_j U_j) and the weighted late work (1|r_j, p_j=p|∑w_j V_j), as well as their respective variants with m identical, parallel machines. We further show how these problems can be solved by applying column generation when there is not sufficient memory available to apply the direct ILP approach.
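A hedged sketch of such a time-indexed ILP for 1|r_j, p_j=p|∑w_j T_j, written with PuLP and small invented data; the paper's branching rule, parallel-machine extension, and column generation are not reproduced.

```python
# x[j, t] = 1 iff job j starts at time t; equal processing times p.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

p = 2
r = [0, 1, 0]                        # release dates (assumed)
d = [3, 5, 4]                        # due dates (assumed)
w = [2, 1, 3]                        # weights (assumed)
n, T = len(r), max(r) + len(r) * p   # horizon long enough for all jobs

prob = LpProblem("weighted_tardiness", LpMinimize)
x = {(j, t): LpVariable(f"x_{j}_{t}", cat=LpBinary)
     for j in range(n) for t in range(r[j], T - p + 1)}

# Tardiness cost of starting job j at t is w_j * max(0, t + p - d_j).
prob += lpSum(w[j] * max(0, t + p - d[j]) * x[j, t] for (j, t) in x)
for j in range(n):                                   # each job starts exactly once
    prob += lpSum(x[j, t] for t in range(r[j], T - p + 1)) == 1
for tau in range(T):                                 # machine capacity
    prob += lpSum(x[j, t] for (j, t) in x if t <= tau < t + p) <= 1

prob.solve()
print([(j, t) for (j, t) in x if x[j, t].value() == 1])
```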

7.
This paper addresses a new model for the one-machine earliness–tardiness scheduling problem where jobs can be interrupted. Some dominance rules and a lower bound are derived. A new timing algorithm is also presented, and a local search algorithm based on this timing algorithm permits the computation of good feasible solutions. We experimentally compare our timing algorithm with a previously published timing algorithm. The tests show that the new timing algorithm is significantly faster, especially on large instances. The values of the solutions are compared to the lower bound.

8.
This paper addresses the one-machine scheduling problem in which n jobs have distinct due dates with earliness and tardiness costs. Fast neighborhoods, based on a block representation of the schedule, are proposed for the problem. A timing operator is presented, as well as swap and extract-and-reinsert neighborhoods. They are used in an iterated local search framework. Two types of perturbations are developed, based respectively on random swaps and on earliness and tardiness costs. Computational results show that very good solutions for instances with significantly more than 100 jobs can be derived in a few seconds.
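A minimal iterated-local-search skeleton in the spirit described above, assuming a swap neighborhood, a random-swap perturbation, and a toy earliness–tardiness cost; the block representation, timing operator, and extract-and-reinsert neighborhood are not reproduced.

```python
import random

def swap_local_search(seq, cost):
    """First-improvement descent over the swap neighborhood."""
    best = cost(seq)
    improved = True
    while improved:
        improved = False
        for i in range(len(seq) - 1):
            for j in range(i + 1, len(seq)):
                seq[i], seq[j] = seq[j], seq[i]
                c = cost(seq)
                if c < best:
                    best, improved = c, True
                else:
                    seq[i], seq[j] = seq[j], seq[i]   # undo the swap
    return seq, best

def iterated_local_search(n, cost, iters=50, strength=2):
    best_seq, best = swap_local_search(list(range(n)), cost)
    for _ in range(iters):
        cand = best_seq[:]
        for _ in range(strength):                     # random-swap perturbation
            i, j = random.sample(range(n), 2)
            cand[i], cand[j] = cand[j], cand[i]
        cand, c = swap_local_search(cand, cost)
        if c < best:
            best_seq, best = cand, c
    return best_seq, best

# Toy earliness-tardiness cost of a sequence (hypothetical data).
p, d = [3, 1, 4, 2], [4, 2, 10, 6]
def et(seq):
    t, tot = 0, 0
    for j in seq:
        t += p[j]
        tot += abs(t - d[j])
    return tot

print(iterated_local_search(4, et))
```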

9.
In this paper, we consider the resource-constrained project scheduling problem with a due date for each activity. The objective is to minimize the net present value of the earliness–tardiness penalty costs. The problem is first modeled mathematically. Then, two meta-heuristics, a genetic algorithm and simulated annealing, are proposed to solve this strongly NP-hard problem. Design of experiments and response surface methodology are employed to fine-tune the meta-heuristics' parameters. Finally, a comprehensive computational experiment, performed on a set of instances, is described, and the results are analyzed and discussed.
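A minimal sketch of the simulated-annealing component, assuming a generic Metropolis acceptance rule with geometric cooling; the solution encoding, neighbour move, and NPV earliness–tardiness objective are left abstract, and the toy usage at the end is purely illustrative.

```python
import math, random

def anneal(x0, cost, neighbour, T=10.0, alpha=0.999, steps=5000):
    """Accept worse neighbours with probability exp(-delta / T)."""
    x, c = x0, cost(x0)
    best, best_c = x, c
    for _ in range(steps):
        y = neighbour(x)
        cy = cost(y)
        if cy <= c or random.random() < math.exp((c - cy) / T):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        T *= alpha                                  # geometric cooling
    return best, best_c

# Toy usage: minimize |x - 7| over integers via random +/-1 moves.
print(anneal(0, lambda x: abs(x - 7), lambda x: x + random.choice((-1, 1))))
```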

10.
The paper addresses the problem of multi-slot just-in-time scheduling. Unlike the existing literature on this subject, it studies a more general criterion: the minimization of the schedule makespan rather than the minimization of the number of slots used by the schedule. It gives an O(n log² n)-time optimization algorithm for the single-machine problem. For an arbitrary number m>1 of identical parallel machines, it presents an O(n log n)-time optimization algorithm for the case when the processing time of each job does not exceed its due date. For the general case on m>1 machines, it proposes a polynomial-time constant-factor approximation algorithm.

11.
We consider the problem of scheduling two jobs A and B on a set of m uniform parallel machines. Each job is assumed to be independent of the other: job A and job B are made up of n_A and n_B operations, respectively. Each operation is defined by its processing time and possibly additional data such as a due date, a weight, etc., and must be processed on a single machine. All machines are uniform, i.e. each machine has its own processing speed. Note that we consider the special case of equal-size operations, i.e. all operations have the same processing time. The operations of job A must be scheduled to minimize a general cost function F_A, whereas it is the makespan that must be minimized when scheduling the operations of job B. Such problems are called multiple-agent scheduling problems. As we are dealing with two conflicting criteria, we focus on the computation of strict Pareto optima for the F_A and C_max^B criteria. In this paper we consider different min-max and min-sum versions of the function F_A and provide special properties as well as polynomial-time algorithms.
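A minimal sketch of extracting strict Pareto optima for the pair (F_A, C_max^B), assuming hypothetical criterion values: a point is kept iff no other point is at least as good in both criteria and strictly better in at least one.

```python
def strict_pareto(points):
    """Keep the points not dominated by any other point (minimization)."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

candidates = [(10, 7), (8, 9), (10, 6), (12, 5), (9, 9)]  # (F_A, Cmax_B), assumed
print(strict_pareto(candidates))   # -> [(8, 9), (10, 6), (12, 5)]
```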

12.
Even though shared-memory concurrency is a paradigm frequently used for developing parallel applications on small- and middle-sized machines, experience has shown that it is hard to use. This is largely caused by synchronization primitives that are low-level, inherently non-deterministic, and consequently non-intuitive to use. In this paper, we present the Nornir run-time system. Nornir is comparable to well-known frameworks such as MapReduce and Dryad, which are recognized for their efficiency and simplicity. Unlike these frameworks, Nornir also supports process structures containing branches and cycles. Nornir is based on the formalism of Kahn process networks, a shared-nothing, message-passing model of concurrency, which we deem a simple and deterministic alternative to shared-memory concurrency. Experiments with real and synthetic benchmarks on up to 8 CPUs show that performance in most cases scales almost linearly with the number of CPUs when not limited by data dependencies. We also show that this modeling flexibility allows Nornir to outperform its MapReduce counterparts on well-known benchmarks.
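A toy plain-Python illustration (not Nornir's API) of the Kahn-process-network style of concurrency: processes share nothing and communicate only through FIFO channels with blocking reads, so the output is deterministic regardless of thread scheduling.

```python
import threading, queue

def producer(out):
    for i in range(5):
        out.put(i)
    out.put(None)                        # end-of-stream marker

def doubler(inp, out):
    while (x := inp.get()) is not None:  # blocking read: Kahn semantics
        out.put(2 * x)
    out.put(None)

def consumer(inp, result):
    while (x := inp.get()) is not None:
        result.append(x)

a, b, result = queue.Queue(), queue.Queue(), []
threads = [threading.Thread(target=producer, args=(a,)),
           threading.Thread(target=doubler, args=(a, b)),
           threading.Thread(target=consumer, args=(b, result))]
for t in threads: t.start()
for t in threads: t.join()
print(result)                            # always [0, 2, 4, 6, 8]
```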

13.
Wang–Landau sampling is implemented on the Graphics Processing Unit (GPU) with the Compute Unified Device Architecture (CUDA). Performance on three different GPU cards, including one of the new-generation Fermi-architecture cards, is compared with that on a Central Processing Unit (CPU). The parameters for massively parallel Wang–Landau sampling are tuned in order to achieve fast convergence. For simulations of water cluster systems, we obtain an average speedup of over 50 times for a given workload.
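A hedged single-threaded sketch of Wang–Landau sampling on a small periodic 1D Ising chain (not the paper's water-cluster system or its GPU tuning): moves in energy are accepted with probability min(1, g(E_old)/g(E_new)), ln g is stored for numerical stability, and the modification factor is halved whenever the visit histogram is roughly flat.

```python
import math, random

N = 10
spins = [random.choice((-1, 1)) for _ in range(N)]

def energy(s):
    return -sum(s[i] * s[(i + 1) % N] for i in range(N))

E = energy(spins)
levels = range(-N, N + 1, 4)               # attainable energies for a PBC chain
ln_g = {e: 0.0 for e in levels}            # running estimate of ln g(E)
hist = {e: 0 for e in levels}              # visit histogram
ln_f = 1.0                                 # modification factor, ln f

while ln_f > 1e-4:
    for _ in range(10000):
        i = random.randrange(N)
        dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
        E_new = E + dE
        if random.random() < math.exp(ln_g[E] - ln_g[E_new]):
            spins[i] = -spins[i]           # accept the single-spin flip
            E = E_new
        ln_g[E] += ln_f                    # update at the current energy
        hist[E] += 1
    if min(hist.values()) > 0.8 * (sum(hist.values()) / len(hist)):
        ln_f /= 2.0                        # histogram flat enough: refine
        hist = {e: 0 for e in levels}

print({e: round(v - ln_g[-N], 2) for e, v in ln_g.items()})  # relative ln g(E)
```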

14.
This paper reveals the relationship between the weighted Moore–Penrose generalized inverse and the force analysis of overconstrained parallel mechanisms (PMs), including redundantly actuated PMs and passive overconstrained PMs. The solution for the optimal distribution of the driving forces/torques of redundantly actuated PMs is derived in the form of a weighted Moore–Penrose inverse. Different force distributions can therefore be achieved simply by changing the value of the weighted factor matrix according to different optimization goals, and this approach greatly improves computational efficiency in solving such problems. In addition, an explicit expression is deduced relating the weighted Moore–Penrose generalized inverse to the constraint-wrench solution of general passive overconstrained PMs (in which each supporting limb may supply single or multiple constraint wrenches). In this expression, the weighted factor matrix is composed of the stiffness matrices of each limb's constraint wrenches. As numerical examples, the driving forces/torques or the constraint forces/couples for two kinds of overconstrained PMs are solved directly with the weighted Moore–Penrose generalized inverse. The verification results confirm the correctness of the relationship obtained in this paper. Using the weighted Moore–Penrose generalized inverse to solve the driving forces/torques or constraint forces/couples of overconstrained PMs provides solutions in a unified, simple form and improves computational efficiency.
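A numerical sketch of the weighted Moore–Penrose relation described above, with random stand-ins for the mechanism's Jacobian and stiffness data: among all x with Ax = b, x = W⁻¹Aᵀ(AW⁻¹Aᵀ)⁻¹b minimizes xᵀWx, so changing W redistributes the forces among the redundant actuators.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 7))        # underdetermined: redundant actuation
b = rng.standard_normal(4)             # external wrench to balance (assumed)
W = np.diag(rng.uniform(0.5, 2.0, 7))  # SPD weight (e.g., limb stiffnesses)

W_inv = np.linalg.inv(W)
A_w_pinv = W_inv @ A.T @ np.linalg.inv(A @ W_inv @ A.T)  # weighted M-P inverse
x = A_w_pinv @ b                       # minimizes x^T W x subject to A x = b

print(np.allclose(A @ x, b))           # feasibility check: True
# A different weighted factor matrix W yields a different force distribution.
```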

15.
Modern graphics processing units (GPUs) have been widely utilized in magnetohydrodynamic (MHD) simulations in recent years. Due to the limited memory of a single GPU, distributed multi-GPU systems need to be explored for large-scale MHD simulations, but the data transfer between GPUs bottlenecks the efficiency of simulations on such systems. In this paper we propose a novel GPU Direct–MPI hybrid approach to address this problem for overall performance enhancement. Our approach consists of two strategies: (1) we exploit GPU Direct 2.0 to speed up data transfers between multiple GPUs in a single node and reduce the total number of message passing interface (MPI) communications; (2) we design Compute Unified Device Architecture (CUDA) kernels, instead of using memory copy, to speed up the fragmented data exchange in the three-dimensional (3D) decomposition. 3D decomposition is usually not preferred on distributed multi-GPU systems because of the low efficiency of this fragmented data exchange; our approach makes 3D decomposition viable on such systems, reducing the memory usage and computation time of each partition of the computational domain. Experimental results show twice the FLOPS compared to a common 2D-decomposition MPI-only implementation. The proposed approach has been developed into an efficient implementation for MHD simulations on distributed multi-GPU systems, called the MGPU–MHD code. The code realizes the GPU parallelization of a total variation diminishing (TVD) algorithm for solving the multidimensional ideal MHD equations, extending our work from single-GPU computation (Wong et al., 2011) to multiple GPUs. Numerical tests and performance measurements are conducted on the TSUBAME 2.0 supercomputer at the Tokyo Institute of Technology. Our code achieves 2 TFLOPS in double precision for a problem with 1200³ grid points using 216 GPUs.

16.
In this article, fuzzy concepts are applied to the analysis of the system reliability problem. Fuzzy numbers are used to construct the fuzzy reliability of a non-repairable multi-state series–parallel system (NMSS). The fuzzy failure rate function is represented by an exponential fuzzy number. Using this approach, the fuzzy system reliability of the NMSS is created. In order to analyse this fuzzy system reliability, a fuzzy Bayesian point estimate of the fuzzy system reliability is obtained via the conventional Bayesian formula, and the posterior fuzzy system reliability of the NMSS is developed by Bayesian inference with fuzzy probabilities. Finally, the performance of the method is measured by the mean square error of the fuzzy Bayesian point estimate of the fuzzy system reliability of the NMSS.
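A minimal sketch of the fuzzy-reliability construction, assuming a triangular fuzzy failure rate rather than the paper's NMSS data: each α-cut gives an interval for λ, and R(t) = exp(−λt) maps it (with endpoints swapped, since R decreases in λ) to an interval for the fuzzy reliability.

```python
import math

def lambda_cut(a, m, c, alpha):
    """Alpha-cut of a triangular fuzzy number (a, m, c)."""
    return a + alpha * (m - a), c - alpha * (c - m)

def reliability_cut(a, m, c, t, alpha):
    lo, hi = lambda_cut(a, m, c, alpha)
    return math.exp(-hi * t), math.exp(-lo * t)   # larger rate -> lower R

for alpha in (0.0, 0.5, 1.0):
    print(alpha, reliability_cut(0.01, 0.02, 0.04, t=50, alpha=alpha))
# At alpha = 1 the interval collapses to the crisp value exp(-0.02 * 50).
```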

17.
A hybrid scheme that utilizes MPI for distributed-memory parallelism and OpenMP for shared-memory parallelism is presented. The work is motivated by the desire to achieve exceptionally high Reynolds numbers in pseudospectral computations of fluid turbulence on emerging petascale, high core-count, massively parallel processing systems. The hybrid implementation derives from and augments a well-tested scalable MPI-parallelized pseudospectral code. The hybrid paradigm leads to a new picture for the domain decomposition of the pseudospectral grids, which is helpful in understanding, among other things, the 3D transpose of the global data that is necessary for the parallel fast Fourier transforms that are the central component of the numerical discretizations. Details of the hybrid implementation are provided, and performance tests illustrate the utility of the method. It is shown that the hybrid scheme achieves good scalability up to ~20,000 compute cores, with a maximum efficiency of 89% and a mean of 79%. Data are presented that help guide the choice of the optimal number of MPI tasks and OpenMP threads to maximize code performance on two different platforms.

18.
This paper describes a massively parallel code for a state-of-the-art thermal lattice Boltzmann method. Our code has been carefully optimized for performance on a single GPU and for good scaling behavior on a large number of GPUs. Versions of this code have already been used for large-scale studies of convective turbulence. GPUs are becoming increasingly popular in HPC applications, as they are able to deliver higher performance than traditional processors. Writing efficient programs for large clusters is not an easy task, as codes must adapt to increasingly parallel architectures and the overheads of node-to-node communications must be properly handled. We describe the structure of our code, discussing several key design choices that were guided by theoretical models of performance and experimental benchmarks. We present an extensive set of performance measurements, identify the main bottlenecks, and compare the results of our GPU code with those measured on other currently available high-performance processors. The result is a production-grade code able to deliver a sustained performance of several tens of Tflops, as well as a design and optimization methodology that can be used for the development of other high-performance applications for computational physics.

19.
Dimitrova, Rayna; Majumdar, Rupak. Acta Informatica 55(2): 153–189 (2018).
Extensions to finite-state automata on strings, such as multi-head automata or multi-counter automata, have been successfully used to encode many infinite-state non-regular...

20.
In the course of developing a microfluidic analytical platform incorporating the polymerase chain reaction (PCR) and subsequent capillary electrophoresis (CE) analysis for a variety of bio-assays, we examined PCR inhibition through surface interactions with the chip materials. Our devices perform PCR in a three-layer chip, a glass–poly(dimethylsiloxane)–glass sandwich in which the poly(dimethylsiloxane) (PDMS, a silicone rubber) layer is used for pneumatic membrane pumping and valving of the PCR reagents. Initial on-chip PCR–CE tests of BK virus, replicated in multiple uncoated chips, showed variable results, usually yielding no detectable product at the target sample concentrations used. Subsequent “chip-flush” experiments, where water or reagents were flushed through a chip and subsequently incorporated in off-chip PCR, highlighted bovine serum albumin (BSA), among other pre-treatments, chip materials, and PCR recipes, as being effective in mitigating inhibition. When the BSA channel pre-coating was applied in on-chip PCR–CE experiments, a substantial improvement (10× to 40×) in the signal-to-noise ratio (S/N) of the CE product peak was conferred, and was shown with high confidence despite high S/N variability. This is the first study to quantitatively examine BSA's ability to reduce inhibition of PCR performed on PDMS chips, and one of very few microfluidic PCR-inhibition studies of any kind to use a large number of microfluidic chips (~400). The simplicity and effectiveness of our BSA coating suggest that passivating materials applied to microfluidic device channel networks may provide a viable pathway for the development of bio-compatible devices with reduced complexity and cost.
