A total of 10,000 query results were returned (search time: 562 ms).
991.
Permutation flow shop scheduling (PFSP) is among the most studied scheduling settings. In this paper, a hybrid teaching–learning-based optimization algorithm (HTLBO) is proposed for the PFSP; it combines a novel teaching–learning-based optimization algorithm for solution evolution with a variable neighborhood search (VNS) for fast solution improvement, and determines the job sequence under the makespan criterion and the maximum-lateness criterion, respectively. To convert an individual into a job permutation, the largest order value (LOV) rule is used. Furthermore, simulated annealing (SA) is adopted as the local-search method of the VNS after the shaking procedure. Experimental comparisons with other competitive algorithms on public PFSP test instances show the effectiveness of the proposed algorithm. For the DMU problems, 19 new upper bounds are obtained for the instances with the makespan criterion and 88 new upper bounds for the instances with the maximum-lateness criterion.
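As a rough illustration of how such a decoder works, the sketch below applies a largest-order-value style mapping to a continuous individual and evaluates the resulting permutation's makespan; the ordering convention and the toy instance are assumptions for illustration, not the paper's exact HTLBO implementation.

```python
def lov_decode(individual):
    """Largest-order-value style decoding: the position with the largest value
    is scheduled first (the exact convention used in the paper may differ)."""
    return sorted(range(len(individual)), key=lambda i: -individual[i])

def makespan(permutation, proc_times):
    """Completion time of the last job on the last machine for a permutation
    flow shop, with proc_times[job][machine]."""
    n_machines = len(proc_times[0])
    completion = [0.0] * n_machines
    for job in permutation:
        completion[0] += proc_times[job][0]
        for m in range(1, n_machines):
            completion[m] = max(completion[m], completion[m - 1]) + proc_times[job][m]
    return completion[-1]

# Toy instance: 4 jobs x 3 machines, and one continuous TLBO-style individual.
proc = [[3, 2, 4], [1, 5, 2], [4, 1, 3], [2, 3, 1]]
x = [0.7, -0.2, 1.3, 0.1]
perm = lov_decode(x)            # -> [2, 0, 3, 1]
print(perm, makespan(perm, proc))
```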
992.
This paper presents a capacity planning system (CPS) to generate feasible production schedules, improve production efficiency, and avoid overcapacity in the packaging industry. The CPS applies workload leveling and finite capacity planning to assign orders to production lines while considering several production characteristics, such as drying time, quantity splitting owing to the cutting pattern of the product type, and the variability of machine capacity thresholds. The CPS consists of five modules: the order treatment module (OTM), order priority module (OPM), lot release module (LRM), workload accumulation module (WAM), and workload balance module (WBM). A designed experiment is used to evaluate the effectiveness and efficiency of the proposed CPS, with five factors at various levels (number of orders, order size, order size variance, order priority, and balance policy) and three response variables: machine workload balance, order due-date deviation, and lateness. The results are further used to find the settings of order priority and balance policy that yield the most favorable responses under the three given environmental factors.
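A minimal sketch of the workload-leveling idea behind the accumulation and balance modules is given below; the greedy least-loaded assignment and the single capacity threshold are illustrative assumptions, not the CPS itself.

```python
def assign_orders(orders, n_lines, capacity):
    """Workload-leveling sketch (not the paper's CPS modules): release orders in
    priority sequence and assign each to the least-loaded line that still has
    capacity; 'orders' is a list of (order_id, workload_hours)."""
    load = [0.0] * n_lines
    assignment = {}
    for order_id, hours in orders:
        # candidate lines that stay below the capacity threshold
        candidates = [i for i in range(n_lines) if load[i] + hours <= capacity]
        line = min(candidates or range(n_lines), key=lambda i: load[i])
        load[line] += hours
        assignment[order_id] = line
    return assignment, load

orders = [("A", 8), ("B", 5), ("C", 7), ("D", 3), ("E", 6)]
print(assign_orders(orders, n_lines=2, capacity=16))
```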
993.
In condition-based maintenance, a common practice is to record a condition reading at regular intervals; once the reading exceeds a pre-set critical level, the monitored item is declared faulty and repair or replacement may be initiated. Surprisingly, however, in both practice and theory little attention has been paid to whether the critical level and the monitoring interval are set in a cost-effective way. This paper reports on the development of a model that can be used to determine the optimal critical level and monitoring interval in condition-based maintenance with respect to a criterion of interest. The model is built on a random coefficient growth model, in which the coefficients of the regression growth model are assumed to follow known distribution functions. A simple example is given to illustrate the modelling ideas.
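The sketch below illustrates the kind of evaluation such a model supports: a Monte Carlo estimate of the cost rate of a (critical level, monitoring interval) policy under a linear random-coefficient growth path. The degradation form, the cost figures, and the failure level are assumptions for illustration, not the paper's model.

```python
import random

def expected_cost_rate(level, interval, n_paths=2000,
                       c_inspect=1.0, c_preventive=20.0, c_failure=100.0,
                       failure_level=12.0, horizon=200.0):
    """Monte Carlo sketch under illustrative assumptions: degradation
    X(t) = a + b*t with random coefficients a, b; readings every 'interval';
    preventive replacement when a reading exceeds 'level'; a failure is charged
    if the reading already exceeds the failure level at an inspection."""
    total_cost, total_time = 0.0, 0.0
    for _ in range(n_paths):
        a = random.gauss(0.0, 0.5)       # assumed intercept distribution
        b = random.gauss(0.10, 0.03)     # assumed growth-rate distribution
        t, cost = 0.0, 0.0
        while t < horizon:
            t += interval
            cost += c_inspect
            x = a + b * t
            if x >= failure_level:       # failed before detection
                cost += c_failure
                break
            if x >= level:               # detected, preventive replacement
                cost += c_preventive
                break
        total_cost += cost
        total_time += t
    return total_cost / total_time

# Compare a few candidate (critical level, interval) policies
for level, interval in [(8.0, 5.0), (8.0, 10.0), (10.0, 5.0)]:
    print(level, interval, round(expected_cost_rate(level, interval), 3))
```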
994.
General-purpose graphics processing units (GPGPUs) play an important role in massively parallel computing today. A GPGPU core typically holds thousands of threads, with hardware threads organized into warps. With the single-instruction, multiple-thread (SIMT) pipeline, a GPGPU can achieve high performance, but threads in the same warp that take different branches violate the SIMD style and cause branch divergence. To support divergence, a hardware stack is used to execute all branch paths sequentially, so branch divergence leads to performance degradation. This article represents the PDOM (post-dominator) stack as a binary tree in which each leaf corresponds to a branch target. We propose a new PDOM stack, called PDOM-ASI, that can schedule all of the tree leaves. The new stack can hide more long-operation latencies by providing more schedulable warps, without the problem of warp over-subdivision. In addition, a multi-level warp scheduling policy is proposed that lets part of the warps run ahead, creating more opportunities to hide latencies. Simulation results show that our policies achieve a 10.5% performance improvement over the baseline policies with only 1.33% hardware area overhead.
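To make the divergence mechanism concrete, the sketch below mimics a baseline post-dominator reconvergence stack that serialises the two sides of a branch; it illustrates why divergent warps lose throughput and is not the PDOM-ASI design proposed in the article.

```python
def simt_branch(active_mask, taken_mask, then_body, else_body, reconverge):
    """Sketch of baseline post-dominator (PDOM) reconvergence: on a divergent
    branch the warp pushes both paths onto a stack and executes them one after
    the other, each with only a subset of lanes enabled."""
    stack = []
    not_taken = [a and not t for a, t in zip(active_mask, taken_mask)]
    taken = [a and t for a, t in zip(active_mask, taken_mask)]
    # push the else-path first and the then-path second (arbitrary order here)
    if any(not_taken):
        stack.append((else_body, not_taken))
    if any(taken):
        stack.append((then_body, taken))
    while stack:
        body, mask = stack.pop()
        body(mask)                      # execute with only these lanes enabled
    reconverge(active_mask)             # all lanes re-enabled at the post-dominator

# 4-lane toy warp: lanes with data < 0 take the 'else' path
data = [3, -1, 5, -2]
simt_branch(
    active_mask=[True] * 4,
    taken_mask=[d >= 0 for d in data],
    then_body=lambda m: print("then lanes:", m),
    else_body=lambda m: print("else lanes:", m),
    reconverge=lambda m: print("reconverged:", m),
)
```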
995.
Wafers are produced in an environment with uncertain demand and failure-prone machines. Production planners have to react to changes in both machine availability and target output, and revise plans appropriately. The scientific community mostly proposes WIP-oriented mid-term production planning to solve this problem. In such approaches, production is planned by defining targets for throughput rates and buffer levels of selected operations. In industrial practice, however, cycle-time-oriented planning is often preferred over WIP-oriented planning. We therefore propose a new linear programming formulation that facilitates cycle-time-oriented mid-term production planning in wafer fabrication. This approach plans production by defining release quantities and target cycle times up to selected operations, and it allows seamless integration with the subordinate scheduling level, where least-slack-first scheduling translates target cycle times into lot priorities. We evaluate the new methodology in a comprehensive simulation study. The results suggest that cycle-time-oriented mid-term production planning can both increase the service level and reduce cycle time compared with WIP-oriented planning. Further, it requires less modelling effort and generates plans that are easier for human planners to comprehend.
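The following sketch shows one plausible way least-slack-first dispatching could turn planned target cycle times into lot priorities; the slack definition and the lot attributes are assumptions for illustration rather than the paper's exact rule.

```python
from dataclasses import dataclass

@dataclass
class Lot:
    name: str
    release_time: float
    target_cycle_time: float    # planned cycle time up to the reference operation
    remaining_proc_time: float  # estimated remaining raw processing time

def least_slack_first(lots, now):
    """Illustrative least-slack-first priority rule:
    slack = (release + target cycle time) - now - remaining processing time."""
    def slack(lot):
        internal_due = lot.release_time + lot.target_cycle_time
        return internal_due - now - lot.remaining_proc_time
    return sorted(lots, key=slack)      # smallest slack -> highest priority

lots = [Lot("L1", 0.0, 40.0, 12.0), Lot("L2", 5.0, 30.0, 20.0), Lot("L3", 10.0, 50.0, 8.0)]
print([lot.name for lot in least_slack_first(lots, now=20.0)])   # -> ['L2', 'L1', 'L3']
```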
996.
How to reduce the power consumption of data centers has received worldwide attention. By combining an energy-aware data placement policy with a locality-aware multi-job scheduling scheme, we propose a new multi-objective bi-level programming model based on MapReduce to improve the energy efficiency of servers. First, the variation of energy consumption with server performance is taken into account; second, data locality can be adjusted dynamically according to the current network state; finally, since task-scheduling strategies depend directly on data placement policies, we formulate the problem as an integer bi-level programming model. To solve the model efficiently, purpose-designed encoding and decoding methods are introduced, and on this basis a new effective multi-objective genetic algorithm based on MOEA/D is proposed. As there are usually tens of thousands of tasks to be scheduled in the cloud, this is a large-scale optimization problem, so a local search operator is designed to accelerate the convergence of the proposed algorithm. Finally, numerical experiments demonstrate the effectiveness of the proposed model and algorithm.
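As a toy illustration of the lower-level, locality-aware scheduling decision for a fixed data placement, the sketch below assigns each task to a replica-holding server when possible and charges a transfer penalty otherwise; the cost model and penalty factor are assumptions, not the paper's bi-level formulation.

```python
def schedule_tasks(tasks, placement, servers, remote_penalty=1.5):
    """Locality-aware assignment sketch for a fixed data placement: each task
    needs one data block; prefer the least-loaded server holding a replica,
    otherwise run remotely with a transfer penalty on the least-loaded server."""
    load = {s: 0.0 for s in servers}
    plan = {}
    for task, (block, work) in tasks.items():
        local = placement.get(block, [])
        if local:
            server = min(local, key=lambda s: load[s])
            cost = work                       # data-local execution
        else:
            server = min(servers, key=lambda s: load[s])
            cost = work * remote_penalty      # remote read penalty
        load[server] += cost
        plan[task] = server
    return plan, load

placement = {"b1": ["s1", "s2"], "b2": ["s2"]}          # block -> replica servers
tasks = {"t1": ("b1", 4.0), "t2": ("b2", 6.0), "t3": ("b3", 2.0)}
print(schedule_tasks(tasks, placement, servers=["s1", "s2", "s3"]))
```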
997.
In this paper, we investigate a single-machine scheduling problem with periodic maintenance, which is motivated by various industrial applications (e.g. tool changes). The objective is to minimise the number of tardy jobs, one of the most important criteria for manufacturers seeking to avoid the loss of customers. The strong NP-hardness of the problem is shown. To improve on the state-of-the-art exact algorithm, we devise a new branch-and-bound algorithm based on an efficient lower-bounding procedure and several new dominance properties. Numerical experiments are conducted to demonstrate the efficiency of our exact algorithm.
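One simple lower bound on the number of tardy jobs can be obtained by relaxing the maintenance periods and solving the unconstrained single-machine problem with the classical Moore–Hodgson algorithm, as sketched below; the paper's own lower-bounding procedure may be different and tighter.

```python
import heapq

def moore_hodgson(jobs):
    """Moore-Hodgson algorithm for minimising the number of tardy jobs on a
    single machine WITHOUT maintenance. Because removing the maintenance periods
    can only help, its optimum is a valid (illustrative) lower bound for the
    constrained problem. 'jobs' is a list of (processing_time, due_date)."""
    on_time, tardy, t = [], 0, 0          # max-heap of processing times via negation
    for p, d in sorted(jobs, key=lambda j: j[1]):   # earliest-due-date order
        heapq.heappush(on_time, -p)
        t += p
        if t > d:                          # late: drop the longest scheduled job
            t += heapq.heappop(on_time)    # popped value is -p_max
            tardy += 1
    return tardy

jobs = [(4, 5), (3, 6), (2, 8), (5, 9)]
print("lower bound on tardy jobs:", moore_hodgson(jobs))   # -> 2
```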
998.
In project risk management, secondary risk refers to risk that arises as a direct result of implementing a risk response action (RRA). It is important for project managers (PMs) to consider the effects of secondary risks when selecting RRAs. The purpose of this paper is to propose an optimization method for selecting RRAs that takes secondary risk into account, an aspect seldom considered in existing studies. The optimization model minimizes the total risk cost with a time constraint placed on the project makespan. Solving the model yields both an optimal set of RRAs and the earliest start time of each activity. The results show that secondary risk plays an important role in RRA selection. Project managers should allocate more budget to risk response when secondary risk is considered, and should weigh all factors relating to both time and cost so as to select appropriate RRAs that mitigate primary and secondary risks.
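A brute-force sketch of the selection problem is shown below: each candidate RRA carries an implementation cost, a primary-risk reduction, a secondary-risk cost, and an added duration, and subsets are enumerated under a time budget. The additive objective and the data are illustrative assumptions, not the paper's optimization model.

```python
from itertools import combinations

def select_rras(actions, baseline_risk_cost, time_budget):
    """Brute-force RRA selection under a makespan-style time budget (illustrative
    objective). Each action is (name, cost, risk_reduction, secondary_risk_cost,
    added_duration)."""
    best = (baseline_risk_cost, (), 0.0)            # (total cost, subset, duration)
    for r in range(1, len(actions) + 1):
        for subset in combinations(actions, r):
            duration = sum(a[4] for a in subset)
            if duration > time_budget:
                continue
            residual = max(0.0, baseline_risk_cost - sum(a[2] for a in subset))
            total = residual + sum(a[1] + a[3] for a in subset)  # action + secondary cost
            if total < best[0]:
                best = (total, tuple(a[0] for a in subset), duration)
    return best

# Hypothetical candidate actions and baseline exposure
actions = [
    ("reinforce_supplier", 10.0, 30.0, 5.0, 2.0),
    ("add_buffer_stock",    8.0, 20.0, 9.0, 1.0),
    ("parallel_design",    15.0, 40.0, 12.0, 4.0),
]
print(select_rras(actions, baseline_risk_cost=60.0, time_budget=5.0))
```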
999.
The solution of the classical batch scheduling problem with identical jobs and setup times to minimize flowtime has been known for twenty-five years. In this paper we extend this result to a setting of two uniform machines with machine-dependent setup times. We introduce an O(n) solution for the relaxed version (allowing non-integer batch sizes), followed by a simple rounding procedure to obtain integer batch sizes.
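For context, the sketch below computes the flowtime of a given batch partition in the classical single-machine setting with a common setup time (the problem the abstract refers to), so different integer batch-size choices can be compared; it does not implement the paper's two-uniform-machine algorithm or its rounding procedure.

```python
def total_flowtime(batch_sizes, setup, proc=1.0):
    """Flowtime of identical jobs processed in batches on a single machine:
    every job in a batch completes when the batch completes, and each batch is
    preceded by a setup of length 'setup'."""
    t, flowtime = 0.0, 0.0
    for n in batch_sizes:
        t += setup + n * proc           # batch completion time
        flowtime += n * t               # all n jobs complete at time t
    return flowtime

# 12 identical jobs, setup = 2: decreasing batch sizes are known to be
# favourable in the classical problem, with integer sizes obtained by rounding.
print(total_flowtime([5, 4, 3], setup=2.0))   # decreasing batch sizes
print(total_flowtime([4, 4, 4], setup=2.0))   # equal batches, for comparison
```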
1000.
In this article, a distributed simulation tool for the feasibility evaluation of multi-site scheduling is proposed. The application areas concern supply chains (SCs) and networks of distributed workshops. The distributed simulation of workshops, called virtual workshops, raises various problems of causality and of task-execution coordination. These problems are addressed in the proposed distributed architecture through the High Level Architecture (HLA) protocol, which guarantees the synchronisation and chronology of events occurring in the distributed simulations. An application to a simple case of an SC organising the flow between three workshops shows the effectiveness of the distributed simulation tool.
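The sketch below illustrates the conservative time-management idea that HLA-style runtimes rely on, with three virtual workshops processing locally queued events in globally consistent order; it is a simplified illustration and does not use the HLA RTI API.

```python
import heapq

def conservative_run(federates, lookahead):
    """Minimal sketch of conservative, HLA-style time management (illustrative):
    a federate may process an event at time t only when t does not exceed the
    earliest timestamp any other federate could still produce plus its lookahead,
    which preserves the global chronology of events."""
    processed = []
    while any(federates.values()):
        # next local event of every federate that still has work
        nxt = {n: evts[0] for n, evts in federates.items() if evts}
        name, t = min(nxt.items(), key=lambda kv: kv[1])
        # safe bound: earliest event any *other* federate could still emit
        others = [nxt.get(n, float("inf")) for n in federates if n != name]
        bound = min(others) + lookahead
        assert t <= bound, "would violate causality; federate must wait"
        heapq.heappop(federates[name])
        processed.append((t, name))
    return processed

# three virtual workshops exchanging lots, events held in per-federate heaps
federates = {"W1": [1.0, 4.0, 9.0], "W2": [2.0, 6.0], "W3": [3.0, 8.0]}
for evts in federates.values():
    heapq.heapify(evts)
for t, name in conservative_run(federates, lookahead=0.5):
    print(f"t={t}: workshop {name} processes its event")
```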