51.
Determining order relationship between events of a distributed computation is a fundamental problem in distributed systems which has applications in many areas including debugging, visualization, checkpointing and recovery. Fidge/Mattern’s vector-clock mechanism captures the order relationship using a vector of size N in a system consisting of N processes. As a result, it incurs message and space overhead of N integers. Many distributed applications use synchronous messages for communication. It is therefore natural to ask whether it is possible to reduce the timestamping overhead for such applications. In this paper, we present a new approach for timestamping messages and events of a synchronously ordered computation, that is, when processes communicate using synchronous messages. Our approach depends on decomposing edges in the communication topology into mutually disjoint edge groups such that each edge group either forms a star or a triangle. We show that, to accurately capture the order relationship between synchronous messages, it is sufficient to use one component per edge group in the vector instead of one component per process. Timestamps for events are only slightly bigger than timestamps for messages. Many common communication topologies such as ring, grid and hypercube can be decomposed into edge groups, resulting in almost 50% improvement in both space and communication overheads. We prove that the problem of computing an optimal edge decomposition of a communication topology is NP-complete in general. We also present a heuristic algorithm for computing an edge decomposition whose size is within a factor of two of the optimal. We prove that, in the worst case, it is not possible to timestamp messages of a synchronously ordered computation using a vector containing fewer than components when N ≥ 2. Finally, we show that messages in a synchronously ordered computation can always be timestamped in an offline manner using a vector of size at most .
An earlier version of this paper appeared in the 2002 Proceedings of the IEEE International Conference on Distributed Computing Systems (ICDCS). The author V. K. Garg was supported in part by NSF Grants ECS-9907213 and CCR-9988225, and an Engineering Foundation Fellowship. This work was done while the author C. Skawratananond was a Ph.D. student at the University of Texas at Austin.
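As a rough illustration of the edge-group idea in this abstract, the sketch below greedily decomposes a communication topology into star groups (each group of edges shares one center process); triangle groups, and the paper's actual factor-two heuristic, are omitted. The greedy center choice and the ring example are assumptions for illustration only.

```python
from collections import Counter

def star_decomposition(edges):
    """Greedily group edges into stars: every edge in a group shares one center vertex.
    One vector-clock component per group then suffices, instead of one per process."""
    remaining = {frozenset(e) for e in edges}
    groups = []
    while remaining:
        # pick the vertex covering the most remaining edges as the next star center
        deg = Counter(v for e in remaining for v in e)
        center, _ = deg.most_common(1)[0]
        star = {e for e in remaining if center in e}
        remaining -= star
        groups.append((center, star))
    return groups

# A ring on 6 processes: pairing adjacent edges into stars needs far fewer
# components than the 6 edges (or 6 processes) would naively suggest.
ring = [(i, (i + 1) % 6) for i in range(6)]
groups = star_decomposition(ring)
```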
52.
The Morse-Smale complex is an efficient representation of the gradient behavior of a scalar function, and critical points paired by the complex identify topological features and their importance. We present an algorithm that constructs the Morse-Smale complex in a series of sweeps through the data, identifying various components of the complex in a consistent manner. All components of the complex, both geometric and topological, are computed, providing a complete decomposition of the domain. Efficiency is maintained by representing the geometry of the complex in terms of point sets.
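A minimal sketch of the first ingredient, classifying critical points of a sampled scalar field, assuming a 2D grid with 4-neighborhoods; the full Morse-Smale construction (saddle pairing, separatrix tracing, the sweep algorithm itself) is not attempted here.

```python
def classify(grid):
    """Label interior grid vertices as 'min', 'max', or 'regular' by comparing
    each value with its 4 axis-aligned neighbors. A toy critical-point test,
    not the paper's sweep-based Morse-Smale construction."""
    rows, cols = len(grid), len(grid[0])
    labels = {}
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            v = grid[i][j]
            nbrs = [grid[i - 1][j], grid[i + 1][j], grid[i][j - 1], grid[i][j + 1]]
            if all(v < n for n in nbrs):
                labels[(i, j)] = 'min'
            elif all(v > n for n in nbrs):
                labels[(i, j)] = 'max'
            else:
                labels[(i, j)] = 'regular'
    return labels

# A basin with a single interior minimum at the center.
grid = [[5, 4, 5], [4, 1, 4], [5, 4, 5]]
labels = classify(grid)
```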
53.
A variety of P compounds can accumulate in soils as residues of fertilizer and may influence soil test versus plant yield relationships. This work evaluates specific chemical extractants for their capacity to identify such Al, Fe and Ca phosphates in soils as a basis for increasing the precision of yield prediction. Aluminium phosphate, iron phosphate, calcium phosphate (apatite) and P sorbed onto gibbsite, goethite and calcite were added to four Western Australian lateritic soils. These soils were then subjected to sequential selective extraction using a modified Chang and Jackson procedure in order to evaluate the selectivity of these extractants for the different forms of P with the sequence of extraction: 1 M NH4Cl, 0.5 M NH4F, 0.1 M NaOH + 1 M NaCl, citrate-dithionite-bicarbonate (CDB), 1 M NaOH and 1 M HCl. The results show that the procedure is not sufficiently specific and thus might be of little value for estimating the forms and amounts of residues of phosphate rock fertilizers in soils.
54.
This study investigates sediment load prediction and generalization from laboratory scale to field scale using principal component analysis (PCA) in conjunction with data-driven methods of artificial neural networks (ANNs) and genetic algorithms (GAs). Five main dimensionless parameters for total load are identified by using PCA. These parameters are used in the input vector of the ANN for predicting total sediment loads. In addition, nonlinear equations are constructed, based upon the same identified dimensionless parameters. The optimal values of exponents and constants of the equations are obtained by the GA method. The performance of the developed ANN- and GA-based methods is evaluated using laboratory and field data. Results show that the expert methods (ANN and GA), calibrated with laboratory data, are capable of predicting total sediment load in the field, thus showing their transferability. In addition, this study shows that the expert methods are not transferable for suspended load, perhaps due to insufficient laboratory data. Yet, these methods are able to predict suspended load in the field when trained with the respective field data.
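The PCA step described above can be sketched with a standard SVD-based projection. The random data and the choice of five components below merely stand in for the paper's dimensionless sediment parameters; this is a generic PCA sketch, not the study's pipeline.

```python
import numpy as np

def pca_scores(X, k):
    """Project samples onto the k leading principal components via SVD of the
    centered data matrix; the scores then feed the ANN input vector."""
    Xc = X - X.mean(axis=0)                     # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                        # scores on the k leading components

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))                   # placeholder for measured quantities
Z = pca_scores(X, 5)                            # 5 derived inputs, as in the study
```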
55.
Dynamic slicing is a promising trace-based technique that helps programmers in the process of debugging. In order to debug a failed run, dynamic slicing requires the dynamic dependence graph (DDG) information for that particular run. The two major challenges involved in utilizing dynamic slicing as a debugging technique are the efficient computation of the DDG and the efficient computation of the dynamic slice, given the DDG. In this paper, we present an efficient debugger, which first computes the DDG efficiently while the program is executing; dynamic slicing is later performed efficiently on the computed DDG, on demand. To minimize program slowdown during the online computation of the DDG, we make the design decision of not outputting the computed dependencies to a file; instead, we store them in memory in a specially allocated fixed-size circular buffer. The size of the buffer limits the length of the execution history that can be stored. To maximize the execution history that can be maintained, we introduce optimizations to eliminate the storage of most of the generated dependencies, at the same time ensuring that those that are stored are sufficient to capture the bug. Experiments conducted on CPU-intensive programs show that our optimizations are able to reduce the trace rate from 16 to 0.8 bytes per executed instruction. This enables us to store the dependence trace history for a window of 20 million executed instructions in a 16-MB buffer. Our debugger is also very efficient, yielding slicing times of around a second, and only slowing down the execution of the program by a factor of 19 during the online tracing step. Using recently proposed architectural support for monitoring, we are also able to handle multithreaded programs running on multicore processors. Copyright © 2011 John Wiley & Sons, Ltd.
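The fixed-size circular buffer and on-demand backward slice described in this abstract can be sketched as follows; `deque(maxlen=...)` and the string instruction labels are stand-ins chosen for illustration, not the debugger's actual representation.

```python
from collections import deque

class DependenceBuffer:
    """Fixed-size circular buffer of dynamic dependence edges: once full, the
    oldest edges are overwritten, bounding memory while keeping a recent window
    of the execution history."""
    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)

    def record(self, use_instr, def_instr):
        """Record that use_instr dynamically depends on def_instr."""
        self.buf.append((use_instr, def_instr))

    def backward_slice(self, instr):
        """Transitively collect the recorded instructions instr depends on."""
        slice_, work = set(), [instr]
        while work:
            cur = work.pop()
            for use, d in self.buf:
                if use == cur and d not in slice_:
                    slice_.add(d)
                    work.append(d)
        return slice_

buf = DependenceBuffer(capacity=3)
buf.record("c", "b")
buf.record("b", "a")
buf.record("d", "c")
bslice = buf.backward_slice("d")
```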
56.
In this paper, we explore a novel idea of using high dynamic range (HDR) technology for uncertainty visualization. We focus on scalar volumetric data sets where every data point is associated with scalar uncertainty. We design a transfer function that maps each data point to a color in HDR space. The luminance component of the color is exploited to capture uncertainty. We modify existing tone mapping techniques and suitably integrate them with volume ray casting to obtain a low dynamic range (LDR) image. The resulting image is displayed on a conventional 8-bits-per-channel display device. The usage of HDR mapping reveals fine details in uncertainty distribution and enables the users to interactively study the data in the context of corresponding uncertainty information. We demonstrate the utility of our method and evaluate the results using data sets from ocean modeling.
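A toy version of the central idea, a transfer function whose luminance channel encodes certainty, might look like the following; the blue-to-red colormap is an assumption, and real HDR tone mapping and ray casting are not modeled.

```python
def transfer(value, uncertainty, vmax=1.0):
    """Map a scalar sample to (R, G, B, luminance): the color ramp encodes the
    data value, while luminance encodes certainty (1 - uncertainty), so highly
    uncertain regions render dark. An illustrative sketch only."""
    t = max(0.0, min(value / vmax, 1.0))        # normalized data value
    r, g, b = t, 0.2, 1.0 - t                   # assumed blue-to-red ramp
    lum = 1.0 - max(0.0, min(uncertainty, 1.0)) # certainty drives luminance
    return (r * lum, g * lum, b * lum, lum)
```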
57.
In the present study, polyethersulphone (PES) membranes of thickness (35 ± 2) μm were prepared by the solution cast method. The permeability of these membranes was measured both as a function of temperature and after irradiation with α particles. For the temperature variation, the gas permeation cell was immersed in a constant-temperature water bath over the range 303–373 K, well below the glass transition temperature (498 K). The permeability of H2 and CO2 increased with increasing temperature. The PES membrane was exposed to an α source (²⁴¹Am) of strength 1 μCi in a vacuum of the order of 10⁻⁶ torr, at a fluence of 2.7 × 10⁷ ions/cm². The permeability of H2 and CO2 was then measured for the irradiated membrane as a function of etching time; it increases with increasing etching time for both gases. There was a sudden change in permeability for both gases at 18 min of etching. At this stage the tracks are visible with an optical instrument, which confirms that pores have been generated. Most of the pores seen in the micrograph have a circular cross-section.
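The reported increase of permeability with temperature is consistent with Arrhenius-type behavior, P = P0 · exp(−Ep / (R·T)). The sketch below evaluates this over the studied 303–373 K range; the prefactor P0 and activation energy Ep are illustrative assumptions, not values fitted to the PES data.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def permeability(T, P0, Ep):
    """Arrhenius-type permeability P = P0 * exp(-Ep / (R*T)).
    P0 and Ep here are hypothetical, for illustration only."""
    return P0 * math.exp(-Ep / (R * T))

# Over 303-373 K, a positive activation energy gives monotonically rising P,
# matching the trend reported for H2 and CO2.
temps = list(range(303, 374, 10))
vals = [permeability(T, P0=1.0e-8, Ep=20_000.0) for T in temps]
```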
58.
We study a linear stochastic approximation algorithm that arises in the context of reinforcement learning. The algorithm employs a decreasing step-size, and is driven by Markov noise with time-varying statistics. We show that under suitable conditions, the algorithm can track the changes in the statistics of the Markov noise, as long as these changes are slower than the rate at which the step-size of the algorithm goes to zero.
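A scalar toy instance of such an iteration: a Robbins-Monro update with a polynomially decreasing step-size tracking the mean of noise whose statistics drift slowly. The i.i.d. Gaussian noise and step-size exponent are simplifying assumptions (the paper treats Markov noise).

```python
import random

def track_mean(samples, step=lambda k: 1.0 / (k + 1) ** 0.7):
    """x_{k+1} = x_k + a_k (y_k - x_k) with a_k -> 0 slower than 1/k, so the
    iterate can follow statistics that drift slowly relative to a_k."""
    x = 0.0
    for k, y in enumerate(samples):
        x += step(k) * (y - x)
    return x

random.seed(1)
n = 20_000
# Noise whose mean drifts slowly from 0 to 1 over the run.
samples = [k / n + random.gauss(0.0, 0.5) for k in range(n)]
estimate = track_mean(samples)   # should end near the current mean (~1.0)
```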
59.
Determining service configurations is essential for effective service management. In this paper we describe a model-driven approach for service configuration auto-discovery. We develop metrics for performance and scalability analysis of such auto-discovery mechanisms. Our approach addresses several problems in auto-discovery: specification of what services to discover, how to efficiently distribute service discovery, and how to match instances of services into related groups. We use object-oriented models for discovery specifications, a flexible bus-based architecture for distribution and communication, and a novel multi-phased instance-matching approach. We have applied our approach to typical e-commerce services, Enterprise Resource Planning applications, like SAP, and Microsoft Exchange services running on a mixture of Windows and Unix platforms. The main contribution of this work is the flexibility of our models, architecture and algorithms to address discovery of a multitude of services.
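The multi-phased instance matching could be sketched as successive key functions: observations that agree on every phase's key are grouped into one service instance. The host/port keys and the observation records below are illustrative assumptions, not the paper's matching rules.

```python
def match_instances(observations, phases):
    """Group raw discovery observations into service instances: each phase
    contributes one key, and observations sharing all keys form one group."""
    groups = {}
    for obs in observations:
        key = tuple(phase(obs) for phase in phases)
        groups.setdefault(key, []).append(obs)
    return groups

# Hypothetical observations from two discovery probes.
obs = [
    {"host": "a", "port": 80, "proc": "httpd"},
    {"host": "a", "port": 80, "proc": "httpd-worker"},
    {"host": "b", "port": 80, "proc": "httpd"},
]
groups = match_instances(obs, [lambda o: o["host"], lambda o: o["port"]])
```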
60.
The scheduling of tasks in multiprocessor real-time systems has attracted many researchers in the recent past. Tasks in these systems have deadlines to be met, and most of the real-time scheduling algorithms use worst case computation times to schedule these tasks. Many resources will be left unused if the tasks are dispatched purely based on the schedule produced by these scheduling algorithms, since most of the tasks will take less time to execute than their respective worst case computation times. Resource reclaiming refers to the problem of reclaiming the resources left unused by a real-time task when it takes less time to execute than its worst case computation time. In this paper, we propose two algorithms to reclaim these resources from real-time tasks that are constrained by precedence relations and resource requirements, in shared memory multiprocessor systems. We introduce a notion called a restriction vector for each task which captures its resource and precedence constraints with other tasks. This will help not only in the efficient implementation of the algorithms, but also in obtaining an improvement in performance over the reclaiming algorithms proposed in earlier work [2]. We compare our resource reclaiming algorithms with the earlier algorithms and, by experimental studies, show that they reclaim more resources, thereby increasing the guarantee ratio (the ratio of the number of tasks guaranteed to meet their deadlines to the number of tasks that have arrived), which is the basic requirement of any resource reclaiming algorithm. From our simulation studies, we demonstrate that complex reclaiming algorithms with high reclaiming overheads do not lead to an improvement in the guarantee ratio.
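The restriction-vector idea can be sketched as a dispatch check: a task may start ahead of schedule only when every task in its restriction vector (the tasks it conflicts with on resources or precedence) has completed. The task names and vectors below are hypothetical; this is not the paper's full reclaiming algorithm.

```python
def can_start_early(task, completed, restriction):
    """Return True if all tasks in this task's restriction vector are done,
    so the reclaiming dispatcher may start it before its scheduled time."""
    return all(t in completed for t in restriction.get(task, ()))

# Hypothetical restriction vectors: T2 conflicts with T1; T3 with T1 and T2.
restriction = {"T2": ("T1",), "T3": ("T1", "T2")}
completed = {"T1"}          # T1 finished earlier than its worst case
ok2 = can_start_early("T2", completed, restriction)
ok3 = can_start_early("T3", completed, restriction)
```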