991.
Software systems assembled from a large number of autonomous components are an interesting target for formal verification because of the need for correct interplay in component interaction. State/event LTL (Chaki et al. (2004, 2005) [1] and [2]) incorporates both states and events to express important properties of component-based software systems. The main contribution of this paper is a partial order reduction technique for the verification of state/event LTL properties. At the core of the reduction is a novel notion of stuttering equivalence, which we call state/event stuttering equivalence. A key advantage of this equivalence is that it can be handled with existing partial order reduction methods. State/event LTL properties are, in general, not preserved under state/event stuttering equivalence. To address this, we define a new logic, called weak state/event LTL, which is invariant under the new equivalence. To provide evidence of the method's efficiency, we present results obtained by employing the partial order reduction technique within our tool for the verification of component-based systems modelled with the formalism of component-interaction automata (Brim et al. (2005) [3]).
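For orientation, the textbook state-based notion that this abstract generalizes can be sketched as follows; this is the classical definition of stuttering equivalence (with L the state-labelling function), not the state/event variant introduced in the paper.

```latex
% Classical (state-based) stuttering equivalence; the paper's state/event
% variant additionally takes transition labels (events) into account.
% Paths \sigma = s_0 s_1 s_2 \ldots and \rho = r_0 r_1 r_2 \ldots are stutter
% equivalent iff there exist indices 0 = i_0 < i_1 < \cdots and
% 0 = j_0 < j_1 < \cdots such that for every k:
\[
  L(s_{i_k}) = \cdots = L(s_{i_{k+1}-1})
  \;=\;
  L(r_{j_k}) = \cdots = L(r_{j_{k+1}-1}) .
\]
% LTL without the next operator cannot distinguish stutter-equivalent paths,
% which is the property that partial order reduction exploits.
```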
992.
Flash memory efficient LTL model checking
As the capacity and speed of flash memories in the form of solid-state disks grow, they are becoming a practical alternative to standard magnetic drives. Currently, most solid-state disks are based on NAND technology and are much faster than magnetic disks in random reads, while in random writes they generally are not. So far, large-scale LTL model checking algorithms have been designed to employ external memory optimized for magnetic disks. We propose algorithms optimized for flash memory access. In contrast to approaches relying on the delayed detection of duplicate states, in this work we design and exploit appropriate hash functions to re-invent immediate duplicate detection. For flash-memory-efficient on-the-fly LTL model checking, which aims at finding any counterexample to the specified LTL property, we study hash functions adapted to the two-level hierarchy of RAM and flash memory. For flash-memory-efficient off-line LTL model checking, which aims at generating a minimal counterexample and scans the entire state space at least once, we analyze the effect of outsourcing a memory-based perfect hash function from RAM to flash memory. Since the characteristics of flash memories differ from those of magnetic hard disks, the existing I/O complexity model is no longer sufficient. We therefore provide an extended I/O complexity model adapted to flash memories that better fits the observed behavior of our algorithms.
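The abstract does not give the authors' data structures, but the idea of immediate duplicate detection over a RAM/flash hierarchy can be sketched roughly as below. The class name, the in-RAM signature table, and the flash-backed store are illustrative assumptions, not the paper's design or its hash functions.

```python
import hashlib

class TwoLevelVisitedSet:
    """Illustrative sketch of immediate duplicate detection for state-space search.

    A compact in-RAM table of state signatures filters most lookups; only on a
    signature hit do we consult the slower flash-resident store (here a plain
    dict standing in for it) to confirm a true duplicate.
    """

    def __init__(self):
        self._ram_signatures = set()   # compact 64-bit signatures kept in RAM
        self._flash_store = {}         # stands in for a flash-resident table

    @staticmethod
    def _signature(state: bytes) -> int:
        # 64-bit signature derived from a hash of the full state descriptor.
        return int.from_bytes(hashlib.blake2b(state, digest_size=8).digest(), "big")

    def add_if_new(self, state: bytes) -> bool:
        """Return True if the state was not seen before (and record it)."""
        sig = self._signature(state)
        if sig in self._ram_signatures:
            # Possible duplicate: confirm against the full states on "flash"
            # (random reads are cheap on solid-state disks).
            if state in self._flash_store.get(sig, ()):
                return False
        self._ram_signatures.add(sig)
        self._flash_store.setdefault(sig, []).append(state)
        return True

# Usage sketch: skip already-visited states during exploration.
visited = TwoLevelVisitedSet()
frontier = [b"initial-state"]
while frontier:
    s = frontier.pop()
    if visited.add_if_new(s):
        pass  # expand successors of s here
```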
993.
994.
Auction processes are commonly employed in many environments. With rapid advances in Internet and computing technologies, electronic auctions have become very popular. People sell and buy a wide range of goods and services online. There is a growing need for the proper management of online auctions and for providing support to the parties involved. In this paper, we develop an interactive approach supporting both the buyer and the bidders in a multi-attribute, single-item, multi-round, reverse auction environment. We demonstrate the algorithm on a number of problems.
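The abstract does not spell out how the buyer evaluates multi-attribute bids. A common baseline, given here purely as illustration (the attribute names, weights, and ranges are hypothetical and not taken from the paper), is an additive value function with per-attribute weights.

```python
# Hypothetical additive scoring of multi-attribute bids in a reverse auction:
# the buyer prefers a low price, short delivery time, and high quality.
WEIGHTS = {"price": 0.5, "delivery_days": 0.2, "quality": 0.3}
RANGES = {"price": (100.0, 500.0), "delivery_days": (1.0, 30.0), "quality": (0.0, 10.0)}
HIGHER_IS_BETTER = {"price": False, "delivery_days": False, "quality": True}

def score(bid: dict) -> float:
    """Map each attribute to [0, 1] and combine with the buyer's weights."""
    total = 0.0
    for attr, w in WEIGHTS.items():
        lo, hi = RANGES[attr]
        x = (bid[attr] - lo) / (hi - lo)   # normalize to [0, 1]
        if not HIGHER_IS_BETTER[attr]:
            x = 1.0 - x                    # invert cost-type attributes
        total += w * max(0.0, min(1.0, x))
    return total

bids = {
    "bidder_A": {"price": 320.0, "delivery_days": 10.0, "quality": 7.5},
    "bidder_B": {"price": 280.0, "delivery_days": 21.0, "quality": 6.0},
}
best = max(bids, key=lambda b: score(bids[b]))
print(best, round(score(bids[best]), 3))
```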
995.
In this paper we present a new thermographic image database suitable for the analysis of automatic focus measures. The database contains images of 10 scenes, each captured once at each of 96 different focus positions. Using this database, we evaluate the usefulness of five focus measures with the goal of determining the optimal focus position. Experimental results reveal that accurate automatic detection of the optimal focus position can be achieved with a low computational burden. We also present an acquisition tool for obtaining thermal images. To the best of our knowledge, this is the first study on the automatic focusing of thermal images.
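As a concrete illustration of how a focus measure is used to pick the best position in such a sweep, the sketch below applies one standard sharpness score (variance of a finite-difference Laplacian) and takes the argmax over positions. This is a generic measure chosen for illustration; it is not necessarily one of the five measures evaluated in the paper.

```python
import numpy as np

def variance_of_laplacian(img: np.ndarray) -> float:
    """Sharpness score: variance of a finite-difference Laplacian.

    Higher values indicate more high-frequency content, i.e. a sharper image.
    """
    img = img.astype(np.float64)
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def best_focus_position(images_by_position: dict[int, np.ndarray]) -> int:
    """Pick the focus position whose image maximizes the focus measure."""
    return max(images_by_position,
               key=lambda p: variance_of_laplacian(images_by_position[p]))

# Usage sketch with synthetic data standing in for the 96 focus positions.
rng = np.random.default_rng(0)
sweep = {p: rng.random((64, 64)) for p in range(96)}
print(best_focus_position(sweep))
```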
996.
The classical model selection criteria, such as the Bayesian Information Criterion (BIC) or the Akaike Information Criterion (AIC), have a strong tendency to overestimate the number of regressors when the search is performed over a large number of potential explanatory variables. To handle this overestimation problem, several modifications of the BIC have been proposed. These versions supplement the original BIC with prior distributions on the class of possible models. Three such modifications are presented and compared in the context of sparse Generalized Linear Models (GLMs). The related choices of priors are discussed and conditions for the asymptotic equivalence of these criteria are provided. The performance of the modified versions of the BIC is illustrated with an extensive simulation study and a real data analysis. Simplified versions of the modified BIC, based on least squares regression, are also investigated.
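The abstract does not reproduce the criteria themselves. For orientation, the classical BIC for a model M with k_M parameters, maximized likelihood L_M, and sample size n is shown below, together with one frequently cited modification (often called mBIC) whose extra penalty corresponds to a prior on model size over p candidate regressors. This is a sketch for context only; the exact forms of the three modifications compared in the paper may differ.

```latex
% Classical BIC:
\[
  \mathrm{BIC}(M) = -2\log \hat L_M + k_M \log n .
\]
% One representative modification (mBIC-style): the additional term reflects
% an independent Bernoulli prior on the inclusion of each of the p candidate
% regressors, with c the expected number of true regressors (c = 4 is a
% common default). Shown for orientation only.
\[
  \mathrm{mBIC}(M) = -2\log \hat L_M + k_M \log n + 2\,k_M \log\!\left(\tfrac{p}{c}\right).
\]
```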
997.
This work introduces a new algorithm for surface reconstruction in ℝ³ from spatially arranged one-dimensional cross sections embedded in ℝ³. This is generally the case with acoustic signals that pierce an object non-destructively. Continuous deformations (homotopies) that smoothly reconstruct information between any pair of successive cross sections are derived. The zero level set of the resulting homotopy field generates the desired surface. Four types of homotopies that are well suited to generating a smooth surface are suggested. We also derive the higher-order homotopies needed to generate a C² surface. An algorithm for generating a surface from acoustic sonar signals is presented with results. The reconstruction accuracies of the homotopies are compared by means of simulations performed on basic geometric primitives.
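As a minimal sketch of the underlying idea (not necessarily one of the four homotopy types proposed in the paper): if f_0 and f_1 are implicit functions whose zero sets encode two successive cross sections, even a straight-line homotopy between them yields a surface patch as a zero level set.

```latex
% Straight-line homotopy between implicit representations f_0 and f_1 of two
% successive cross sections; t in [0,1] parameterizes the region between them.
\[
  H(x,t) = (1-t)\,f_0(x) + t\,f_1(x), \qquad t \in [0,1].
\]
% The reconstructed surface patch is the zero level set of the homotopy field:
\[
  S = \{\, (x,t) : H(x,t) = 0 \,\}.
\]
% Smoother (e.g. C^2) reconstructions replace the linear blend with
% higher-order blending functions, as discussed in the paper.
```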
998.
We shall present an algorithm for determining whether or not a given planar graph H can ever be a subgraph of a 4-regular planar graph. The algorithm has running time O(|H|^2.5) and can be used to find an explicit 4-regular planar graph G ⊇ H if such a graph exists. It shall not matter whether we specify that H and G must be simple graphs or allow them to be multigraphs.
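The abstract does not describe the algorithm itself. One obviously necessary (but by no means sufficient) condition is that every vertex of H have degree at most 4; the trivial check below is given purely as orientation and is not the paper's O(|H|^2.5) procedure, which must also decide whether the degree deficiencies can be completed while preserving planarity.

```python
# Trivial necessary condition: a subgraph of a 4-regular graph cannot contain
# a vertex of degree greater than 4.

def degree_at_most_four(adjacency: dict) -> bool:
    """adjacency maps each vertex to an iterable of its neighbours."""
    return all(len(list(neighbours)) <= 4 for neighbours in adjacency.values())

# Example: a 4-cycle with one chord (maximum degree 3) passes the check.
H = {
    "a": ["b", "d", "c"],
    "b": ["a", "c"],
    "c": ["b", "d", "a"],
    "d": ["c", "a"],
}
print(degree_at_most_four(H))  # True
```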
999.
Despite the ability of current GPUs to handle heavy parallel computation tasks, their use for solving medical image segmentation problems is still not fully exploited and remains challenging. Many difficulties may arise, related, for example, to the different image modalities, to noise and artifacts in the source images, or to the variability in shape and appearance of the structures to be segmented. Motivated by practical problems of image segmentation in the medical field, we present in this paper a GPU framework based on explicit discrete deformable models, implemented on the NVidia CUDA architecture, aimed at the segmentation of volumetric images. The framework supports the parallel segmentation of different volumetric structures, as well as interaction during the segmentation process and real-time visualization of intermediate results. Promising results in terms of accuracy and speed on a real segmentation experiment demonstrate the usability of the system.
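To illustrate why explicit discrete deformable models parallelize well, the sketch below shows one generic per-vertex update step: each vertex moves under an internal smoothing force plus an image-derived external force, independently of all others, which maps naturally to one CUDA thread per vertex. The function name, the force weights alpha and beta, and the time step are illustrative assumptions; this is not the paper's CUDA implementation.

```python
import numpy as np

def deformable_model_step(vertices: np.ndarray,
                          neighbours: list[list[int]],
                          external_force,          # callable: (N, 3) -> (N, 3)
                          alpha: float = 0.2,
                          beta: float = 0.5,
                          dt: float = 0.1) -> np.ndarray:
    """One explicit update of a discrete deformable surface model.

    The internal force is an umbrella (Laplacian) smoothing term; the external
    force would normally be derived from the image. Each vertex is updated
    independently, which is what makes a per-vertex GPU mapping natural.
    """
    laplacian = np.empty_like(vertices)
    for i, nbrs in enumerate(neighbours):
        laplacian[i] = vertices[nbrs].mean(axis=0) - vertices[i]
    return vertices + dt * (alpha * laplacian + beta * external_force(vertices))

# Usage sketch: a tiny tetrahedral mesh with a dummy external force.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
nbrs = [[1, 2, 3], [0, 2, 3], [0, 1, 3], [0, 1, 2]]
print(deformable_model_step(verts, nbrs, external_force=lambda v: np.zeros_like(v)))
```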
1000.
Non-photorealistic (illustrative) rendering augments typical rendering models to selectively emphasize or de-emphasize specific structures of rendered objects. Illustrative techniques may affect not only the rendering style of specific portions of an object but also their visibility, ensuring that less important regions do not occlude more important ones. Cutaway views completely remove occluding, unimportant structures (possibly also removing valuable context information), while existing solutions for a smooth, importance-based reduction of occlusion lack precise visibility control, simplicity and generality. We introduce a new front-to-back fragment composition equation that directly takes into account a measure of sample importance and allows smooth and precise importance-based visibility control. We demonstrate the generality of our composition equation with several illustrative effects, obtained using a set of importance measures calculated on the fly or defined by the user. The presented composition method is suitable for direct volume rendering as well as for rendering layered 3D models. We discuss both cases and show examples, focusing mainly on the illustration of volumetric data.
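For reference, the standard front-to-back compositing recurrence that such a method builds on is shown below; the paper's importance-weighted variant is not reproduced in the abstract, so only the conventional baseline is given.

```latex
% Standard front-to-back alpha compositing of samples i = 1, 2, \ldots along a
% ray, where C_i and \alpha_i are the colour and opacity of the i-th sample:
\[
  C_{\mathrm{acc}} \leftarrow C_{\mathrm{acc}} + (1-\alpha_{\mathrm{acc}})\,\alpha_i C_i,
  \qquad
  \alpha_{\mathrm{acc}} \leftarrow \alpha_{\mathrm{acc}} + (1-\alpha_{\mathrm{acc}})\,\alpha_i .
\]
% The paper modulates this accumulation with a per-sample importance measure;
% that modified equation is not reproduced here.
```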