Sort order: 3802 query results in total, search time 31 ms
91.
Efficient and robust shot change detection   Cited by: 6 (self: 0, others: 6)
In this article, we deal with the problem of shot change detection, which is of primary importance when segmenting and abstracting video sequences. In contrast to recent work, our aim is to develop a robust yet very efficient (real-time, even on uncompressed data) method that addresses the remaining problems of shot change detection: illumination changes, context and data independence, and parameter settings. To do so, we use adaptive thresholds and derivative measures in a hue-saturation colour space. We illustrate the robustness and efficiency of our method with experiments on news and football broadcast video sequences.
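The adaptive-threshold idea can be sketched as follows; the histogram distance, window length, and factor `k` are illustrative assumptions, not the authors' exact measures.

```python
import numpy as np

def frame_distance(hist_a, hist_b):
    """L1 distance between hue-saturation histograms of consecutive frames."""
    return np.abs(hist_a - hist_b).sum()

def detect_shot_changes(histograms, k=3.0, window=10):
    """Flag a shot change when the inter-frame distance exceeds an
    adaptive threshold: mean + k * std over a sliding window of
    recent distances, instead of a fixed global threshold."""
    dists = [frame_distance(histograms[i], histograms[i + 1])
             for i in range(len(histograms) - 1)]
    cuts = []
    for i, d in enumerate(dists):
        recent = dists[max(0, i - window):i]
        if len(recent) < 2:
            continue                      # not enough history yet
        mu, sigma = np.mean(recent), np.std(recent)
        if d > mu + k * sigma:
            cuts.append(i + 1)            # cut between frame i and i+1
    return cuts
```

Because the threshold adapts to the recent distance statistics, gradual illumination drift raises the baseline rather than triggering false cuts.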
Nicole Vincent
92.
Anatomical structure modeling from medical images   Cited by: 2 (self: 0, others: 2)
Some clinical applications, such as surgical planning, require volumetric models of anatomical structures represented as a set of tetrahedra. A practical method of constructing anatomical models from medical images is presented. The method starts with a set of contours segmented from the medical images by a clinician and produces a model that has high fidelity with the contours. Unlike most modeling methods, the contours are not restricted to lie on parallel planes. The main steps are a 3D Delaunay tetrahedralization, culling of non-object tetrahedra, and refinement of the tetrahedral mesh. The result is a high-quality set of tetrahedra whose surface points are guaranteed to match the original contours. The key is to use the distance map and bit volume structures that were created along with the contours. The method is demonstrated on computed tomography, MRI and 3D ultrasound data. Models of 170,000 tetrahedra are constructed on a standard workstation in approximately 10 s. A comparison with related methods is also provided.
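The tetrahedralize-then-cull step can be sketched with SciPy's Delaunay triangulation; `cull_tetrahedra`, the sphere object, and the `inside` predicate are hypothetical stand-ins for the paper's distance-map / bit-volume test.

```python
import numpy as np
from scipy.spatial import Delaunay

def cull_tetrahedra(points, inside):
    """3D Delaunay tetrahedralization of the sample points, followed by
    culling of tetrahedra whose centroid falls outside the object.
    `inside` stands in for a lookup into the distance map / bit volume."""
    tets = Delaunay(points).simplices              # (n, 4) vertex indices
    centroids = points[tets].mean(axis=1)
    return tets[np.array([inside(c) for c in centroids])]

# Hypothetical object: a ball of radius 0.8 sampled inside a cube.
rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, (200, 3))
kept = cull_tetrahedra(pts, lambda c: np.linalg.norm(c) <= 0.8)
```

In 3D, `Delaunay.simplices` are tetrahedra, so culling reduces the convex hull of the point set to the (possibly non-convex) object; the paper's additional refinement step is omitted here.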
93.
The paper describes a software method to extend ITK (Insight ToolKit, supported by the National Library of Medicine), leading to ITK++. This method, which is based on an extension of the iterator design pattern, allows the processing of regions of interest with arbitrary shapes without modifying the existing ITK code. We evaluate this work experimentally on the practical case of liver vessel segmentation from CT-scan images, where it is pertinent to constrain processing to the liver area. Experimental results clearly demonstrate the benefit of this work: for instance, anisotropic filtering of this area is performed in only 16 s with our proposed solution, while it takes 52 s using the native ITK framework. A major advantage of this method is that only add-ons are performed: this facilitates the further evaluation of ITK++ while preserving the native ITK framework.
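The iterator-pattern extension can be illustrated outside of ITK: a mask-driven iterator visits only the voxels of an arbitrarily shaped region of interest, which is where the reported speed-up comes from. `MaskedRegionIterator` is a hypothetical analogue, not ITK++ code.

```python
import numpy as np

class MaskedRegionIterator:
    """Illustrative analogue of the extended iterator: walks only the
    voxels of an arbitrarily shaped ROI given as a boolean mask, so a
    filter touches far fewer voxels than with a bounding-box region."""
    def __init__(self, image, mask):
        self.image = image
        self.coords = np.argwhere(mask)       # precomputed ROI voxel list

    def __iter__(self):
        for idx in self.coords:
            yield tuple(idx), self.image[tuple(idx)]

img = np.arange(16).reshape(4, 4)
roi = img % 2 == 0                            # arbitrary-shaped ROI
visited = [v for _, v in MaskedRegionIterator(img, roi)]
```

A filter written against this iterator interface needs no knowledge of the ROI's shape, mirroring how the paper adds the capability without modifying existing algorithm code.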
94.
95.
Justification for inclusion dependency normal form   Cited by: 3 (self: 0, others: 3)
Functional dependencies (FDs) and inclusion dependencies (INDs) are the most fundamental integrity constraints that arise in practice in relational databases. In this paper, we address the issue of normalization in the presence of FDs and INDs and, in particular, the semantic justification for an inclusion dependency normal form (IDNF), which combines the Boyce-Codd normal form with the restriction on the INDs that they be noncircular and key-based. We motivate and formalize three goals of database design in the presence of FDs and INDs: noninteraction between FDs and INDs, elimination of redundancy and update anomalies, and preservation of entity integrity. We show that (as for FDs), in the presence of INDs, being free of redundancy is equivalent to being free of update anomalies. Then, for each of these properties, we derive equivalent syntactic conditions on the database design. Individually, each of these syntactic conditions is weaker than IDNF and the restriction that an FD is not embedded in the right-hand side of an IND is common to three of the conditions. However, we also show that, for these three goals of database design to be satisfied simultaneously, IDNF is both a necessary and a sufficient condition.
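The noncircularity restriction on INDs is easy to make concrete: a set of INDs R[X] ⊆ S[Y] induces a directed graph on relation names, and IDNF requires that graph to be acyclic. A small sketch, with attribute lists omitted for brevity:

```python
def inds_noncircular(inds):
    """Check that inclusion dependencies, given as (R, S) relation-name
    pairs for R[X] <= S[Y], are noncircular: the directed graph they
    induce on relation names contains no cycle (a self-loop counts)."""
    graph = {}
    for r, s in inds:
        graph.setdefault(r, set()).add(s)
    WHITE, GREY, BLACK = 0, 1, 2              # unvisited / on stack / done
    colour = {n: WHITE for pair in inds for n in pair}

    def dfs(n):
        colour[n] = GREY
        for m in graph.get(n, ()):
            if colour[m] == GREY or (colour[m] == WHITE and not dfs(m)):
                return False                  # back edge => cycle
        colour[n] = BLACK
        return True

    return all(dfs(n) for n in colour if colour[n] == WHITE)
```

The key-based half of the IDNF restriction (each right-hand side Y must be a key of S) would additionally need the key sets, which this sketch leaves out.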
96.
Analysis of head pose accuracy in augmented reality   Cited by: 1 (self: 0, others: 1)
A method is developed to analyze the accuracy of the relative head-to-object position and orientation (pose) in augmented reality systems with head-mounted displays. From probabilistic estimates of the errors in optical tracking sensors, the uncertainty in head-to-object pose can be computed in the form of a covariance matrix. The positional uncertainty can be visualized as a 3D ellipsoid. One useful benefit of having an explicit representation of uncertainty is that we can fuse sensor data from a combination of fixed and head-mounted sensors in order to improve the overall registration accuracy. The method was applied to the analysis of an experimental augmented reality system, incorporating an optical see-through head-mounted display, a head-mounted CCD camera, and a fixed optical tracking sensor. The uncertainty of the pose of a movable object with respect to the head-mounted display was analyzed. By using both fixed and head-mounted sensors, we produced a pose estimate that is significantly more accurate than that produced by either sensor acting alone.
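The benefit of combining fixed and head-mounted sensors can be illustrated with standard inverse-covariance fusion of two Gaussian position estimates; this is a generic sketch, not the paper's exact estimator.

```python
import numpy as np

def fuse(mu1, P1, mu2, P2):
    """Fuse two Gaussian estimates (e.g. fixed and head-mounted sensors)
    by inverse-covariance weighting. The fused covariance is never larger
    than either input, which is why combining sensors helps registration."""
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(P1i + P2i)              # fused covariance
    mu = P @ (P1i @ mu1 + P2i @ mu2)          # precision-weighted mean
    return mu, P
```

The 3D uncertainty ellipsoid mentioned in the abstract is obtained from the eigendecomposition of the fused positional covariance `P` (axes along the eigenvectors, semi-axis lengths proportional to the square roots of the eigenvalues).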
97.
We propose a new and simple method for the measurement of microbial concentrations in highly diluted cultures. This method is based on an analysis of the intensity fluctuations of light scattered by microbial cells under laser illumination. Two possible measurement strategies are identified and compared using simulations and measurements of the concentration of gold nanoparticles. Based on this comparison, we show that the concentration of Escherichia coli and Saccharomyces cerevisiae cultures can be easily measured in situ across a concentration range that spans five orders of magnitude. The lowest measurable concentration is three orders of magnitude (1000×) smaller than in current optical density measurements. We show further that this method can also be used to measure the concentration of fluorescent microbial cells. In practice, this new method is well suited to monitor the dynamics of population growth at early colonization of a liquid culture medium. The dynamic data thus obtained are particularly relevant for microbial ecology studies.
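A hedged sketch of the statistics underlying such fluctuation-based counting: under a simplified model where the scattered intensity is proportional to a Poisson-distributed particle count, the mean count can be read off the fluctuations independently of the optical gain. The gain value and trace below are simulated assumptions, not the paper's data or its exact estimator.

```python
import numpy as np

def mean_particle_number(intensity):
    """If the intensity I is proportional to a Poisson-distributed particle
    count N in the illuminated volume, then mean(N) = var(N), so
    mean(I)**2 / var(I) = mean(N) regardless of the unknown gain.
    Relative fluctuations shrink as 1/sqrt(N), which is why the method
    works best in highly diluted (small-N) cultures."""
    i = np.asarray(intensity, dtype=float)
    return i.mean() ** 2 / i.var()

# Simulated trace: Poisson counts around 50 particles, arbitrary gain 0.7.
rng = np.random.default_rng(2)
trace = 0.7 * rng.poisson(50, 100_000)
```

Dividing the recovered mean count by the known scattering volume would then give the concentration.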
98.
This study proposes an alternative to the conventional empirical analysis approach for evaluating the relative efficiency of distinct combinations of algorithmic operators and/or parameter values of genetic algorithms (GAs) on solving the pickup and delivery vehicle routing problem with soft time windows (PDVRPSTW). Our approach considers each combination as a decision-making unit (DMU) and adopts data envelopment analysis (DEA) to determine the relative and cross efficiencies of each combination of GA operators and parameter values on solving the PDVRPSTW. To demonstrate the applicability and advantage of this approach, we implemented a number of combinations of GA’s three main algorithmic operators, namely selection, crossover and mutation, and employed DEA to evaluate and rank the relative efficiencies of these combinations. The numerical results show that DEA is well suited for determining the efficient combinations of GA operators. Among the combinations under consideration, the combinations using tournament selection and simple crossover are generally more efficient. The proposed approach can be adopted to evaluate the relative efficiency of other meta-heuristics, so it also contributes to the algorithm development and evaluation for solving combinatorial optimization problems from the operational research perspective.
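The DEA step can be sketched with the standard input-oriented CCR model in multiplier form; the data below are hypothetical (each DMU is one operator combination, with run time as input and solution quality as output), not the paper's experiments.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU `o` (rows of X = inputs,
    rows of Y = outputs), solved in multiplier form as a linear program:
        max u.y_o   s.t.  v.x_o = 1,  u.y_j - v.x_j <= 0,  u, v >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])     # linprog minimises -u.y_o
    A_ub = np.hstack([Y, -X])                    # u.y_j - v.x_j <= 0, all j
    A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0])
    return -res.fun

# Hypothetical DMUs: input = run time (s), output = quality score.
X = np.array([[2.0], [4.0], [3.0]])
Y = np.array([[10.0], [10.0], [12.0]])
effs = [ccr_efficiency(X, Y, o) for o in range(3)]
```

Efficient combinations score 1.0; the others are scored relative to the best observed output-per-input frontier (here the single-ratio case reduces to normalised quality-per-second).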
99.
In this work, a variational Bayesian framework for efficient training of echo state networks (ESNs) with automatic regularization and delay&sum (D&S) readout adaptation is proposed. The algorithm uses a classical batch learning of ESNs. By treating the network echo states as fixed basis functions parameterized with delay parameters, we propose a variational Bayesian ESN training scheme. The variational approach allows for a seamless combination of sparse Bayesian learning ideas and a variational Bayesian space-alternating generalized expectation-maximization (VB-SAGE) algorithm for estimating parameters of superimposed signals. While the former method realizes automatic regularization of ESNs, which also determines which echo states and input signals are relevant for "explaining" the desired signal, the latter method provides a basis for joint estimation of D&S readout parameters. The proposed training algorithm can naturally be extended to ESNs with fixed filter neurons. It also generalizes the recently proposed expectation-maximization-based D&S readout adaptation method. The proposed algorithm was tested on synthetic data prediction tasks as well as on dynamic handwritten character recognition.
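A sketch of the classical batch ESN readout that the proposed scheme builds on, with a fixed ridge parameter standing in for the automatic variational-Bayesian regularization (and with no D&S delays); network sizes and the toy task are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_res, n_in = 100, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

def run_reservoir(u):
    """Collect the echo states for an input sequence u of shape [T, n_in];
    these states act as the fixed basis functions of the readout."""
    x = np.zeros(n_res)
    states = []
    for ut in u:
        x = np.tanh(W_in @ ut + W @ x)
        states.append(x)
    return np.array(states)

# Batch readout: ridge regression of the targets on the echo states.
# The paper learns the regulariser (and readout delays) automatically by
# variational Bayes; the fixed `lam` below stands in for that step.
t = np.arange(600)
u = np.sin(0.1 * t)[:, None]
y = np.sin(0.1 * (t + 1))                         # one-step-ahead target
S = run_reservoir(u)[100:]                        # drop washout states
target = y[100:]
lam = 1e-6
W_out = np.linalg.solve(S.T @ S + lam * np.eye(n_res), S.T @ target)
pred = S @ W_out
```

Treating the states as fixed basis functions is what makes the Bayesian machinery applicable: the readout is then an ordinary (sparse) linear model.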
100.
Due to the large variety in computing resources and, consequently, the large number of different types of service level agreements (SLAs), computing resource markets face the problem of a low market liquidity. Restricting the number of different resource types to a small set of standardized computing resources seems to be the appropriate solution to counteract this problem. Standardized computing resources are defined through an SLA template. An SLA template defines the structure of an SLA, the service attributes, the names of the service attributes, and the service attribute values. However, since existing research results have only introduced static SLA templates so far, the SLA templates cannot reflect changes in user needs and market structures. To address this shortcoming, we present a novel approach of adaptive SLA matching. This approach adapts SLA templates based on SLA mappings of users. It allows Cloud users to define mappings between a public SLA template, which is available in the Cloud market, and their private SLA templates, which are used for various in-house business processes of the Cloud user. Besides showing how public SLA templates are adapted to the demand of Cloud users, we also analyze the costs and benefits of this approach. Costs are incurred every time a user has to define a new SLA mapping to a public SLA template due to its adaptation. In particular, we investigate how the costs differ with respect to the public SLA template adaptation method. The simulation results show that the use of heuristics within adaptation methods allows balancing the costs and benefits of the SLA mapping approach.
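The SLA-mapping idea can be sketched as a translation between attribute vocabularies: a user's mapping rewrites an SLA expressed in their private template's terms into the public template's terms. The attribute names below are hypothetical; the paper's templates and mappings are richer.

```python
def apply_sla_mapping(private_sla, mapping):
    """Translate an SLA written against a private template into the public
    template's vocabulary using the user's attribute-name mapping.
    Attributes without a mapping entry keep their name unchanged."""
    return {mapping.get(name, name): value
            for name, value in private_sla.items()}

# Hypothetical mapping from private to public attribute names.
mapping = {"CpuCores": "Cores", "RamGB": "MemoryGB"}
private_sla = {"CpuCores": 8, "RamGB": 32, "Price": 0.40}
public_sla = apply_sla_mapping(private_sla, mapping)
```

The cost the paper analyzes arises on the other side of this function: whenever the public template is adapted, existing mappings like the one above must be redefined by their users.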
Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)    京ICP备09084417号-23
