Search results: 10,000 matches found (search time 421 ms); entries 101–110 shown below.
101.
A recently proposed argument to explain the improved performance of the eight-point algorithm that results from using normalized data (Chojnacki, W., et al. in IEEE Trans. Pattern Anal. Mach. Intell. 25(9):1172–1177, 2003) relies upon adoption of a certain model for statistical data distribution. Under this model, the cost function that underlies the algorithm operating on the normalized data is statistically more advantageous than the cost function that underpins the algorithm using unnormalized data. Here we extend this explanation by introducing a more refined, structured model for data distribution. Under the extended model, the normalized eight-point algorithm turns out to be approximately consistent in a statistical sense. The proposed extension provides a link between the existing statistical rationalization of the normalized eight-point algorithm and the approach of Mühlich and Mester for enhancing total least squares estimation methods via equilibration. The paper forms part of a wider effort to rationalize and interrelate foundational methods in vision parameter estimation.
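The data normalization at issue is Hartley's isotropic scaling: translate the centroid of the image points to the origin and rescale so that the mean distance from the origin is √2. A minimal sketch (illustrative, not the paper's statistical analysis):

```python
import numpy as np

def normalize_points(pts):
    """Isotropic normalization used by the normalized eight-point
    algorithm: move the centroid to the origin and scale so the mean
    distance from the origin equals sqrt(2).

    pts: (n, 2) array of image points. Returns (pts_norm, T), where T
    is the 3x3 homogeneous transform satisfying pts_norm ~ T @ pts.
    """
    centroid = pts.mean(axis=0)
    shifted = pts - centroid
    mean_dist = np.sqrt((shifted ** 2).sum(axis=1)).mean()
    scale = np.sqrt(2.0) / mean_dist
    T = np.array([
        [scale, 0.0, -scale * centroid[0]],
        [0.0, scale, -scale * centroid[1]],
        [0.0, 0.0, 1.0],
    ])
    return shifted * scale, T
```

Applying this transform to each image's points before building the linear system, then denormalizing the estimated fundamental matrix, is what distinguishes the normalized from the unnormalized algorithm.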
102.
An increasing number of connectionist models have been proposed to explain behavioral deficits in developmental disorders. These simulations motivate serious consideration of the theoretical implications of the claim that a developmental disorder fits within the parameter space of a particular computational model of normal development. The authors examine these issues in depth with respect to a series of new simulations investigating past-tense formation in Williams syndrome. This syndrome and the past-tense domain are highly relevant because both have been used to make strong theoretical claims about the processes underlying normal language acquisition. The authors conclude that computational models have great potential to advance psychologists' understanding of developmental deficits because they focus on the developmental process itself as a pivotal causal factor in producing atypical phenotypic outcomes. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
103.
The current study investigated the impact of requiring respondents to elaborate on their answers to a biodata measure on mean scores, the validity of the biodata item composites, subgroup mean differences, and correlations with social desirability. Results of this study indicate that elaborated responses result in scores that are much lower than nonelaborated responses to the same items by an independent sample. Despite the lower mean score on elaborated items, it does not appear that elaboration affects the size of the correlation between social desirability and responses to biodata items or that it affects criterion-related validity or subgroup mean differences in a practically significant way. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
104.
This paper presents a physical model for calculating the strip temperature in a hot rolling mill. In addition to the heat transfer from the strip to the environment in the different mill sections, a sub-model for the material properties of steel, including phase transitions, is introduced. For use in a process automation system, an online adaptation of the model to the current state of the mill is indispensable; this adaptation is explained in detail.
105.
Three experiments investigated the effects of varying the conditioned stimulus (CS) duration between training and extinction. Ring doves (Streptopelia risoria) were autoshaped on a fixed CS-unconditioned stimulus (US) interval and extinguished with CS presentations that were longer, shorter, or the same as the training duration. During a subsequent test session, the training CS duration was reintroduced. Results suggest that the cessation of responding during an extinction session is controlled by generalization of excitation between the training and extinction CSs and by the number of nonreinforced CS presentations. Transfer of extinction to the training CS is controlled by the similarity between the extinction and training CSs. Extinction learning is temporally specific. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
107.
In this paper we consider several variants of Valiant's learnability model that have appeared in the literature. We give conditions under which these models are equivalent in terms of the polynomially learnable concept classes they define. These equivalences allow comparisons of most of the existing theorems in Valiant-style learnability and show that several simplifying assumptions on polynomial learning algorithms can be made without loss of generality. We also give a useful reduction of learning problems to the problem of finding consistent hypotheses, and give comparisons and equivalences between Valiant's model and the prediction learning models of Haussler, Littlestone, and Warmuth (in “29th Annual IEEE Symposium on Foundations of Computer Science,” 1988).
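The reduction to consistent-hypothesis finding can be illustrated on a toy concept class, closed intervals on the line: draw enough i.i.d. labeled examples, return any hypothesis consistent with them, and the PAC guarantee bounds its error with high probability. A standard textbook illustration, not the paper's general construction:

```python
import random

def consistent_interval(sample):
    """Consistent-hypothesis finder for the class of closed intervals:
    return the tightest interval covering all positive examples.
    sample: list of (x, label) pairs with boolean labels."""
    pos = [x for x, y in sample if y]
    if not pos:
        return (0.0, 0.0)  # degenerate stand-in for the empty concept
    return (min(pos), max(pos))

def pac_learn_interval(target, dist_sample, m):
    """Draw m examples labeled by the target interval and return a
    consistent hypothesis -- the shape of the reduction from learning
    to consistency."""
    sample = [(x, target[0] <= x <= target[1]) for x in dist_sample(m)]
    return consistent_interval(sample)
```

With `m` chosen polynomially in 1/ε and 1/δ, the returned interval has error below ε with probability at least 1 − δ under the sampling distribution.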
108.
A sensor-driven control model and a minimum effort control algorithm in terms of time and energy expended during the execution of a movement strategy are described and validated for a multijointed cooperating robotic manipulator. Considering smooth, human-like (anthropomorphic) movements, using joint motion profiles achievable in real time as well as sensory information from all joints, and evaluating the total work expended by each manipulator joint during the execution of a movement strategy, a minimum effort motion trajectory is synthesized to precisely and efficiently position the robotic arm end-effector. This sensor-based approach significantly reduces the computational requirements for such cooperative motion. The minimum effort control algorithm generates several human-like arm movement strategies and selects the best strategy on the basis of expendable effort. The algorithm has an inherent basis to deal with obstacles in an efficient way. Detailed examples are described from the simulation studies. © 1994 John Wiley & Sons, Inc.  相似文献
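The generate-then-select step can be sketched as scoring each candidate strategy by an effort proxy and keeping the cheapest. Here effort is approximated as the summed joint work |τ·q̇|·dt, a generic stand-in for the paper's combined time-and-energy criterion:

```python
import numpy as np

def trajectory_effort(torques, velocities, dt):
    """Approximate mechanical work along a joint trajectory: the sum of
    |tau_i * qdot_i| * dt over time samples and joints (a simple effort
    proxy, not the paper's exact cost).
    torques, velocities: (T, n_joints) arrays sampled every dt seconds."""
    return float(np.sum(np.abs(torques * velocities)) * dt)

def select_min_effort(candidates, dt):
    """Score candidate strategies (list of (torques, velocities) pairs)
    and return the index of the least-effort one with all scores."""
    efforts = [trajectory_effort(tau, qd, dt) for tau, qd in candidates]
    return int(np.argmin(efforts)), efforts
```

In the paper's setting the candidates would be the human-like strategies generated from sensed joint states, with obstacle-violating candidates discarded before selection.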
109.
Chemical mechanical polishing of polymer films (cited 2 times: 0 self-citations, 2 by others)
Strategies to reduce capacitance effects associated with shrinking integrated circuit (IC) design rules include incorporating low resistivity metals and insulators with low dielectric values, or “low-κ” materials. Using such materials in current IC fabrication schemes necessitates the development of reliable chemical mechanical polishing (CMP) processes and process consumables tailored for them. Here we present results of CMP experiments performed on FLARE™ 2.0 using a specialized zirconium oxide (ZrO2) polishing slurry. FLARE™ 2.0 is a poly(arylene) ether from AlliedSignal, Inc. with a nominal dielectric constant of 2.8. In addition, we provide insight into possible removal mechanisms during the CMP of organic polymers by examining the performance of numerous abrasive slurries. Although specific to a limited number of polymers, the authors suggest that the information presented in this paper is relevant to the CMP performance of many polymer dielectric materials.
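The first-order model usually used to reason about CMP removal rate is Preston's empirical relation, RR = K_p · P · v, where pressure and relative pad velocity enter linearly and slurry chemistry and abrasive effects are folded into the Preston coefficient K_p. A one-line sketch of that relation (the coefficient value in the test is purely illustrative):

```python
def preston_removal_rate(k_p, pressure, velocity):
    """Preston's empirical CMP relation RR = K_p * P * v.
    k_p: Preston coefficient (Pa^-1), pressure: down force (Pa),
    velocity: relative pad-wafer speed (m/s); returns rate in m/s."""
    return k_p * pressure * velocity
```

Deviations from this linear behavior across slurries are one way removal mechanisms, such as the chemical versus mechanical contributions discussed in the abstract, are diagnosed.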
110.
In this paper, we re-examine the results of prior work on methods for computing ad hoc joins. We develop a detailed cost model for predicting join algorithm performance, and we use the model to develop cost formulas for the major ad hoc join methods found in the relational database literature. We show that various pieces of “common wisdom” about join algorithm performance fail to hold up when analyzed carefully, and we use our detailed cost model to derive optimal buffer allocation schemes for each of the join methods examined here. We show that optimizing their buffer allocations can lead to large performance improvements, e.g., as much as a 400% improvement in some cases. We also validate our cost model's predictions by measuring an actual implementation of each join algorithm considered. The results of this work should be directly useful to implementors of relational query optimizers and query processing systems. Edited by M. Adiba. Received May 1993 / Accepted April 1996
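The flavor of such cost formulas, and why buffer allocation matters, can be seen in the textbook I/O estimates for two classic join methods (these are the standard simplified formulas, not the paper's detailed model):

```python
import math

def bnl_join_cost(R, S, B):
    """Textbook I/O cost of block nested-loops join: R and S are page
    counts of the outer and inner relation, B the buffer pages (B > 2;
    B - 2 pages hold a chunk of the outer, one reads the inner, one
    writes output). Each outer chunk forces a full scan of S."""
    chunks = math.ceil(R / (B - 2))
    return R + chunks * S

def grace_hash_join_cost(R, S):
    """Classic 3*(R + S) I/O estimate for Grace hash join, assuming
    enough buffers to partition each relation in one pass."""
    return 3 * (R + S)
```

For example, doubling the buffer pool available to block nested-loops halves the number of outer chunks and hence the number of inner scans, which is exactly the kind of allocation trade-off the paper optimizes across concurrently running joins.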