Full text (subscription): 291 articles
Free full text: 4 articles
Subject: Industrial Technology (295 articles)

Articles by year: 2022: 3; 2021: 3; 2020: 3; 2019: 2; 2018: 6; 2017: 4; 2016: 4; 2015: 2; 2014: 6; 2013: 19; 2012: 8; 2011: 6; 2010: 8; 2009: 7; 2008: 10; 2007: 8; 2006: 10; 2005: 16; 2004: 7; 2003: 10; 2002: 10; 2001: 6; 2000: 3; 1999: 6; 1998: 19; 1997: 14; 1996: 8; 1995: 2; 1994: 2; 1993: 6; 1992: 5; 1991: 5; 1990: 3; 1989: 2; 1988: 4; 1987: 5; 1986: 3; 1985: 3; 1984: 4; 1983: 2; 1982: 5; 1981: 4; 1980: 2; 1979: 3; 1977: 4; 1976: 5; 1975: 3; 1973: 2; 1971: 2; 1968: 2

295 results in total (search time: 0 ms).
61.
A face recognition system must recognize a face in a novel image despite variations between images of the same face. A common approach to overcoming image variations caused by changes in illumination conditions is to use image representations that are relatively insensitive to these variations, such as edge maps, image intensity derivatives, and images convolved with 2D Gabor-like filters. Here we present an empirical study that evaluates the sensitivity of these representations to changes in illumination, as well as in viewpoint and facial expression. Our findings indicate that none of the representations considered is sufficient by itself to overcome image variations caused by a change in the direction of illumination; similar results were obtained for changes due to viewpoint and expression. Image representations that emphasize horizontal features were found to be less sensitive to changes in the direction of illumination, yet systems based only on such representations failed to recognize up to 20 percent of the faces in our database. Humans performed considerably better under the same conditions. We discuss possible reasons for this superiority and alternative methods for overcoming illumination effects in recognition.
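The representations named in this abstract (edge maps, intensity derivatives, Gabor-filtered images) are standard image-processing operations. The sketch below is illustrative only, not the authors' code: it builds each representation for a grayscale array with plain NumPy and crudely probes its stability under a simulated illumination change. All function names and parameter values are ours.

```python
import numpy as np

def derivatives(img):
    """Horizontal and vertical intensity derivatives (finite differences)."""
    dy, dx = np.gradient(img.astype(float))
    return dx, dy

def edge_map(img, thresh=0.1):
    """Binary edge map: gradient magnitude above a fraction of its maximum."""
    dx, dy = derivatives(img)
    mag = np.hypot(dx, dy)
    return mag > thresh * mag.max()

def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    """Real part of a 2D Gabor filter oriented at angle theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def convolve2d(img, kernel):
    """Naive 'same'-size 2D convolution (adequate for small kernels)."""
    kh, kw = kernel.shape
    padded = np.pad(img.astype(float), ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel[::-1, ::-1])
    return out

# Toy comparison: how much does each representation change under a simulated
# illumination change (crudely modeled here as a horizontal brightness ramp)?
rng = np.random.default_rng(0)
face = rng.random((64, 64))
ramp = np.linspace(0.5, 1.5, 64)[None, :]       # simulated lighting gradient
relit = np.clip(face * ramp, 0.0, 1.0)

for name, rep in [("edge map", edge_map),
                  ("gabor", lambda im: convolve2d(im, gabor_kernel()))]:
    a, b = rep(face).astype(float), rep(relit).astype(float)
    print(f"{name}: mean representation change = {np.abs(a - b).mean():.3f}")
```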
62.
Expressions for the transition amplitudes of the centerband and sidebands of magic angle spinning (MAS) NMR spectra are derived by applying Floquet theory. Signal amplitudes are defined in terms of transition amplitude operators, which are linear combinations of the Floquet operators. Two examples of the utilization of the Floquet approach for the evaluation of MAS signals are presented. In the first, the REDOR experiment on heteronuclear spin systems is discussed, and the effects of finite pulse lengths on the REDOR signals are derived. The second example deals with the MAS spectrum of a spin coupled to a heteronuclear spin that is irradiated by an rf field; recoupled spectra are examined with the help of Floquet theory and decoupled spectra are evaluated. In all cases, the methodology of the Floquet approach is emphasized.
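For orientation, the relations below are the standard starting point of the Floquet treatment of MAS, restated here as textbook material rather than equations quoted from this paper: the rotor-periodic Hamiltonian has a finite Fourier expansion, so the detected signal splits into a centerband and sidebands spaced by the spinning frequency.

```latex
% Rotor-periodic internal Hamiltonian under MAS (rank-2 interactions give
% Fourier components n = -2,...,2; \omega_r is the spinning angular frequency):
H(t) \;=\; \sum_{n=-2}^{2} H^{(n)}\, e^{\,i n \omega_r t}.

% Floquet theory maps this periodic problem onto a time-independent one in an
% extended space; the observable signal then takes the generic sideband form
s(t) \;\propto\; \sum_{N=-\infty}^{\infty} A_N\, e^{\,i(\omega_{\mathrm{iso}} + N\omega_r)\, t},

% where the A_N (centerband for N = 0, sidebands otherwise) are the transition
% amplitudes that the paper expresses through transition amplitude operators.
```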
63.
64.
Suppose a directed graph has its arcs stored in secondary memory, and we wish to compute its transitive closure, also storing the result in secondary memory. We assume that an amount of main memory capable of holding s values is available, and that s lies between n, the number of nodes of the graph, and e, the number of arcs. The cost measure we use for algorithms is the I/O complexity of Kung and Hong, where we count 1 every time a value is moved into main memory from secondary memory, or vice versa. In the dense case, where e is close to n², we show that I/O equal to O(n³/√s) is sufficient to compute the transitive closure of an n-node graph, using main memory of size s. Moreover, it is necessary for any algorithm that is standard, in a sense to be defined precisely in the paper. Roughly, standard means that paths are constructed only by concatenating arcs and previously discovered paths. For the sparse case, we show that I/O equal to O(n²√e/√s) is sufficient, although the algorithm we propose meets our definition of standard only if the underlying graph is acyclic. We also show that Ω(n²√e/√s) is necessary for any standard algorithm in the sparse case. That settles the I/O complexity of the sparse/acyclic case, for standard algorithms. It is unknown whether this complexity can be achieved in the sparse, cyclic case, by a standard algorithm, and it is unknown whether the bound can be beaten by nonstandard algorithms. We then consider a special kind of standard algorithm, in which paths are constructed only by concatenating arcs and old paths, never by concatenating two old paths. This restriction seems essential if we are to take advantage of sparseness. Unfortunately, we show that almost another factor of n in I/O is necessary. That is, there is an algorithm in this class using I/O O(n³√e/√s) for arbitrary sparse graphs, including cyclic ones. Moreover, every algorithm in the restricted class must use Ω(n³√e/(√s log³n)) I/O on some cyclic graphs. The work of this author was partially supported by NSF grant IRI-87-22886, IBM contract 476816, Air Force grant AFOSR-88-0266, and a Guggenheim fellowship.
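As an in-memory illustration of the restricted "arcs and old paths" style of standard algorithm described above (a sketch for intuition only, not the paper's external-memory procedure; the function and variable names are ours), the following computes a transitive closure by repeatedly extending previously discovered paths with single arcs:

```python
from collections import defaultdict

def transitive_closure(arcs):
    """Return the set of (u, v) pairs such that v is reachable from u.

    Paths are grown only by concatenating an already-discovered path with a
    single arc ("arcs and old paths"). The paper's analysis charges for moving
    values between main and secondary memory; everything here stays in RAM,
    so only the logical structure of the algorithm is shown.
    """
    succ = defaultdict(set)
    for u, v in arcs:
        succ[u].add(v)

    closure = set(arcs)
    frontier = set(closure)                 # newly discovered paths
    while frontier:
        new_paths = set()
        for u, v in frontier:               # old path u -> v ...
            for w in succ[v]:               # ... extended by arc v -> w
                if (u, w) not in closure:
                    new_paths.add((u, w))
        closure |= new_paths
        frontier = new_paths
    return closure

# Example: a 4-node cycle plus a chord.
print(sorted(transitive_closure([(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)])))
```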
65.
We consider the parallel time complexity of logic programs without function symbols, called logical query programs, or Datalog programs. We give a PRAM algorithm for computing the minimum model of a logical query program, and show that for programs with the polynomial fringe property, this algorithm runs in time that is logarithmic in the input size, assuming that concurrent writes are allowed if they are consistent. As a result, the linear and piecewise linear classes of logic programs are in NC. We then examine several nonlinear classes in which the program has a single recursive rule that is an elementary chain. We show that certain nonlinear programs are related to GSM mappings of a balanced-parentheses language, and that this relationship implies the polynomial fringe property; hence such programs are in NC as well. Finally, we describe an approach for demonstrating that certain logical query programs are log-space complete for P, and apply it to both elementary single-rule programs and nonelementary programs. Supported by NSF Grant IST-84-12791, a grant from IBM Corporation, and ONR contract N00014-85-C-0731.
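As intuition for why the linear class falls inside NC (an illustrative sketch, not the paper's PRAM construction), a linear rule such as path(X,Y) :- arc(X,Y); path(X,Y) :- arc(X,Z), path(Z,Y) can be evaluated in O(log n) rounds of boolean matrix squaring, and each matrix product is itself highly parallel:

```python
import numpy as np

def minimum_model_path(arc_matrix):
    """Evaluate the linear 'path' program by repeated squaring.

    arc_matrix[i, j] == 1 iff arc(i, j) holds. After ceil(log2 n) rounds,
    reach[i, j] == 1 iff path(i, j) is in the minimum model. Each round is one
    boolean matrix product, which a PRAM can evaluate in O(log n) time with
    enough processors, giving polylogarithmic total parallel time.
    """
    reach = (arc_matrix > 0).astype(int)
    n = reach.shape[0]
    for _ in range(max(1, int(np.ceil(np.log2(n))))):
        # A path of length <= 2k is two concatenated paths of length <= k.
        reach = ((reach + reach @ reach) > 0).astype(int)
    return reach

arcs = np.zeros((5, 5), dtype=int)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    arcs[i, j] = 1
print(minimum_model_path(arcs))
```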
66.
In this work, we address a relatively unexplored aspect of designing agents that learn from human reward: how an agent's non-task behavior can affect a human trainer's training and the agent's learning. We use the TAMER framework, which facilitates the training of agents by human-generated reward signals, i.e., judgements of the quality of the agent's actions, as the foundation for our investigation. Starting from the premise that the interaction between the agent and the trainer should be bi-directional, we propose two new training interfaces to increase a human trainer's active involvement in the training process and thereby improve the agent's task performance. One provides information on the agent's uncertainty, a metric calculated as data coverage; the other reports its performance. Our results from a 51-subject user study show that these interfaces can induce trainers to train longer and give more feedback. The agent's performance, however, increases only in response to the addition of performance-oriented information, not to the sharing of uncertainty levels. These results suggest that the organizational maxim about human behavior, "you get what you measure" (i.e., sharing metrics with people causes them to focus on optimizing those metrics while de-emphasizing other objectives), also applies to the training of agents. Using principal component analysis, we show how trainers in the two conditions train agents differently. In addition, by simulating the influence of the agent's uncertainty-informative behavior on a human's training behavior, we show that trainers can be distracted by the agent sharing its uncertainty levels about its actions, giving poor feedback for the sake of reducing the agent's uncertainty without improving the agent's performance.
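A rough sketch of the kind of training loop TAMER-style learning implies (the tabular model, learning rate, scripted trainer, and coverage-based uncertainty below are illustrative assumptions, not the authors' implementation):

```python
import random
from collections import defaultdict

class TamerLikeAgent:
    """Tabular agent that learns a model of human reward H(s, a)."""

    def __init__(self, actions, lr=0.2):
        self.actions = actions
        self.lr = lr
        self.h = defaultdict(float)      # predicted human reward per (s, a)
        self.visits = defaultdict(int)   # data coverage per (s, a)

    def act(self, state):
        # Greedy with respect to the learned human-reward model.
        return max(self.actions, key=lambda a: self.h[(state, a)])

    def update(self, state, action, human_reward):
        # Move the prediction toward the trainer's feedback.
        key = (state, action)
        self.h[key] += self.lr * (human_reward - self.h[key])
        self.visits[key] += 1

    def uncertainty(self, state):
        # Coverage-style uncertainty: how little feedback this state has seen.
        seen = sum(self.visits[(state, a)] for a in self.actions)
        return 1.0 / (1.0 + seen)

# Toy episode with a scripted "trainer" that rewards the action "right".
agent = TamerLikeAgent(actions=["left", "right"])
for step in range(20):
    state = step % 3
    action = agent.act(state) if random.random() > 0.3 else random.choice(agent.actions)
    feedback = 1.0 if action == "right" else -1.0
    agent.update(state, action, feedback)
    print(step, state, action, round(agent.uncertainty(state), 2))
```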
67.
E-services are provided by many web-enabled companies; some of these services are competitive in nature, while others are collaborative. Few methodologies currently exist for measuring the value of e-collaboration among such e-service providers. The objective of this research is to measure collaborative intelligence (CI) in the knowledge-based service (KBS) industry and to identify measures for finding the best collaborators during the formation and functioning stages of collaborative networks. The model developed in this research, CIMK (collaborative intelligence measure of KBS), measures CI by multi-objective optimization over collaboration parameters and suggests optimal operating points for various clients with greater flexibility. CIMK allows decision makers to customize the model based on their knowledge of the industry. The CNOA (Collaborative Network Optimization Algorithm) is then applied to select the best providers for requests based on their CI levels. CNOA has been implemented on a HUB-CI (HUB with CI) platform, a next-generation collaboration support system developed at Purdue University. Three analytic experiments are designed and performed to validate the models: (1) in terms of usability, (2) to compare CIMK with alternative methods, and (3) to find the relative advantages of the CIMK model. The results indicate that the average service cost can decrease by close to 50% when operating points with high CI, suggested by the CIMK model, are implemented. The CI level computed by CIMK is successfully used as a decision parameter for the ongoing matching of e-service providers to different requests.
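Since the abstract does not spell out the optimization itself, the following is only a hypothetical, heavily simplified stand-in showing the shape of a CI-based provider-selection step: providers are scored on normalized collaboration parameters and a request is matched to the highest-scoring one. All names, fields, and weights are invented for illustration and are not the CIMK or CNOA formulations.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    quality: float         # normalized 0..1 collaboration parameters
    responsiveness: float
    cost: float            # normalized 0..1, lower is better

def collaborative_intelligence(p, weights=(0.5, 0.3, 0.2)):
    """Toy CI score: weighted combination of collaboration parameters.

    The real CIMK model uses multi-objective optimization; a weighted sum is
    only a stand-in so the selection step below has something to rank.
    """
    wq, wr, wc = weights
    return wq * p.quality + wr * p.responsiveness + wc * (1.0 - p.cost)

def select_provider(providers, weights=(0.5, 0.3, 0.2)):
    """Pick the provider with the highest CI score for an incoming request."""
    return max(providers, key=lambda p: collaborative_intelligence(p, weights))

providers = [
    Provider("A", quality=0.9, responsiveness=0.6, cost=0.7),
    Provider("B", quality=0.7, responsiveness=0.9, cost=0.4),
]
print(select_provider(providers).name)
```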
68.
A small-size broadband ultraviolet lamp with an emission spectrum of 206–390 nm, excited by a dc glow discharge, is described. The discharge was ignited in a quartz discharge tube with an inner diameter of 1.4 cm and an anode-cathode spacing of 10 cm. The tube was filled with a Kr/Xe/Br₂/I₂ working mixture at a total pressure of 0.5–2.0 kPa. The lamp’s emission spectrum consisted of a 206.2-nm atomic iodine line, 0.1 nm wide at half-height, and a continuum in the 210–390 nm spectral region. The continuum resulted from the overlap of wide emission bands with peaks at 221 nm for XeBr(D-X), 253 nm for XeI(B-X), 282 nm for XeBr(B-X), 289 nm for Br₂*, 342 nm for I₂*, and 386 nm for IBr*. The total power of the ultraviolet emission was no more than 8–12 W, with 10–100 W injected into the discharge. The lamp lifetime in the gas-static mode was 300–400 h.
69.
This study evaluates the influence of particle size, PEGylation, and surface coating on the quantitative biodistribution of near-infrared-emitting quantum dots (QDs) in mice. Polymer- or peptide-coated 64Cu-labeled QDs 2 or 12 nm in diameter, with or without polyethylene glycol (PEG) of molecular weight 2000, are studied by serial micropositron emission tomography imaging and region-of-interest analysis, as well as transmission electron microscopy and inductively coupled plasma mass spectrometry. PEGylation and peptide coating slow QD uptake into the organs of the reticuloendothelial system (RES), liver and spleen, by factors of 6–9 and 2–3, respectively. Small particles are in part renally excreted. Peptide-coated particles are cleared from the liver faster than physical decay alone would suggest. Renal excretion of small QDs and the slowing of RES clearance by PEGylation or peptide surface coating are encouraging steps toward the use of modified QDs for imaging living subjects.
70.
The assumption that the thermal effect (heating) is the sole factor to be considered when a microwave source is applied has been challenged by many reports, which often claim that athermal (non-thermal) effects exist as well. Such effects are claimed to change the chemical, biochemical, or physical behaviour of some systems while the temperature and all other parameters remain unaltered. The possibility of an athermal effect was tested in a number of chemical, biological, and physical systems in a well-controlled, high-radiation-intensity setup (2.45 GHz, up to 1000 W/kg, with continuous irradiation for up to 48 h). The systems tested included the Maillard reaction, protein denaturation and polymer solubility, mutagenesis of bacteria, the mutarotation equilibrium of α/β-D-glucose, and the saturation solubility of NaCl. All data failed to show any significant athermal effect. The results of this study contrast with what has previously been reported for some of the tested systems.