Subscription full text: 1287 articles
Free full text: 40 articles
Industrial technology: 1327 articles
By year:
2023: 6 articles
2022: 8 articles
2021: 38 articles
2020: 18 articles
2019: 18 articles
2018: 20 articles
2017: 20 articles
2016: 31 articles
2015: 18 articles
2014: 43 articles
2013: 58 articles
2012: 51 articles
2011: 88 articles
2010: 54 articles
2009: 50 articles
2008: 74 articles
2007: 62 articles
2006: 52 articles
2005: 50 articles
2004: 45 articles
2003: 26 articles
2002: 27 articles
2001: 26 articles
2000: 26 articles
1999: 21 articles
1998: 88 articles
1997: 59 articles
1996: 38 articles
1995: 23 articles
1994: 22 articles
1993: 18 articles
1991: 9 articles
1990: 14 articles
1989: 7 articles
1988: 10 articles
1987: 10 articles
1986: 4 articles
1985: 7 articles
1984: 8 articles
1983: 11 articles
1982: 5 articles
1981: 5 articles
1980: 11 articles
1979: 2 articles
1978: 4 articles
1977: 6 articles
1976: 12 articles
1975: 8 articles
1972: 3 articles
1968: 3 articles
A total of 1327 results were found (search time: 15 ms).
991.
This paper describes changes made to the Pascal-P compiler to improve the efficiency of its implementation on a single-accumulator, one-address computer, the PRIME 300. The aim of the project was to develop a true compiler rather than a threaded-code or pure-interpretation system. A comparison of timings for these three implementation methods is also presented.
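To make the three implementation styles concrete, the sketch below runs an invented stack-machine fragment three ways: decoded on every execution (pure interpretation), pre-decoded into a list of bound operations (threaded code), and translated ahead of time into host code (true compilation). The opcodes and program are assumptions for illustration only and are not the Pascal-P instruction set.

    # Toy program for an invented stack machine: (2 + 3) * 4
    PROGRAM = [("push", 2), ("push", 3), ("add", None), ("push", 4), ("mul", None)]

    def interpret(program):
        """Pure interpretation: decode every opcode each time it is executed."""
        stack = []
        for op, arg in program:
            if op == "push":
                stack.append(arg)
            elif op == "add":
                stack.append(stack.pop() + stack.pop())
            elif op == "mul":
                stack.append(stack.pop() * stack.pop())
        return stack.pop()

    def thread(program):
        """Threaded code: decode once into a list of directly callable operations."""
        table = {
            "push": lambda arg: lambda s: s.append(arg),
            "add":  lambda arg: lambda s: s.append(s.pop() + s.pop()),
            "mul":  lambda arg: lambda s: s.append(s.pop() * s.pop()),
        }
        return [table[op](arg) for op, arg in program]

    def run_threaded(code):
        stack = []
        for step in code:
            step(stack)
        return stack.pop()

    def compile_to_host(program):
        """'True compilation': translate the whole program into host code ahead of time."""
        body, depth = [], 0
        for op, arg in program:
            if op == "push":
                body.append(f"v{depth} = {arg}")
                depth += 1
            else:
                sym = "+" if op == "add" else "*"
                body.append(f"v{depth - 2} = v{depth - 2} {sym} v{depth - 1}")
                depth -= 1
        namespace = {}
        exec("def f():\n    " + "\n    ".join(body + ["return v0"]), namespace)
        return namespace["f"]

    # All three produce 20; only where the decoding cost is paid differs.
    print(interpret(PROGRAM), run_threaded(thread(PROGRAM)), compile_to_host(PROGRAM)())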
992.
Deterioration in the quality of palm kernels manifests itself in the development of high acidity, caused by inadequate drying, kernel breakage, microbiological activity and insect attack, and in the development of brown endosperms caused by overheating, either during fruit sterilisation or through biological heating under unsatisfactory storage conditions of both uncracked nuts and kernels. There are indications that the incidence of such quality deterioration will increase with increasing mechanisation of the industry in the producing territories unless care is taken. Ways in which the defects can be overcome are outlined. The development of brown endosperms is due to the Maillard reaction between free amino groups of the proteins and sugars present in the kernels, and results in denaturation of the proteins. In severe browning, which produces oil-soluble melanoidins, there is evidence that these are formed by reactions of protein amino groups with α,β-unsaturated aldehydes resulting from the oxidative splitting of unsaturated fatty acids.
993.
In this paper, we present an overview of the design of algorithms for iterative detection over channels with memory. The starting point for all the algorithms is the implementation of soft-input soft-output maximum a posteriori (MAP) symbol detection strategies for transmissions over channels with unknown parameters, either stochastic or deterministic. The proposed solutions represent effective ways to reach this goal. The described algorithms are grouped into three categories: i) we first introduce algorithms for adaptive iterative detection, where the unknown channel parameters are explicitly estimated; ii) we then consider finite-memory iterative detection algorithms, based on ad hoc truncation of the channel memory and often interpretable as relying on an implicit estimation of the channel parameters; and iii) finally, we present a general detection-theoretic approach to deriving optimal detection algorithms with polynomial complexity. A few illustrative numerical results are also presented.
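As a point of reference for the MAP strategies discussed above, the symbol-wise MAP decision over a received sequence can be written as follows (notation assumed here rather than taken from the paper): for each transmitted symbol a_k and observation vector \mathbf{r},

    \hat{a}_k = \arg\max_{a_k} P(a_k \mid \mathbf{r}) = \arg\max_{a_k} \sum_{\mathbf{a}\,:\,a_k \text{ fixed}} p(\mathbf{r} \mid \mathbf{a})\, P(\mathbf{a})

where the marginalisation over the remaining symbols is what forward-backward-style soft-input soft-output algorithms compute efficiently; when the channel parameters are unknown, p(\mathbf{r} \mid \mathbf{a}) must additionally be averaged over stochastic parameters or estimated in the deterministic case.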
994.
We investigate the issue of designing a kernel programming language for mobile computing and describe KLAIM, a language that supports a programming paradigm in which processes, like data, can be moved from one computing environment to another. The language consists of a Linda core with multiple tuple spaces and a set of operators for building processes. KLAIM naturally supports programming with explicit localities. Localities are first-class data (they can be manipulated like any other data), but the language provides coordination mechanisms to control the interaction protocols among located processes. The formal operational semantics is useful for discussing the design of the language and provides guidelines for implementations. KLAIM is equipped with a type system that statically checks access-right violations of mobile agents. Types are used to describe the intentions (read, write, execute, etc.) of processes in relation to the various localities. The type system is used to determine the operations that processes want to perform at each locality, and to check whether they comply with the declared intentions and whether they have the necessary rights to perform the intended operations at the specific localities. Via a series of examples, we show that many mobile-code programming paradigms can be naturally implemented in our kernel language. We also present a prototype implementation of KLAIM in Java.
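The coordination style described above, multiple tuple spaces addressed through explicit localities, can be sketched in a few lines. The sketch below is an invented illustration of the Linda-with-localities idea, not KLAIM's actual syntax, semantics, or type system.

    # Toy tuple spaces with explicit localities (names and operations are assumptions).
    class TupleSpace:
        def __init__(self):
            self.tuples = []
        def out(self, *t):          # non-blocking write of a tuple
            self.tuples.append(t)
        def read(self, pattern):    # non-destructive match; None acts as a wildcard
            for t in self.tuples:
                if len(t) == len(pattern) and all(p is None or p == x for p, x in zip(pattern, t)):
                    return t
            return None
        def inp(self, pattern):     # destructive match: withdraw the matching tuple
            t = self.read(pattern)
            if t is not None:
                self.tuples.remove(t)
            return t

    # Explicit localities: the "net" maps locality names to tuple spaces.
    net = {"client": TupleSpace(), "server": TupleSpace()}

    net["server"].out("job", 42)             # place a tuple at locality "server"
    job = net["server"].inp(("job", None))   # withdraw it by pattern matching
    print(job)                               # ('job', 42)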
995.
Mercury is a globally dispersed, toxic pollutant that can be transported far from its emission sources. In polar and subpolar regions, recent research has demonstrated its ability to be converted and deposited rapidly onto snow surfaces during so-called Mercury Depletion Events (MDEs). The fate of mercury once deposited onto snow surfaces is still unclear: part may be re-emitted to the atmosphere, while the rest may contaminate water systems at snowmelt. Its capacity to transform into more toxic forms and to bioaccumulate in the food chain has consequently made mercury a threat to Arctic ecosystems. The snowpack is a medium that interacts strongly with a variety of atmospheric gases. Its role in the fate of deposited mercury is crucial, yet poorly understood. In April 2002, we studied interstitial gaseous mercury (IGM), the gaseous mercury present in the air of the snowpack, at Kuujjuarapik/Whapmagoostui (55 degrees N, 77 degrees W), Canada, on the east shore of Hudson Bay. We report here for the first time continuous IGM measurements at various depths inside a seasonal snowpack. IGM concentrations exhibit a well-marked diurnal cycle, with uninterrupted events of Hg0 depletion and production within the snowpack. A possible explanation of Hg0 depletion within the snowpack is Hg0 oxidation; we further assume that the notable production of Hg0 during the daytime may be the result of photoreduction and photoinitiated reduction of Hg(II) complexes. These new observations show that the snowpack undoubtedly plays a role in the global mercury cycle.
996.
While much research attention has been paid to transitioning from requirements to software architectures, relatively little attention has been paid to how new requirements are affected by an existing system architecture. Specifically, no scientific studies have been conducted on the "characteristic" differences between newly elicited requirements gathered in the presence or absence of an existing software architecture (SA). This paper describes an exploratory controlled study investigating such requirements characteristics. We identify a multitude of characteristics (e.g., end-user focus, technological focus, and importance) that were affected by the presence or absence of an SA, together with the extent of this effect. Furthermore, we identify the specific aspects of the architecture that had an impact on these characteristics. The study results have implications for RE process engineering, post-requirements analysis, requirements engineering tools, traceability management, and future empirical work in RE, based on several hypotheses emerging from this study.
997.
This paper investigates the comparative performance of several information-driven search strategies and decision rules using a canonical target classification problem. Five sensor models are considered: one obtained from classical estimation theory and four obtained from Bernoulli, Poisson, binomial, and mixture-of-binomial distributions. A systematic approach is presented for deriving information functions that represent the expected utility of future sensor measurements from mutual information, Rényi divergence, Kullback-Leibler divergence, information potential, quadratic entropy, and the Cauchy-Schwarz distance. The resulting information-driven strategies are compared to direct-search, alert-confirm, task-driven (TS), and log-likelihood-ratio (LLR) search strategies. Extensive numerical simulations show that quadratic entropy typically leads to the most effective search strategy with respect to correct-classification rates. In the presence of prior information, the quadratic-entropy-driven strategy also displays the lowest rate of false alarms. However, when prior information is absent or very noisy, the TS and LLR strategies achieve the lowest false-alarm rates for the Bernoulli, mixture-of-binomial, and classical sensor models.
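For reference, two of the information functions named above have the following standard definitions for discrete distributions p and q (notation assumed here): the Kullback-Leibler divergence

    D_{KL}(p \| q) = \sum_x p(x) \log \frac{p(x)}{q(x)}

and the Rényi divergence of order \alpha > 0, \alpha \neq 1,

    D_\alpha(p \| q) = \frac{1}{\alpha - 1} \log \sum_x p(x)^\alpha \, q(x)^{1 - \alpha}

which recovers the Kullback-Leibler divergence in the limit \alpha \to 1.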
998.
This paper addresses the problem of engineering energy-efficient target detection applications, using unattended Wireless Sensor Networks (WSNs) with random node deployment and partial coverage, for long-lasting surveillance of areas of interest. As battery energy depletion is a crucial issue, an effective approach is to switch the sensing and communication modules of wireless sensor nodes on and off according to suitable duty cycles. Making these modules work intermittently has an impact on (i) the latency of notification transmission (depending on the communication duty cycle), (ii) the probability of missed target detection (depending on the number of deployed nodes, the sensing duty cycle, and the number of incoming targets), and (iii) the delay in detecting an incoming target. In order to optimize the system parameters to reach given performance objectives, we first derive an analytical framework that allows us to evaluate the probability of missed target detection (in the presence of either single or multiple incoming targets), the notification transmission latency, the detection delay, and the network lifetime. We then show how this "toolbox" can be used to optimally configure system parameters under realistic performance constraints.
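As a back-of-the-envelope illustration of the trade-off between duty cycle, node count, and missed detection (a deliberately simplified independence model, not the paper's analytical framework), suppose each of n randomly deployed nodes covers the target's position with probability p_cov and has its sensing module on with duty cycle d when the target passes:

    def miss_probability(n_nodes: int, p_cov: float, duty_cycle: float) -> float:
        """Probability that no node detects a single incoming target, assuming
        independent nodes, each detecting with probability p_cov * duty_cycle."""
        return (1.0 - p_cov * duty_cycle) ** n_nodes

    # Example with invented numbers: 200 nodes, 5% per-node coverage, 20% sensing duty cycle.
    print(miss_probability(200, 0.05, 0.20))   # ~0.13; raising d or n lowers the miss rate
                                               # at the cost of energy or deployment density.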
999.
Although widely used, terms associated with alcohol consumption--such as "light," "moderate," and "heavy"--are unstandardized. Physicians conveying health messages using these terms may therefore impart confusing information to their patients or to other physicians. As an initial attempt to assess whether informal standardization exists for these terms, the present study surveyed physicians for their definitions. Physicians operationally defined "light" drinking as 1.2 drinks/day, "moderate" drinking as 2.2 drinks/day, and "heavy" drinking as 3.5 drinks/day. Abusive drinking was defined as 5.4 drinks/day. There was considerable agreement on these operational definitions, indicating an informal consensus among physicians as to what they mean by these terms. Gender and age did not influence the definitions, but physicians' self-reported drinking was a factor. We also asked physicians for their opinions regarding the effects of "light," "moderate," and "heavy" drinking on health in general, the health-related implications for pregnant women in particular, and whether they felt their patients shared these beliefs.
1000.

Recommender systems for requirements are typically built on the assumption that similar requirements can be used as proxies to retrieve similar software. When a stakeholder proposes a new requirement, natural language processing (NLP)-based similarity metrics can be exploited to retrieve existing requirements and, in turn, identify previously developed code. Several NLP approaches for similarity computation between requirements are available, but there is little empirical evidence on their effectiveness for code retrieval. This study compares different NLP approaches, from lexical to semantic, deep learning-based techniques, and correlates the similarity among requirements with the similarity of their associated software. The evaluation is conducted on real-world requirements from two industrial projects of a railway company. Specifically, the most similar pairs of requirements across the two projects are automatically identified using six language models. The trace links between requirements and software are then used to identify the software pairs associated with each requirements pair, and the software similarity between pairs is computed automatically with JPLag. Finally, the correlation between requirements similarity and software similarity is evaluated to see which language model shows the highest correlation and is thus more appropriate for code retrieval. In addition, we performed a focus group with members of the company to collect qualitative data. Results show a moderately positive correlation between requirements similarity and software similarity, with the pre-trained, deep learning-based BERT language model with preprocessing outperforming the other models. Practitioners confirm that requirements similarity is generally regarded as a proxy for software similarity, but they also highlight that additional aspects come into play when deciding on software reuse, e.g., domain/project knowledge, information coming from test cases, and trace links. Our work is among the first to explore the relationship between requirements similarity and software similarity from both a quantitative and a qualitative standpoint. This can be useful not only in recommender systems but also in other requirements engineering tasks in which similarity computation is relevant, such as tracing and change impact analysis.
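A minimal sketch of the quantitative part of this analysis follows, using TF-IDF cosine similarity as a simple lexical stand-in for the six language models evaluated in the study; the requirement texts and software-similarity scores below are invented placeholders rather than data from the railway projects.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity
    from scipy.stats import spearmanr

    # Invented requirement pairs standing in for pairs retrieved across two projects.
    requirement_pairs = [
        ("The system shall log every braking command.", "All braking commands shall be recorded."),
        ("The operator can silence an alarm for 30 s.", "Alarms may be muted by the operator."),
        ("Speed shall be displayed in km/h.", "The HMI shows the current speed in km/h."),
        ("Doors shall lock above 5 km/h.", "The system shall report its software version."),
    ]
    # Hypothetical similarity of the software traced to each requirement pair (e.g., from JPLag).
    software_similarity = [0.72, 0.41, 0.65, 0.05]

    # Requirements similarity: cosine similarity of TF-IDF vectors within each pair.
    vectorizer = TfidfVectorizer()
    requirements_similarity = []
    for a, b in requirement_pairs:
        tfidf = vectorizer.fit_transform([a, b])
        requirements_similarity.append(cosine_similarity(tfidf[0], tfidf[1])[0, 0])

    # Rank correlation between the two similarity signals.
    rho, _ = spearmanr(requirements_similarity, software_similarity)
    print(f"Spearman correlation: {rho:.2f}")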