81.
We argue that expert finding is sensitive to multiple document features in an organizational intranet. These document features include multiple levels of association between experts and a query topic, from the sentence and paragraph levels up to the document level; document authority information such as the PageRank, indegree, and URL length of documents; and internal document structures that indicate the experts' relationship with the content of documents. Our assumption is that expert finding can benefit substantially from the incorporation of these document features. However, existing language modeling approaches for expert finding have not sufficiently taken these document features into account. We propose a novel language modeling approach for expert finding that integrates multiple document features. Our experiments on two large-scale TREC Enterprise Track datasets, i.e., the W3C and CSIRO datasets, demonstrate that the nature of the two organizational intranets and the two types of expert finding task, i.e., key contact finding for CSIRO and knowledgeable person finding for W3C, influence the effectiveness of different document features. Our work provides insights into which document features work for certain types of expert finding task, and helps design expert finding strategies that are effective for different scenarios. Our main contribution is to develop an effective formal method for modeling multiple document features in expert finding, and to conduct a systematic investigation of their effects. It is worth noting that our novel approach achieves better results in terms of MAP than previous language-model-based approaches and the best automatic runs in both the TREC 2006 and TREC 2007 expert search tasks.
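The feature combination described above can be sketched as a weighted blend of query-association scores computed at several window granularities, scaled by a document-authority prior. This is an illustrative toy, not the paper's formal language model; the function names, window sizes, and weights are all assumptions.

```python
from collections import Counter

def window_score(tokens, query_terms, size):
    """Best query-term coverage over sliding windows of a given size."""
    best = 0.0
    for i in range(max(1, len(tokens) - size + 1)):
        window = Counter(tokens[i:i + size])
        hit = sum(min(window[t], 1) for t in query_terms)
        best = max(best, hit / len(query_terms))
    return best

def expert_score(doc_tokens, query_terms, authority, weights=(0.5, 0.3, 0.2)):
    """Blend sentence-, paragraph-, and document-level association with a
    document-authority prior (e.g. a PageRank score); the window sizes and
    weights here are illustrative, not tuned values from the paper."""
    s = window_score(doc_tokens, query_terms, 15)               # ~sentence
    p = window_score(doc_tokens, query_terms, 80)               # ~paragraph
    d = window_score(doc_tokens, query_terms, len(doc_tokens))  # document
    assoc = weights[0] * s + weights[1] * p + weights[2] * d
    return assoc * authority
```

A document mentioning the query terms close together, with high authority, would then outrank one where the terms never co-occur.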
82.
83.
Context: Software quality is a complex concept. Therefore, assessing and predicting it is still challenging in practice as well as in research. Activity-based quality models break down this complex concept into concrete definitions, more precisely facts about the system, process, and environment as well as their impact on activities performed on and with the system. However, these models lack an operationalisation that would allow them to be used in the assessment and prediction of quality. Bayesian networks have been shown to be a viable means for this task, incorporating variables with uncertainty. Objective: The qualitative knowledge contained in activity-based quality models is an abundant basis for building Bayesian networks for quality assessment. This paper describes a four-step approach for systematically deriving a Bayesian network from an assessment goal and a quality model. Method: The four steps of the approach are explained in detail and with running examples. Furthermore, an initial evaluation is performed, in which data from NASA projects and an open source system is obtained. The approach is applied to this data and its applicability is analysed. Results: The approach is applicable to the data from the NASA projects and the open source system. However, the predictive results vary depending on the availability and quality of the data, especially the underlying general distributions. Conclusion: The approach is viable in a realistic context but needs further investigation in case studies in order to analyse its predictive validity.
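The derivation the abstract describes can be illustrated with a minimal hand-built network: two boolean fact nodes about the system carry priors, a conditional probability table links them to one activity node, and the marginal is computed by full enumeration. The node names, priors, and table entries below are invented for illustration and are not taken from the paper's evaluation.

```python
from itertools import product

# Hypothetical facts about the system, with P(fact = true).
priors = {"deep_nesting": 0.3, "missing_docs": 0.5}

# P(modification effort is high | deep_nesting, missing_docs).
cpt_high_effort = {
    (True, True): 0.9, (True, False): 0.6,
    (False, True): 0.5, (False, False): 0.1,
}

def p_high_effort():
    """Marginalize the activity node over all fact states by enumeration."""
    total = 0.0
    for nesting, docs in product([True, False], repeat=2):
        p = priors["deep_nesting"] if nesting else 1 - priors["deep_nesting"]
        p *= priors["missing_docs"] if docs else 1 - priors["missing_docs"]
        total += p * cpt_high_effort[(nesting, docs)]
    return total
```

With these numbers the marginal probability of high modification effort is 0.435; a real assessment network would have many such fact nodes, each derived from a quality-model entry.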
84.
Absorption-based opto-chemical sensors for oxygen are presented that consist of leuco dyes (leuco indigo and leuco thioindigo) incorporated into two kinds of polymer matrices. An irreversible and visible color change (to red or blue) is caused by a chromogenic chemistry involving the oxidation of the (virtually colorless) leuco dyes by molecular oxygen. The moderately gas permeable copolymer poly(styrene-co-acrylonitrile) and a highly oxygen-permeable polyurethane hydrogel, respectively, are used in order to increase the effective dynamic range for visualizing and detecting oxygen. We describe the preparation and properties of four different types of such oxygen sensors that are obtained by dip-coating a gas impermeable foil made from poly(ethylene terephthalate) with a sensor layer composed of leuco dye and polymer.
85.
On the purpose of Event-B proof obligations   (Times cited: 2; self-citations: 2; citations by others: 0)
Event-B is a formal modelling method which is claimed to be suitable for diverse modelling domains, such as reactive systems and sequential program development. This claim hinges on the fact that any particular model has an appropriate semantics. In Event-B, this semantics is provided implicitly by proof obligations associated with a model. There is no fixed semantics though. In this article we argue that this approach is beneficial to modelling because we can use similar proof obligations across a variety of modelling domains. By way of two examples we show how similar proof obligations are linked to different semantics. A small set of proof obligations is thus suitable for a whole range of modelling problems in diverse modelling domains.
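As a concrete instance of the kind of proof obligation discussed here, invariant preservation (INV) requires, for each event with guard $G$ over variables $v$ and parameters $x$, and before–after predicate $\mathit{BA}$, that the invariant $I$ is re-established in the after-state. In simplified form (omitting contexts and typing hypotheses):

```latex
I(v) \;\wedge\; G(v, x) \;\wedge\; \mathit{BA}(v, x, v') \;\Rightarrow\; I(v')
```

The same syntactic shape serves reactive-system models and sequential program refinements alike, which is the reuse across modelling domains that the article argues for.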
86.
An asymmetric multivariate generalization of the recently proposed class of normal mixture GARCH models is developed. Issues of parametrization and estimation are discussed. Conditions for covariance stationarity and the existence of the fourth moment are derived, and expressions for the dynamic correlation structure of the process are provided. In an application to stock market returns, it is shown that the disaggregation of the conditional (co)variance process generated by the model provides substantial intuition. Moreover, the model exhibits a strong performance in calculating out-of-sample Value-at-Risk measures.
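The basic idea of a normal mixture GARCH process can be sketched with a simplified univariate simulator: the conditional variance follows GARCH(1,1) dynamics and the innovation is drawn from a zero-mean two-component normal mixture whose component variances average to one. The full model in the paper is multivariate and gives each mixture component its own variance process; this single-variance-process toy, with invented parameter values, only illustrates the mixture-innovation mechanism.

```python
import random

def simulate_nm_garch(n, omega=0.05, alpha=0.08, beta=0.9,
                      weights=(0.8, 0.2), comp_vars=(0.6, 2.6), seed=7):
    """Simulate a simplified normal mixture GARCH(1,1) path.

    comp_vars are the mixture-component variances; with the weights above
    they average to 1 (0.8*0.6 + 0.2*2.6 = 1.0), so h_t remains the
    conditional variance of the return. All parameter values are
    illustrative, not estimates from the paper."""
    rng = random.Random(seed)
    h, eps_prev, out = 1.0, 0.0, []
    for _ in range(n):
        h = omega + alpha * eps_prev ** 2 + beta * h     # GARCH(1,1) recursion
        k = 0 if rng.random() < weights[0] else 1        # pick mixture component
        z = rng.gauss(0.0, comp_vars[k] ** 0.5)          # mixture innovation
        eps_prev = z * h ** 0.5
        out.append(eps_prev)
    return out
```

The low-weight, high-variance component produces the occasional large return that a single-Gaussian GARCH underrepresents, which is what drives the Value-at-Risk performance mentioned above.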
87.
We have to deal with different data formats whenever data formats evolve or data must be integrated from heterogeneous systems. When such data are encoded in XML for exchange, they cannot be shared freely among applications without transformation. A common approach to this problem is to convert the entire XML document from its source format to each application's target format using the transformation rules specified in XSLT stylesheets. In many cases, however, only a small part of the XML data, described by a user's query (application), needs to be transformed. In this paper, we present an approach that optimizes the execution time of an XSLT stylesheet for answering a given XPath query by modifying the stylesheet so that it (a) captures only the parts of the XML data that are relevant to the query and (b) processes only those XSLT instructions that are relevant to the query. We prove the correctness of our optimization approach, analyze its complexity, and present experimental results. The experimental results show that our approach performs best in terms of execution time, especially when many cost-intensive XSLT instructions can be excluded from the XSLT stylesheet.
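The pruning step (b) can be illustrated with a toy model: template rules are keyed by a simplified match path, and a rule survives only if its path lies on the query path, i.e. one is a prefix of the other. Real XSLT match patterns have predicates, wildcards, and modes, none of which this sketch handles; it is an assumption-laden illustration, not the paper's algorithm.

```python
def relevant_templates(templates, query_path):
    """Keep only template rules whose match path lies on the query path
    (an ancestor of the query target, the target itself, or a descendant).
    Toy model: paths are '/'-separated element names, no predicates."""
    q = [s for s in query_path.split("/") if s]
    kept = {}
    for match, body in templates.items():
        m = [s for s in match.split("/") if s]
        k = min(len(m), len(q))
        if m[:k] == q[:k]:          # one path is a prefix of the other
            kept[match] = body
    return kept
```

For a query `/doc/item`, a rule matching `/doc/other` is dropped, while rules matching `/doc` (needed to reach the target) and `/doc/item` are kept.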
88.
89.
This paper presents a new algorithm for the dynamic multi-level capacitated lot sizing problem with setup carry-overs (MLCLSP-L). The MLCLSP-L is a big-bucket model that allows the production of any number of products within a period, but it incorporates partial sequencing of the production orders in the sense that the first and the last products produced in a period are determined by the model. We solve a model which is applicable to general bill-of-material structures and which includes minimum lead times of one period and multi-period setup carry-overs. Our algorithm solves a series of mixed-integer linear programs in an iterative so-called fix-and-optimize approach. In each instance of these mixed-integer linear programs a large number of binary setup variables is fixed whereas only a small subset of these variables is optimized, together with the complete set of the inventory and lot size variables. A numerical study shows that the algorithm provides high-quality results and that the computational effort is moderate.
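The fix-and-optimize idea generalizes beyond lot sizing: fix most binary variables, optimize a small block exactly, accept improvements, and sweep. The skeleton below replaces the paper's mixed-integer linear programs with brute-force enumeration of the free block over an arbitrary cost function, so it only conveys the decomposition scheme, not the MLCLSP-L formulation.

```python
from itertools import product

def fix_and_optimize(cost, n, block_size=2, sweeps=3):
    """Generic fix-and-optimize skeleton over n binary variables.

    `cost` maps a binary tuple to a number. Each step frees one small
    block of variables, enumerates it exactly while all others stay
    fixed, and keeps any improvement; the exact-subproblem solver of the
    paper (a MIP over setup, inventory, and lot-size variables) is
    replaced here by brute force for illustration."""
    x = [0] * n
    best = cost(tuple(x))
    for _ in range(sweeps):
        for start in range(0, n, block_size):
            idx = range(start, min(start + block_size, n))
            for assignment in product([0, 1], repeat=len(idx)):
                trial = list(x)
                for i, v in zip(idx, assignment):
                    trial[i] = v
                c = cost(tuple(trial))
                if c < best:
                    best, x = c, trial
    return x, best
```

Because each subproblem is solved exactly while the rest of the solution is held fixed, the objective can only improve monotonically, which is what makes the approach well-behaved on large instances.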
90.
Topography and accuracy of image geometric registration significantly affect the quality of satellite data, since pixels are displaced depending on surface elevation and viewing geometry. This effect should be corrected for through the process of accurate image navigation and orthorectification in order to meet the geolocation accuracy for systematic observations specified by the Global Climate Observing System (GCOS) requirements for satellite climate data records. We investigated the impact of orthorectification on the accuracy of maximum Normalized Difference Vegetation Index (NDVI) composite data for a mountain region in north-western Canada at various spatial resolutions (1 km, 4 km, 5 km, and 8 km). Data from AVHRR on board NOAA-11 (1989 and 1990) and NOAA-16 (2001, 2002, and 2003) processed using a system called CAPS (Canadian AVHRR Processing System) for the month of August were considered. Results demonstrate the significant impact of orthorectification on the quality of composite NDVI data in mountainous terrain. Differences between orthorectified and non-orthorectified NDVI composites (ΔNDVI) adopted both large positive and negative values, with the 1% and 99% percentiles of ΔNDVI at 1 km resolution spanning values between − 0.16 < ΔNDVI < 0.09. Differences were generally reduced to smaller numbers for coarser resolution data, but systematic positive biases for non-orthorectified composites were obtained at all spatial resolutions, ranging from 0.02 (1 km) to 0.004 (8 km). Analyzing the power spectra of maximum NDVI composites at 1 km resolution, large differences between orthorectified and non-orthorectified AVHRR data were identified at spatial scales between 4 km and 10 km. Validation of NOAA-16 AVHRR NDVI with MODIS NDVI composites revealed higher correlation coefficients (by up to 0.1) for orthorectified composites relative to the non-orthorectified case. 
Uncertainties due to the AVHRR Global Area Coverage (GAC) sampling scheme introduce an average positive bias of 0.02 ± 0.03 at maximum NDVI composite level, which translates into an average relative bias of 10.6% ± 19.1% for sparsely vegetated mountain regions. This can at least partially explain the systematic average positive biases we observed relative to our results in AVHRR GAC-based composites from the Global Inventory Modeling and Mapping Studies (GIMMS) and Polar Pathfinder (PPF) datasets (0.19 and 0.05, respectively). With regard to the generation of AVHRR long-term climate data records, results suggest that orthorectification should be an integral part of AVHRR pre-processing, since neglecting the terrain displacement effect may lead to important biases and additional noise in time series at various spatial scales.
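The magnitude of the terrain displacement effect behind these results can be gauged with the standard flat-earth approximation d = h·tan(θ), where h is the surface elevation and θ the view zenith angle; AVHRR view zenith angles grow large toward the swath edge, so off-nadir pixels over mountains shift by whole pixel widths. The numbers below are a worked example, not values from the study.

```python
import math

def terrain_displacement_km(elevation_m, view_zenith_deg):
    """Horizontal displacement of a pixel caused by terrain elevation under
    an oblique view, flat-earth approximation: d = h * tan(theta)."""
    return elevation_m * math.tan(math.radians(view_zenith_deg)) / 1000.0
```

For example, a 2000 m peak viewed at a 45° zenith angle is displaced by about 2 km, i.e. two full AVHRR pixels at 1 km resolution, while a nadir view shows no displacement at all.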