Full-text access type
Paid full text | 2650 articles |
Free | 144 articles |
Free (domestic) | 2 articles |
Subject classification
Industrial technology | 2796 articles |
Publication year
2023 | 27 articles |
2022 | 33 articles |
2021 | 110 articles |
2020 | 63 articles |
2019 | 54 articles |
2018 | 63 articles |
2017 | 74 articles |
2016 | 91 articles |
2015 | 55 articles |
2014 | 86 articles |
2013 | 163 articles |
2012 | 133 articles |
2011 | 170 articles |
2010 | 115 articles |
2009 | 133 articles |
2008 | 118 articles |
2007 | 91 articles |
2006 | 86 articles |
2005 | 74 articles |
2004 | 53 articles |
2003 | 65 articles |
2002 | 52 articles |
2001 | 36 articles |
2000 | 32 articles |
1999 | 47 articles |
1998 | 196 articles |
1997 | 116 articles |
1996 | 97 articles |
1995 | 44 articles |
1994 | 52 articles |
1993 | 46 articles |
1992 | 22 articles |
1991 | 14 articles |
1990 | 9 articles |
1989 | 22 articles |
1988 | 11 articles |
1987 | 9 articles |
1986 | 17 articles |
1985 | 17 articles |
1984 | 8 articles |
1983 | 10 articles |
1982 | 8 articles |
1981 | 12 articles |
1980 | 4 articles |
1979 | 4 articles |
1977 | 8 articles |
1976 | 15 articles |
1974 | 3 articles |
1969 | 3 articles |
1967 | 7 articles |
Sorted by: 2796 results found; search took 31 ms
63.
Felisari L, Grillo V, Jabeen F, Rubini S, Menozzi C, Rossi F, Martelli F. Ultramicroscopy, 2011, 111(8): 1018-1028
A dedicated specimen holder has been designed to perform low-voltage scanning transmission electron microscopy in dark-field mode. Different test samples, namely InGaAs/GaAs quantum wells, InGaAs nanowires and thick InGaAs layers, were analysed to test the reliability of the model, based on proportionality to the specimen mass-thickness, that is generally used to interpret the image intensity of scattering-contrast processes. We found that the size of the probe, absorption and channelling must all be taken into account for a quantitative interpretation of image intensity. We develop a simple procedure to evaluate the probe-size effect and to obtain a quantitative indication of the absorption coefficient, and point out possible artefacts induced by channelling. With this procedure, the low-voltage approach can be successfully applied to quantitative compositional analysis. The method is then applied to estimate the In content in the core of InGaAs/GaAs core-shell nanowires.
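As a sketch of how an absorption coefficient might be extracted from intensity-thickness data, the toy model below assumes a dark-field intensity proportional to mass-thickness with an exponential absorption term; the model form, the coefficient values and all function names are illustrative assumptions, not the paper's actual procedure.

```python
import math

def df_intensity(t, k=1.0, mu=0.05):
    """Illustrative dark-field intensity model: proportional to
    mass-thickness t, attenuated by absorption with coefficient mu."""
    return k * t * math.exp(-mu * t)

def fit_mu(thicknesses, intensities, k=1.0):
    """Grid-search estimate of the absorption coefficient from
    measured (thickness, intensity) pairs."""
    best_mu, best_err = 0.0, float("inf")
    for i in range(1, 2001):
        mu = i * 1e-4  # scan mu over [0.0001, 0.2]
        err = sum((df_intensity(t, k, mu) - I) ** 2
                  for t, I in zip(thicknesses, intensities))
        if err < best_err:
            best_mu, best_err = mu, err
    return best_mu

# Synthetic data generated with mu = 0.05 should be recovered.
ts = [10, 20, 40, 80, 120]
Is = [df_intensity(t, mu=0.05) for t in ts]
```

On synthetic data the fit recovers the generating coefficient, which is the sanity check one would apply before trusting the fit on real micrographs.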
64.
Nicholas Mattei, Maria Silvia Pini, Francesca Rossi, K. Brent Venable. Annals of Mathematics and Artificial Intelligence, 2013, 68(1-3): 135-160
We investigate the computational complexity of finding optimal bribery schemes in voting domains where the candidate set is the Cartesian product of a set of variables and voters use CP-nets, an expressive and compact way to represent preferences. To do this, we generalize the traditional bribery problem to take into account several issues over which agents vote, and their inter-dependencies. We consider five voting rules, three kinds of bribery actions, and five cost schemes. For most of the combinations of these parameters, we find that bribery in this setting is computationally easy.
65.
Fabio Poiesi, Riccardo Mazzon, Andrea Cavallaro. Computer Vision and Image Understanding, 2013, 117(10): 1257-1272
We propose a generic online multi-target track-before-detect (MT-TBD) tracker that operates on confidence maps used as observations. The proposed tracker is based on particle filtering and initializes tracks automatically. The main novelty is the inclusion of the target ID in the particle state, enabling the algorithm to deal with an unknown and potentially large number of targets. To overcome the problem of mixing the IDs of targets close to each other, we propose a probabilistic model of target birth and death based on a Markov random field (MRF) applied to the particle IDs. Each particle ID is managed using the information carried by neighboring particles. The assignment of IDs to targets is performed using mean-shift clustering and supported by a Gaussian mixture model. We also show that the computational complexity of MT-TBD is proportional only to the number of particles. To compare our method with recent state-of-the-art work, we include a post-processing stage suited to multi-person tracking. We validate the method on real-world, crowded scenarios, and demonstrate its robustness in scenes with different perspective views and targets very close to each other.
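The final step, turning an ID-labelled particle cloud into per-target state estimates, can be illustrated with a minimal sketch. The paper uses mean-shift clustering for this; the simple weighted average below, and all names in it, are a hypothetical stand-in for that step.

```python
from collections import defaultdict

def estimate_targets(particles):
    """Group particles by target ID and return the weight-averaged
    position of each group -- a minimal stand-in for the clustering
    step that turns an ID-labelled particle cloud into target states."""
    groups = defaultdict(list)
    for pid, pos, w in particles:
        groups[pid].append((pos, w))
    estimates = {}
    for pid, members in groups.items():
        total_w = sum(w for _, w in members)
        estimates[pid] = sum(p * w for p, w in members) / total_w
    return estimates

# Toy 1-D cloud: two particles for target 1, one for target 2.
cloud = [(1, 10.0, 0.5), (1, 12.0, 0.5), (2, 40.0, 1.0)]
```

Because the ID travels inside the particle state, no separate data-association step is needed at this point.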
66.
Jeanbourquin D, Sage D, Nguyen L, Schaeli B, Kayal S, Barry DA, Rossi L. Water Science and Technology, 2011, 64(5): 1108-1114
Discharges of combined sewer overflows (CSOs) and stormwater are recognized as an important source of environmental contamination. However, the harsh sewer environment and the particular hydraulic conditions during rain events reduce the reliability of traditional flow-measurement probes. An in situ system for sewer water-flow monitoring based on video images was evaluated. Algorithms to determine water velocities were developed based on image-processing techniques: the image-based water-velocity algorithm identifies surface features and measures their positions with respect to real-world coordinates. A web-based user interface and a three-tier system architecture enable remote configuration of the cameras and of the image-processing algorithms, so that flow velocity is calculated automatically online. Results of investigations conducted in a CSO are presented. The system was found to measure water velocities reliably, thereby providing the means to understand particular hydraulic behaviors.
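The core geometric step, converting the pixel displacement of a tracked surface feature into a velocity, can be sketched as follows; the scale factor, frame interval and function names are illustrative assumptions, not the paper's implementation.

```python
def surface_velocity(p0, p1, scale_m_per_px, dt_s):
    """Speed of one tracked surface feature, from its pixel positions
    (x, y) in two consecutive frames, a pixel-to-metre scale factor,
    and the frame interval in seconds."""
    dx = (p1[0] - p0[0]) * scale_m_per_px
    dy = (p1[1] - p0[1]) * scale_m_per_px
    return ((dx ** 2 + dy ** 2) ** 0.5) / dt_s

def mean_flow_velocity(tracks, scale_m_per_px, dt_s):
    """Crude flow estimate: average the speeds of all tracked features."""
    vs = [surface_velocity(a, b, scale_m_per_px, dt_s) for a, b in tracks]
    return sum(vs) / len(vs)
```

For example, a feature that moves 50 px between frames at 0.01 m/px and 1 s spacing corresponds to 0.5 m/s. In practice the pixel-to-world mapping would come from a camera calibration rather than a single constant.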
67.
Rossi L, Rumley L, Ort C, Minkkinen P, Barry DA, Chèvre N. Water Science and Technology, 2011, 63(12): 2975-2982
Sampling is a key step in the analysis of chemical compounds. It is particularly important in the environmental field, for example for wastewater effluents, wet-weather discharges or streams, in which flows and concentrations vary greatly over time. In contrast to the improvements that have occurred in analytical measurement, developments in the field of sampling are less active. Yet sampling errors may exceed those of the analytical process by an order of magnitude. We propose an Internet-based application, grounded in sampling theory, to identify and quantify the errors made when taking samples. This general theory of sampling, already applied in other areas, helps answer questions about the number of samples, their volume, their representativeness, and so on. Hosting the application on the Internet facilitates the use of these theoretical tools and raises awareness of the uncertainties related to sampling. An example is presented that highlights the importance of the sampling step for the quality of analytical results.
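The order-of-magnitude point about sampling error can be illustrated with a small Monte-Carlo sketch: estimating the mean of a fluctuating concentration signal from a handful of grab samples gives a much larger standard error than estimating it from many. The synthetic diurnal signal and every parameter below are illustrative assumptions, not data from the paper.

```python
import math
import random

def simulate_sampling_error(n_samples, n_trials=2000, seed=1):
    """Monte-Carlo standard error of the mean-concentration estimate
    obtained from n_samples random grab samples of a fluctuating
    stream (illustrative synthetic signal)."""
    rng = random.Random(seed)
    # 1440 one-minute 'true' concentrations with a diurnal swing plus noise.
    signal = [10 + 5 * math.sin(2 * math.pi * t / 1440) + rng.gauss(0, 2)
              for t in range(1440)]
    true_mean = sum(signal) / len(signal)
    sq_errs = []
    for _ in range(n_trials):
        picks = [signal[rng.randrange(1440)] for _ in range(n_samples)]
        sq_errs.append((sum(picks) / n_samples - true_mean) ** 2)
    return (sum(sq_errs) / n_trials) ** 0.5
```

The error shrinks roughly as 1/sqrt(n), which is the kind of question (how many samples, of what volume) the theory of sampling formalizes.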
68.
Rodrigo Segura, Christian Cierpka, Massimiliano Rossi, Sonja Joseph, Heike Bunjes, Christian J. Kähler. Microfluidics and Nanofluidics, 2013, 14(3-4): 445-456
The ever-accelerating pace of technology has powered an increasing interest in heat-transfer solutions and process-engineering innovations in the microfluidics domain. Carrying out such developments requires reliable heat-transfer diagnostic techniques. Thermo-liquid crystal (TLC) thermography, in combination with particle image velocimetry, has for several decades been a widely accepted and commonly used technique for the simultaneous measurement and characterization of temperature and velocity fields in macroscopic fluid flows. However, low seeding density, volume illumination, and low TLC particle image quality at high magnifications present unsurpassed challenges to its application to three-dimensional flows with microscopic dimensions. In this work, a measurement technique to evaluate the color response of individual non-encapsulated TLC particles is presented. A Shirasu porous glass membrane emulsification approach was used to produce non-encapsulated TLC particles with a narrow size distribution, and a multi-variable calibration procedure, making use of all three RGB and HSI color components as well as the RGB components obtained by proper orthogonal decomposition, was used to achieve unprecedentedly low uncertainty in the temperature estimation of individual particles, opening the door to simultaneous temperature and velocity tracking using 3D velocimetry techniques.
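A color-to-temperature calibration ultimately maps color readings to temperature. The minimal sketch below fits a single linear hue-to-temperature relation by ordinary least squares; the calibration points and the one-variable model are hypothetical simplifications of the paper's multi-variable procedure.

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a*x + b, used here to map a
    TLC particle's hue reading to a temperature."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical calibration points: (hue, temperature in degrees C).
hues = [0.10, 0.20, 0.30, 0.40]
temps = [25.0, 27.0, 29.0, 31.0]
a, b = fit_linear(hues, temps)
```

Once calibrated, each particle's temperature estimate is just `a * hue + b`; the paper's multi-variable version regresses on several color components at once to drive the uncertainty down.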
69.
Francesco Bellotti, Riccardo Berta, Massimiliano Margarone, Alessandro De Gloria. Software, 2008, 38(12): 1241-1259
RFID technology is becoming ever more popular in the development of ubiquitous computing applications. Fully exploiting the potential of RFID requires studying and implementing human-computer interaction (HCI) modalities that support wide usability by the target audience. This implies the need for programming methodologies specifically dedicated to easy and efficient prototyping of applications, so that feedback can be gathered from early tests with users. On the basis of our field-work experience, we have designed oDect, a high-level, language- and platform-independent application programming interface (API) designed ad hoc to meet the needs of typical applications for mobile devices (smartphones and PDAs). oDect allows application developers to create prototypes while focusing on the needs of the final users, without having to care about the low-level software that interacts with the RFID hardware. Further, in an end-user development (EUD) approach, oDect provides specific support for the application end-user herself to cope with typical problems of RFID applications in detecting objects. We describe the features of the API in detail and discuss the findings of a test with four programmers, in which we analyse and evaluate the use of the API in four sample applications. We also present the results of an end-user test, which investigated the strengths and weaknesses of the territorial agenda (TA) concept. The TA is an RFID-based citizen guide that aids users in their daily activities in a city through time- and location-based reminders. The TA directly exploits the EUD features of oDect, in particular the possibility of linking detected objects with custom actions. Copyright © 2008 John Wiley & Sons, Ltd.
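The abstract does not show oDect's actual API, but the general pattern of a high-level detection interface, in which applications register callbacks per object and never touch the reader hardware directly, can be sketched as follows. All class and method names here are hypothetical, not the real oDect API.

```python
class ObjectDetector:
    """Hypothetical high-level detection API in the spirit described
    above: applications register per-object callbacks and the low-level
    RFID read loop is hidden behind the interface."""

    def __init__(self):
        self._handlers = {}

    def on_detect(self, object_id, callback):
        """Register a callback to run whenever object_id is detected."""
        self._handlers.setdefault(object_id, []).append(callback)

    def _tag_read(self, object_id):
        """Simulate a tag read; in a real system this would be driven
        by the RFID hardware layer, not called by the application."""
        for cb in self._handlers.get(object_id, []):
            cb(object_id)

# Usage sketch: link a detected object to a custom action.
seen = []
det = ObjectDetector()
det.on_detect("keys", seen.append)
det._tag_read("keys")         # registered: callback fires
det._tag_read("unknown-tag")  # unregistered: silently ignored
```

Linking detected objects to custom actions, as the TA does, reduces in this pattern to registering one callback per object of interest.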
70.
Boosting text segmentation via progressive classification (total citations: 5; self-citations: 4; citations by others: 1)
Eugenio Cesario, Francesco Folino, Antonio Locane, Giuseppe Manco, Riccardo Ortale. Knowledge and Information Systems, 2008, 15(3): 285-320
A novel approach for reconciling tuples stored as free text into an existing attribute schema is proposed. The basic idea is to subject the available text to progressive classification, i.e., a multi-stage classification scheme where, at each intermediate stage, a classifier is learnt that analyzes the textual fragments not reconciled at the end of the previous steps. Classification is accomplished by an ad hoc exploitation of traditional association-mining algorithms, and is supported by a data-transformation scheme which takes advantage of domain-specific dictionaries/ontologies. A key feature is the capability of progressively enriching the available ontology with the results of the previous stages of classification, thus significantly improving the overall classification accuracy. An extensive experimental evaluation shows the effectiveness of our approach.
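The progressive scheme described above can be sketched as a pipeline of stages, each labelling only the fragments it can resolve and passing the rest on to the next stage. The toy stages and dictionary below are illustrative assumptions, not the paper's association-mining classifiers.

```python
def progressive_classify(fragments, stages):
    """Multi-stage classification in the spirit described above: each
    stage is a function fragment -> label or None; fragments a stage
    cannot resolve are handed to the next stage."""
    labels = {}
    remaining = list(fragments)
    for stage in stages:
        unresolved = []
        for frag in remaining:
            label = stage(frag)
            if label is None:
                unresolved.append(frag)
            else:
                labels[frag] = label
        remaining = unresolved
    return labels, remaining

# Toy stages: exact dictionary lookup first, a keyword heuristic second.
dictionary = {"main st 12": "address"}
stages = [
    lambda f: dictionary.get(f),
    lambda f: "phone" if any(c.isdigit() for c in f) and "-" in f else None,
]
out, left = progressive_classify(["main st 12", "555-1234", "john doe"], stages)
```

In the paper's version, later stages also feed their results back into the dictionary/ontology, so each stage sees a richer vocabulary than the one before it.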