Full-text availability
Paid full text | 100369 articles |
Free | 12470 articles |
Free (domestic) | 7304 articles |
Subject classification
Industrial technology | 120143 articles |
Publication year
2024 | 485 articles |
2023 | 1436 articles |
2022 | 2894 articles |
2021 | 3164 articles |
2020 | 3487 articles |
2019 | 2654 articles |
2018 | 2703 articles |
2017 | 3507 articles |
2016 | 4230 articles |
2015 | 4575 articles |
2014 | 6707 articles |
2013 | 6641 articles |
2012 | 8297 articles |
2011 | 8781 articles |
2010 | 6334 articles |
2009 | 6404 articles |
2008 | 6013 articles |
2007 | 7437 articles |
2006 | 6429 articles |
2005 | 5229 articles |
2004 | 4428 articles |
2003 | 3464 articles |
2002 | 2746 articles |
2001 | 2332 articles |
2000 | 1864 articles |
1999 | 1504 articles |
1998 | 1213 articles |
1997 | 990 articles |
1996 | 893 articles |
1995 | 701 articles |
1994 | 612 articles |
1993 | 411 articles |
1992 | 335 articles |
1991 | 265 articles |
1990 | 225 articles |
1989 | 175 articles |
1988 | 111 articles |
1987 | 74 articles |
1986 | 82 articles |
1985 | 35 articles |
1984 | 43 articles |
1983 | 32 articles |
1982 | 35 articles |
1981 | 18 articles |
1980 | 22 articles |
1979 | 24 articles |
1978 | 12 articles |
1977 | 12 articles |
1959 | 24 articles |
1951 | 29 articles |
Sort order: 10000 results found; search took 15 ms
991.
Designing a UAV airspeed measurement system involves functional partitioning, hardware and software design, PCB fabrication, and integrated debugging; whether the result meets the design requirements is usually not known until both hardware and software have been completed and debugged. Because the design involves many interdependent steps, it rarely succeeds on the first attempt. Proteus is a tool for virtual system development entirely decoupled from the hardware platform; combined with Keil C51, it allows the whole system to be designed conveniently. During simulation, the design can be modified at any time and the hardware/software partitioning adjusted to improve performance and reduce system error. Using simulation software to verify and optimize each stage of the design and to test system performance makes it possible to spot bottlenecks early, optimize the hardware and software design, raise efficiency, and shorten the development cycle.
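The measurement principle behind such a pitot-based airspeed sensor can be sketched independently of the 8051 firmware. The following minimal Python sketch (the function name, the sea-level density constant, and the example pressure are illustrative assumptions, not taken from the abstract) computes indicated airspeed from differential pressure via the incompressible Bernoulli relation v = sqrt(2Δp/ρ):

```python
import math

def airspeed_from_pitot(delta_p_pa, air_density=1.225):
    """Indicated airspeed (m/s) from pitot-static differential pressure (Pa),
    using incompressible Bernoulli: v = sqrt(2 * dp / rho).
    Default density is the sea-level standard atmosphere value (kg/m^3)."""
    if delta_p_pa < 0:
        raise ValueError("differential pressure must be non-negative")
    return math.sqrt(2.0 * delta_p_pa / air_density)

# Example: 612.5 Pa of differential pressure at sea level ~ 31.6 m/s
v = airspeed_from_pitot(612.5)
```

In a real system this conversion would run on the microcontroller after ADC sampling of the pressure transducer; in a Proteus simulation it is exactly the arithmetic the firmware must reproduce.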
992.
To meet the temperature-control requirements of a medical forced-air warming blanket, a closed-loop temperature-control architecture is proposed. The hardware consists mainly of a temperature sensor, a microcontroller, and an RS232 communication module. PWM is used to control the conduction time of a thyristor, and thereby the on-time of the heating wire, while an adaptive fuzzy PID method automatically regulates the outlet air temperature of the blanket's air cell. The control process is divided into two stages: in the initial stage the heating wire runs at a 100% duty cycle; once the temperature reaches a threshold, control switches to the adaptive fuzzy PID stage, in which the controller output determines the heating duty cycle and thus regulates the air temperature. MATLAB simulation shows that the adaptive fuzzy PID method outperforms conventional PID control, exhibiting good adaptability and robustness and a clear improvement in steady-state accuracy.
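The two-stage structure described above can be sketched in a few lines. Note the hedges: a plain PID stands in for the paper's adaptive fuzzy PID, and the class name, switching margin, and gains are all illustrative assumptions:

```python
class TwoStageHeaterController:
    """Two-stage duty-cycle controller: full power until the temperature
    nears the setpoint, then PID. (A plain PID stands in here for the
    paper's adaptive fuzzy PID; all gains are illustrative.)"""

    def __init__(self, setpoint, switch_margin=5.0, kp=0.05, ki=0.01, kd=0.02):
        self.setpoint, self.switch_margin = setpoint, switch_margin
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_error = 0.0, None

    def duty_cycle(self, temperature):
        """Return the heating-wire duty cycle in [0, 1] for one sample."""
        error = self.setpoint - temperature
        if error > self.switch_margin:        # stage 1: heat flat out
            return 1.0
        self.integral += error                # stage 2: PID regulation
        deriv = 0.0 if self.prev_error is None else error - self.prev_error
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * deriv
        return max(0.0, min(1.0, u))          # clamp to a valid duty cycle
```

On the real hardware the returned duty cycle would set the PWM register driving the thyristor; the clamping mirrors the fact that heating power cannot be negative or exceed 100%.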
993.
Jake Cobb 《Journal of Systems and Software》2008,81(9):1539-1558
Web proxy caches are used to reduce the strain of contemporary web traffic on web servers and network bandwidth providers. In this research, a novel approach to web proxy cache replacement which utilizes neural networks for replacement decisions is developed and analyzed. Neural networks are trained to classify cacheable objects from real world data sets using information known to be important in web proxy caching, such as frequency and recency. Correct classification ratios between 0.85 and 0.88 are obtained both for data used for training and data not used for training. Our approach is compared with Least Recently Used (LRU), Least Frequently Used (LFU) and the optimal case which always rates an object with the number of future requests. Performance is evaluated in simulation for various neural network structures and cache conditions. The final neural networks achieve hit rates that are 86.60% of the optimal in the worst case and 100% of the optimal in the best case. Byte-hit rates are 93.36% of the optimal in the worst case and 99.92% of the optimal in the best case. We examine the input-to-output mappings of individual neural networks and analyze the resulting caching strategy with respect to specific cache conditions.
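The two classical baselines the paper compares against, LRU and LFU, are simple enough to simulate directly. The following sketch (illustrative function names; not the paper's simulator) measures hit rate over a request trace:

```python
from collections import Counter, OrderedDict

def simulate_lru(requests, capacity):
    """Hit rate of a Least-Recently-Used cache over a request trace."""
    cache, hits = OrderedDict(), 0
    for obj in requests:
        if obj in cache:
            hits += 1
            cache.move_to_end(obj)            # mark as most recently used
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)     # evict least recently used
            cache[obj] = True
    return hits / len(requests)

def simulate_lfu(requests, capacity):
    """Hit rate of a Least-Frequently-Used cache over a request trace."""
    cache, freq, hits = set(), Counter(), 0
    for obj in requests:
        freq[obj] += 1
        if obj in cache:
            hits += 1
        else:
            if len(cache) >= capacity:
                cache.remove(min(cache, key=lambda o: freq[o]))
            cache.add(obj)
    return hits / len(requests)
```

A neural replacement policy would slot into the same loop, replacing the eviction rule (`popitem` / `min` by frequency) with the network's predicted re-reference score for each cached object.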
994.
The influence of the size of the de-jitter buffer of a receiving router and of the network load on the transmission quality of voice traffic is investigated by means of simulation modeling. The optimal size of the de-jitter buffer that supports an acceptable level of loss of voice packets and a propagation time of the voice signal that corresponds to accepted standards is determined.
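The trade-off the abstract studies, a larger de-jitter buffer reduces late-packet loss but adds playout delay, can be reproduced with a toy model. This sketch assumes a uniform jitter distribution, a 20 ms packetization period, and a strict playout deadline; all of these are simplifying assumptions, not the paper's model:

```python
import random

def dejitter_loss_rate(n_packets, buffer_ms, jitter_ms, period_ms=20.0, seed=1):
    """Fraction of voice packets that miss their playout deadline.
    Packet i is sent at i*period and scheduled to play at i*period + buffer_ms;
    it is counted lost if network jitter delays its arrival past that deadline.
    (Toy model: uniform jitter in [0, jitter_ms].)"""
    rng = random.Random(seed)
    lost = 0
    for i in range(n_packets):
        arrival = i * period_ms + rng.uniform(0.0, jitter_ms)
        playout = i * period_ms + buffer_ms
        if arrival > playout:
            lost += 1
    return lost / n_packets
```

Sweeping `buffer_ms` in such a model exposes the knee the paper looks for: beyond some buffer depth, further delay buys almost no loss reduction.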
995.
A general topology-based framework for adaptive insertion of cohesive elements in finite element meshes (cited 1 time: 1 self-citation, 0 by others)
Glaucio H. Paulino, Waldemar Celes, Rodrigo Espinha, Zhengyu Zhang 《Engineering with Computers》2008,24(1):59-78
Large-scale simulation of separation phenomena in solids such as fracture, branching, and fragmentation requires a scalable data structure representation of the evolving model. Modeling of such phenomena can be successfully accomplished by means of cohesive models of fracture, which are versatile and effective tools for computational analysis. A common approach to insert cohesive elements in finite element meshes consists of adding discrete special interfaces (cohesive elements) between bulk elements. The insertion of cohesive elements along bulk element interfaces for fragmentation simulation imposes changes in the topology of the mesh. This paper presents a unified topology-based framework for supporting adaptive fragmentation simulations, able to handle two- and three-dimensional models with finite elements of any order. We represent the finite element model using a compact and "complete" topological data structure, which is capable of retrieving all adjacency relationships needed for the simulation. Moreover, we introduce a new topology-based algorithm that systematically classifies fractured facets (i.e., facets along which fracture has occurred). The algorithm follows a set of procedures that consistently perform all the topological changes needed to update the model. The proposed topology-based framework is general and ensures that the model representation remains valid during fragmentation, even when very complex crack patterns are involved. The framework's correctness and efficiency are illustrated by arbitrary insertion of cohesive elements in various finite element meshes of self-similar geometries, including both two- and three-dimensional models. These computational tests clearly show linear scaling in time, which is a key feature of the present data-structure representation. The effectiveness of the proposed approach is also demonstrated by dynamic fracture analysis through finite element simulations of actual engineering problems.
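The core topological operation, duplicating the nodes of a shared facet so that a zero-thickness cohesive element can sit between two bulk elements, can be sketched on a minimal two-triangle mesh. The data structures here are deliberately naive lists (the paper uses a full adjacency-complete topological structure), and the function name is an illustrative assumption:

```python
def insert_cohesive_element(nodes, elements, facet):
    """Insert a zero-thickness cohesive element along `facet` (a pair of
    node ids shared by exactly two triangles) by duplicating the facet
    nodes for the second element. Minimal sketch only: real frameworks
    drive this from complete adjacency data structures."""
    a, b = facet
    adjacent = [e for e in elements if a in e and b in e]
    assert len(adjacent) == 2, "facet must be interior (shared by 2 elements)"
    second = adjacent[1]
    a2 = len(nodes); nodes.append(list(nodes[a]))   # duplicate coordinates
    b2 = len(nodes); nodes.append(list(nodes[b]))
    elements[elements.index(second)] = [
        a2 if n == a else b2 if n == b else n for n in second]
    return [a, b, b2, a2]           # 4-node zero-thickness interface element

nodes = [[0, 0], [1, 0], [1, 1], [0, 1]]
elements = [[0, 1, 2], [0, 2, 3]]   # two triangles sharing edge (0, 2)
coh = insert_cohesive_element(nodes, elements, (0, 2))
```

After the call the two triangles no longer share nodes along the fractured edge; the returned cohesive element connects the original and duplicated node pairs, which is exactly the topological change the paper's framework performs systematically over classified fractured facets.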
996.
We present a formal approach to study the evolution of biological networks. We use the Beta Workbench and its BlenX language to model and simulate networks in connection with evolutionary algorithms. Mutations are done on the structure of BlenX programs and networks are selected at any generation by using a fitness function. The feasibility of the approach is illustrated with a simple example.
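The mutate-then-select loop the abstract describes is generic and can be sketched without BlenX. In this sketch the genome is a plain bit-vector standing in for network structure, selection is a simple (1+1)-style parent-vs-child comparison, and every name and the fitness function are illustrative assumptions:

```python
import random

def evolve(population, fitness, mutate, generations, seed=0):
    """Generic evolutionary loop: mutate each genome and keep the fitter
    of parent vs child. (Illustrative (1+1)-style selection; the Beta
    Workbench mutates BlenX program structure instead of bit-vectors.)"""
    rng = random.Random(seed)
    for _ in range(generations):
        population = [max(p, mutate(p, rng), key=fitness) for p in population]
    return population

def mutate_bits(bits, rng):
    """Flip one randomly chosen bit (stand-in for a structural mutation)."""
    i = rng.randrange(len(bits))
    return bits[:i] + (1 - bits[i],) + bits[i + 1:]

# Toy fitness: reward networks with exactly 4 of 8 possible edges enabled.
fitness = lambda bits: -abs(sum(bits) - 4)
pop = evolve([(0,) * 8] * 5, fitness, mutate_bits, generations=200)
```

Replacing the bit-vector with a program representation and `mutate_bits` with structural edits on that program recovers the shape of the paper's approach.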
997.
A residual-based moving block bootstrap procedure for testing the null hypothesis of linear cointegration versus cointegration with threshold effects is proposed. When the regressors and errors of the models are serially and contemporaneously correlated, our test compares favourably with the Sup LM test proposed by Gonzalo and Pitarakis. Indeed, shortcomings of the former motivated the development of our test. The small sample performance of the bootstrap test is investigated by Monte Carlo simulations, and the results show that the test performs better than the Sup LM test.
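The resampling step at the heart of a moving block bootstrap, drawing overlapping blocks of consecutive observations with replacement to preserve short-range dependence, can be sketched as follows (resampling step only; the paper's residual construction and test statistic are not reproduced, and the function name is an assumption):

```python
import random

def moving_block_bootstrap(series, block_length, seed=None):
    """One moving-block bootstrap resample: draw overlapping blocks of
    `block_length` consecutive observations uniformly with replacement
    and concatenate until the original series length is reached."""
    rng = random.Random(seed)
    n = len(series)
    n_starts = n - block_length + 1      # number of overlapping block starts
    out = []
    while len(out) < n:
        s = rng.randrange(n_starts)
        out.extend(series[s:s + block_length])
    return out[:n]                       # trim the final partial block

resample = moving_block_bootstrap(list(range(100)), block_length=10, seed=3)
```

In a residual-based test, `series` would be the residuals of the fitted cointegrating regression, and the statistic would be recomputed on each resample to build its null distribution.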
998.
Yunfeng Gu, Azzedine Boukerche, Regina B. Araujo 《Journal of Parallel and Distributed Computing》2008
Data distribution management (DDM) plays a key role in traffic control for large-scale distributed simulations. In recent years, several solutions have been devised to make DDM more efficient and adaptive to different traffic conditions. Examples of such systems include the region-based, fixed grid-based, and dynamic grid-based (DGB) schemes, as well as grid-filtered region-based and agent-based DDM schemes. However, less effort has been directed toward improving the processing performance of DDM techniques. This paper presents a novel DDM scheme called the adaptive dynamic grid-based (ADGB) scheme that optimizes DDM time through the analysis of matching performance. ADGB uses an advertising scheme in which information about the target cell involved in the process of matching subscribers to publishers is known in advance. An important concept known as the distribution rate (DR) is devised. The DR represents the relative processing load and communication load generated at each federate. The DR and the matching performance are used as part of the ADGB method to select, throughout the simulation, the devised advertisement scheme that achieves the maximum gain with acceptable network traffic overhead. If we assume the same worst case propagation delays, when the matching probability is high, the performance estimation of ADGB has shown that a maximum efficiency gain of 66% can be achieved over the DGB scheme. The novelty of the ADGB scheme is its focus on improving performance, an important (and often forgotten) goal of DDM strategies.
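The fixed-grid matching that all the grid-based schemes build on can be sketched directly: map each region to the cells it overlaps, then match publishers and subscribers that share a cell. This sketch is the plain fixed-grid baseline only (names and the example regions are assumptions); ADGB layers advertising and load-based scheme selection on top of it:

```python
def grid_matches(subscriptions, publications, cell_size):
    """Fixed-grid DDM matching: each region (axis-aligned rectangle as
    ((x0, y0), (x1, y1))) is mapped to the grid cells it overlaps, and a
    (publisher, subscriber) pair matches when they share a cell."""
    def cells(region):
        (x0, y0), (x1, y1) = region
        return {(cx, cy)
                for cx in range(int(x0 // cell_size), int(x1 // cell_size) + 1)
                for cy in range(int(y0 // cell_size), int(y1 // cell_size) + 1)}

    cell_to_subs = {}
    for sid, region in subscriptions.items():
        for c in cells(region):
            cell_to_subs.setdefault(c, set()).add(sid)

    matches = set()
    for pid, region in publications.items():
        for c in cells(region):
            for sid in cell_to_subs.get(c, ()):
                matches.add((pid, sid))
    return matches

subs = {"S1": ((0, 0), (5, 5)), "S2": ((20, 20), (25, 25))}
pubs = {"P1": ((4, 4), (6, 6))}
```

Note that cell-level matching can report false positives (regions sharing a cell without actually overlapping); coarser cells mean fewer cells to process but more such spurious matches, which is the trade-off adaptive schemes like DGB and ADGB manage.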
999.
Behaviour based on decision matrices for a coordination between agents in an urban traffic simulation
René Mandiau, Alexis Champion, Jean-Michel Auberlet, Stéphane Espié, Christophe Kolski 《Applied Intelligence》2008,28(2):121-138
This paper describes a multi-agent coordination mechanism applied to intersection simulation situations. For urban traffic simulation, the dynamic interactions between autonomous vehicles must be considered. The field of multi-agent systems provides several studies of such systems, in particular on coordination mechanisms. Conflicts between vehicles (i.e. agents) are very frequent in such applications, and they may cause deadlocks, particularly at intersections such as crossroads. Our approach is based on solving two-player games (decision matrices) that characterize three basic situations. An aggregation method generalizes these to n-player games for complex crossroads. The objective of this approach is to use basic two-player matrices to solve n-agent problems. To explain the principle, we describe our approach for a particular case of a crossroad with three agents. Finally, the results obtained have been examined with the road traffic simulation tool ARCHISIM. We also assume that the global traffic replicates the behavior of the agents in different situations.
1000.
Ian Streeter, Gregory G. Wildgoose, Lidong Shao, Richard G. Compton 《Sensors and actuators. B, Chemical》2008,133(2):462-466
Cyclic voltammetry of the oxidation of ferrocyanide is recorded at a glassy carbon electrode modified with multiple layers of single-walled carbon nanotubes. The current response is interpreted in terms of semi-infinite planar diffusion towards the macro-electrode surface and of oxidation of the electroactive species trapped in pockets between the nanotubes. A thin-layer model is used to illustrate the effects of diffusion within a porous layer. It is found that a semi-infinite planar diffusion model alone is not appropriate for interpreting the kinetics of electron transfer at this electrode surface. In particular, caution should be exercised when comparing voltammetric peak-to-peak potential separations between bare electrodes and nanotube-modified electrodes for the inference of electrocatalysis mediated by electron transfer through the nanotubes.
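The diagnostic distinction the abstract draws rests on two textbook peak-current expressions: for reversible semi-infinite planar diffusion the peak current scales with the square root of scan rate (Randles-Sevcik), whereas for an exhaustively electrolyzed thin layer it scales linearly with scan rate. Both formulas below are standard electrochemistry results; the function names and the CGS-style units are assumptions of this sketch:

```python
import math

def randles_sevcik_peak(n, area_cm2, conc_mol_cm3, D_cm2_s, scan_v_s):
    """Peak current (A) for a reversible couple under semi-infinite
    planar diffusion at 25 C: i_p = 2.69e5 * n^1.5 * A * C * sqrt(D * v).
    Units: area cm^2, concentration mol/cm^3, D cm^2/s, scan rate V/s."""
    return 2.69e5 * n**1.5 * area_cm2 * conc_mol_cm3 * math.sqrt(D_cm2_s * scan_v_s)

def thin_layer_peak(n, volume_cm3, conc_mol_cm3, scan_v_s, T=298.15):
    """Peak current (A) for exhaustive thin-layer voltammetry:
    i_p = n^2 * F^2 * V * C * v / (4 * R * T) -- all trapped material reacts."""
    F, R = 96485.0, 8.314
    return n**2 * F**2 * volume_cm3 * conc_mol_cm3 * scan_v_s / (4 * R * T)
```

Comparing how measured peak currents grow with scan rate (sqrt(v) versus v) is one way to tell whether the response at a nanotube-modified electrode is diffusion-dominated or thin-layer-dominated, which is exactly why a diffusion-only model can mislead here.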