41.
In the theory of graph rewriting, the use of coalescing rules, i.e., rules which, besides deleting and generating graph items, can coalesce parts of the graph, turns out to be quite useful for modelling purposes but, at the same time, problematic for the development of a satisfactory partial-order concurrent semantics for rewrites. Rewriting over graphs with equivalences, i.e., (typed hyper-)graphs equipped with an equivalence over nodes, provides a technically convenient replacement for graph rewriting with coalescing rules, for which a truly concurrent semantics can easily be defined. The expressiveness of this formalism is tested in a setting where coalescing rules typically play a basic role: the encoding of calculi with name passing as graph rewriting systems. Specifically, we show how the monadic fragment of the solo calculus, one of the dialects of those calculi whose distinctive feature is name fusion, can be encoded as a rewriting system over graphs with equivalences.
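As a rough illustration only (not the paper's formalism), the following Python sketch shows one natural way to represent a hypergraph whose nodes carry an equivalence: a union-find structure over node names, so that a coalescing step becomes a cheap union rather than a destructive merge of nodes and incident edges. All class and function names are invented for this example.

# Illustrative sketch (not the paper's formalism): a typed hypergraph whose
# nodes carry an equivalence relation, maintained as a union-find structure.
# Coalescing two nodes becomes a cheap "union" instead of a destructive merge.

class GraphWithEquivalence:
    def __init__(self):
        self.parent = {}          # union-find forest over node names
        self.edges = []           # hyperedges: (label, tuple of attached nodes)

    def add_node(self, v):
        self.parent.setdefault(v, v)

    def add_edge(self, label, *nodes):
        for v in nodes:
            self.add_node(v)
        self.edges.append((label, nodes))

    def find(self, v):            # representative of v's equivalence class
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]   # path halving
            v = self.parent[v]
        return v

    def coalesce(self, u, v):     # the effect of a coalescing rule on nodes
        self.parent[self.find(u)] = self.find(v)

    def quotient_edges(self):     # edges as seen "up to" the equivalence
        return [(lbl, tuple(self.find(v) for v in vs)) for lbl, vs in self.edges]


g = GraphWithEquivalence()
g.add_edge("msg", "x", "y")
g.coalesce("x", "y")              # fuse the two names without rewriting edges
print(g.quotient_edges())         # [('msg', ('y', 'y'))]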
42.
We address the problem of specializing a constraint logic program with respect to a constrained atom which specifies the context in which the program is used. We follow an approach based on transformation rules and strategies. We introduce a novel transformation rule, called contextual constraint replacement, to be combined with variants of the traditional unfolding and folding rules. We present a general Partial Evaluation Strategy for automating the application of these rules, together with two additional strategies: the Context Propagation Strategy, which is instrumental for the application of our contextual constraint replacement rule, and the Invariant Promotion Strategy, which takes advantage of invariance properties of the computation. We show through some examples the power of our method and compare it with existing methods for partial deduction of constraint logic programs based on extensions of Lloyd and Shepherdson's approach.
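The paper works in constraint logic programming; purely as an intuition for what specialization with respect to a known context buys, here is a sketch in a much simpler functional setting (the classic power-function example of partial evaluation), where unfolding a recursion against a statically known argument yields a residual, specialized program. Everything in this snippet is illustrative and is not the paper's transformation system.

# Intuition-only sketch: specializing a program w.r.t. static knowledge about
# its context of use (here, a fixed exponent), in the spirit of partial
# evaluation. The CLP-specific rules of the paper are not reproduced.

def power(x, n):                      # the general program
    return 1 if n == 0 else x * power(x, n - 1)

def specialize_power(n):
    """Unfold the recursion for a statically known n, producing source code
    for a residual program that only depends on the dynamic input x."""
    body = " * ".join(["x"] * n) if n > 0 else "1"
    src = f"def power_{n}(x):\n    return {body}\n"
    env = {}
    exec(src, env)                    # compile the residual program
    return env[f"power_{n}"], src

power_5, src = specialize_power(5)
print(src)                            # def power_5(x): return x * x * x * x * x
assert power_5(2) == power(2, 5) == 32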
43.
In the biomechanical literature, only a few studies focus on the determination of joint loading within the lower extremities in snowboarding. These studies are limited to analyses in a restricted capture volume because they rely on optical video-based systems. To overcome this restriction, the aim of the present study was to develop a method for determining net joint moments within the lower extremities in snowboarding over complete measurement runs. An experienced snowboarder performed several runs equipped with two custom-made force plates and a full-body inertial measurement system. A rigid multi-segment model was developed to describe the motion and loads within the lower extremities. The model is based on an existing lower-body model and is designed to be run with the OpenSim software package. Measured kinetic and kinematic data were imported into OpenSim and inverse-dynamic calculations were performed. The results illustrate the potential of the developed method for determining joint loading within the lower extremities over complete measurement runs in a real snowboarding environment. The calculated net joint moments of force are reasonable in comparison with data presented in the literature. The low variation of the data between different turns indicates good reliability of the method. Because the accuracy of the method is unknown, its application to inter-individual studies and to studies of injury mechanisms may be limited. For intra-individual studies comparing different snowboarding techniques and different snowboard equipment, the method appears beneficial. The validity of the method needs to be studied further.
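The paper relies on a full OpenSim multi-segment model; the following planar, single-segment sketch is only meant to illustrate the inverse-dynamics step itself, i.e. how a net joint moment falls out of the Newton-Euler equations once force-plate and kinematic data are available. Segment parameters, measured values and variable names are placeholders, not data from the study.

import numpy as np

# Minimal planar inverse-dynamics sketch for one segment (the foot):
# net ankle moment from ground reaction force (force plate) and segment
# kinematics (IMU-derived). Not the paper's OpenSim pipeline; all values
# below are illustrative placeholders.

m = 1.1                    # segment mass [kg]
I = 0.004                  # moment of inertia about the segment COM [kg m^2]
g = np.array([0.0, -9.81]) # gravity [m/s^2]

r_com   = np.array([0.10, 0.03])   # COM position [m]
r_ankle = np.array([0.15, 0.08])   # ankle joint centre [m]
r_cop   = np.array([0.05, 0.00])   # centre of pressure on the force plate [m]

a_com   = np.array([0.4, 1.2])     # COM linear acceleration [m/s^2]
alpha   = 3.0                      # segment angular acceleration [rad/s^2]

F_grf   = np.array([40.0, 620.0])  # ground reaction force [N]
M_free  = 2.5                      # free moment from the plate [N m]

def cross2(a, b):                  # 2-D cross product (returns a scalar)
    return a[0] * b[1] - a[1] * b[0]

# Newton: joint force at the ankle closes the linear equation of motion.
F_ankle = m * a_com - m * g - F_grf

# Euler about the COM: net ankle moment closes the angular equation.
M_ankle = (I * alpha
           - M_free
           - cross2(r_cop - r_com, F_grf)
           - cross2(r_ankle - r_com, F_ankle))

print(F_ankle, M_ankle)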
44.
The problem of line breaking consists of finding the best way to split paragraphs into lines. It has been cleverly addressed by the total-fit algorithm presented by Knuth and Plass in a well-known paper. Similarly, page-breaking algorithms break the content flow of a document into page units. Formatting languages—such as the World Wide Web Consortium standard Extensible Stylesheet Language Formatting Objects (XSL-FO)—allow users to set which content should be kept on the same page and how many isolated lines are acceptable at the beginning or end of each page. The strategies most formatters adopt to meet these requirements, however, are not satisfactory in many publishing contexts, as they very often generate unpleasant empty areas. In that case, typographers are required to craft the results manually in order to fill pages completely. This paper presents a page-breaking algorithm that extends the original Knuth and Plass line-breaking approach and produces high-quality documents without unwanted empty areas. The basic idea consists of delaying the definitive choice of breaks in the line-breaking process in order to provide a larger set of alternatives to the actual pagination step. The algorithm also allows users to decide which set of properties is adjusted for pagination and their variation ranges. An application of the algorithm to XSL-FO is also presented, with an extension of the language that allows users to drive the pagination process. The tool, named FOP+, is a customized version of the open-source Apache Formatting Objects Processor formatter. Copyright © 2011 John Wiley & Sons, Ltd.
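As a sketch of the underlying idea, the following Python snippet implements a minimal total-fit-style dynamic program over breakpoints: each candidate line is scored by a power of its leftover space, and the split with the lowest total score wins. The real Knuth-Plass algorithm additionally handles glue, penalties, hyphenation and demerits, and the pagination extension described in the paper is not reproduced here; the width and the test sentence are arbitrary.

# Minimal dynamic-programming line-breaking sketch in the spirit of the
# Knuth-Plass total-fit approach: choose breakpoints minimising the total
# badness over all lines (the real algorithm uses glue, penalties and
# demerits; this is only the core idea, with illustrative names).

def break_lines(words, line_width):
    n = len(words)
    INF = float("inf")

    def cost(i, j):                     # badness of putting words[i:j] on one line
        length = sum(len(w) for w in words[i:j]) + (j - i - 1)  # words + spaces
        if length > line_width:
            return INF
        if j == n:                      # last line: no stretch penalty
            return 0
        return (line_width - length) ** 3

    best = [INF] * (n + 1)              # best[j]: minimal total cost of words[:j]
    best[0], prev = 0, [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            c = cost(i, j)
            if c < INF and best[i] + c < best[j]:
                best[j], prev[j] = best[i] + c, i

    lines, j = [], n                    # recover the chosen breakpoints
    while j > 0:
        i = prev[j]
        lines.append(" ".join(words[i:j]))
        j = i
    return lines[::-1]

text = "In olden times when wishing still helped one there lived a king".split()
print("\n".join(break_lines(text, 20)))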
45.
We present an importance sampling method for the bidirectional scattering distribution function (BSDF) of hair. Our method is based on the multi-lobe hair scattering model presented by Sadeghi et al. [SPJT10]. We reduce noise by drawing samples from a distribution that approximates the BSDF well. Our algorithm is efficient and easy to implement, since the sampling process requires only the evaluation of a few analytic functions, with no significant memory overhead or need for precomputation. We tested our method in a research ray tracer and in a production renderer based on micropolygon rasterization. We show significant improvements for rendering direct illumination using multiple importance sampling and for rendering indirect illumination using path tracing.
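As a generic illustration of lobe-based importance sampling (not the specific Sadeghi et al. model), the sketch below selects one of several analytic longitudinal lobes in proportion to its energy, samples an outgoing angle from that lobe, and returns the mixture pdf needed for Monte Carlo and multiple importance sampling weights. Lobe parameters are placeholders.

import math, random

# Generic sketch of lobe-based importance sampling: pick one of several
# analytic lobes in proportion to its energy, draw the longitudinal angle
# from that lobe's (here Gaussian) distribution, and return the pdf needed
# for the Monte Carlo weight. The lobe parameters below are placeholders,
# not values from the hair model cited in the abstract.

LOBES = [                      # (energy weight, mean shift, width) per lobe
    (0.6, -0.05, 0.08),        # e.g. an R-like lobe
    (0.3,  0.10, 0.15),        # e.g. a TT-like lobe
    (0.1,  0.02, 0.30),        # e.g. a TRT-like lobe
]

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def sample_longitudinal(theta_i):
    """Sample an outgoing angle theta_o and return (theta_o, pdf(theta_o))."""
    u = random.random()                         # select a lobe by its energy
    acc = 0.0
    for w, shift, sigma in LOBES:
        acc += w
        if u <= acc:
            theta_o = random.gauss(-theta_i + shift, sigma)
            break
    # pdf is the energy-weighted mixture over all lobes (needed for MIS weights)
    pdf = sum(w * gaussian_pdf(theta_o, -theta_i + s, sg) for w, s, sg in LOBES)
    return theta_o, pdf

theta_o, pdf = sample_longitudinal(theta_i=0.3)
print(theta_o, pdf)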
46.
This work presents a distributed method for control centers to monitor the operating condition of a power network, i.e., to estimate the network state, and ultimately to determine the occurrence of threatening situations. State estimation has been recognized as a fundamental task for network control centers to operate a power grid safely and reliably. We consider (static) state estimation problems in which the state vector consists of the voltage magnitude and angle at all network buses. We consider the state to be linearly related to the network measurements, which include power flows, current injections, and voltage phasors at some buses. We admit the presence of several cooperating control centers, and we design two distributed methods for them to compute the minimum-variance estimate of the state given the network measurements. The two distributed methods rely on different modes of cooperation among control centers: the first method uses an incremental mode of cooperation, whereas the second implements a diffusive interaction. Our procedures, which require each control center to know only the measurements and the structure of a subpart of the whole network, are computationally efficient and scalable with respect to the network dimension, provided that the number of control centers also increases with the network cardinality. Additionally, a finite-memory approximation of our diffusive algorithm is proposed, and its accuracy is characterized. Finally, our estimation methods are exploited to develop a distributed algorithm that detects corrupted network measurements.
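A minimal sketch of the estimation target and of the incremental mode of cooperation, under the assumption of a linear measurement model z_i = H_i x + noise at each control center: passing the information pair (Lambda, b) from center to center and letting each add its local contribution reproduces the centralized minimum-variance (weighted least-squares) estimate. The matrices below are random placeholders, not a real grid model.

import numpy as np

# Sketch of the centralised target that any incremental/diffusive scheme must
# reproduce: the minimum-variance (weighted least-squares) state estimate for
# a linear measurement model z_i = H_i x + noise at each control centre i.

rng = np.random.default_rng(0)
n = 4                                   # state dimension (bus angles/magnitudes)
x_true = rng.normal(size=n)

centers = []                            # (H_i, R_i, z_i) per control centre
for m in (3, 5, 4):                     # each centre sees only some measurements
    H = rng.normal(size=(m, n))
    R = np.diag(rng.uniform(0.01, 0.05, size=m))       # measurement covariance
    z = H @ x_true + rng.multivariate_normal(np.zeros(m), R)
    centers.append((H, R, z))

# "Incremental" accumulation: pass the information pair (Lambda, b) from one
# centre to the next, each adding its local contribution.
Lam = np.zeros((n, n))
b = np.zeros(n)
for H, R, z in centers:
    Rinv = np.linalg.inv(R)
    Lam += H.T @ Rinv @ H
    b += H.T @ Rinv @ z

x_hat = np.linalg.solve(Lam, b)         # minimum-variance estimate
print(np.linalg.norm(x_hat - x_true))   # estimation error on this toy example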
47.
The Farwell and Donchin P300 speller is one of the most widely used brain-computer interface (BCI) paradigms for writing text. Recent studies have shown that the recognition accuracy of the P300 speller decreases significantly when eye movement is impaired. This report introduces the GeoSpell interface (Geometric Speller), which implements a stimulation framework for a P300-based BCI optimised for operation under covert visual attention. We compared the GeoSpell interface under covert attention with the P300 speller under overt attention with regard to effectiveness, efficiency and user satisfaction. Ten healthy subjects participated in the study. The performance of the GeoSpell interface under covert attention was comparable with that of the P300 speller under overt attention. As expected, the effectiveness of the spelling decreased with the new interface under covert attention. The NASA task load index (TLX) for workload assessment did not differ significantly between the two modalities. Practitioner summary: this study introduces and evaluates a gaze-independent, P300-based brain-computer interface whose efficacy and user satisfaction were comparable with those of the classical P300 speller. Despite a decrease in effectiveness due to the use of covert attention, the performance of the GeoSpell far exceeded the accuracy threshold for effective spelling.
48.
This work presents an electricity-consumption forecasting framework that is configured automatically and based on an Adaptive Neuro-Fuzzy Inference System (ANFIS). The framework is intended for implementation in industrial plants, such as automotive factories, with the objective of supporting an Intelligent Energy Management System (IEMS). The purpose of the forecast is to support decision-making (e.g. scheduling workdays, switching production lines on and off, shifting power loads to avoid load peaks, etc.) in order to optimize and improve economic, environmental and electrical key performance indicators. The base algorithm, ANFIS, was configured by means of a Multi-Objective Genetic Algorithm (MOGA), so that the system model is configured automatically. The system was implemented in an independent section of an automotive factory, selected for the high randomness of its main loads. The forecasting time resolution was a quarter of an hour. Under these challenging conditions, the autonomous configuration, system learning and prognosis were tested successfully.
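As a point of reference only (the paper's ANFIS/MOGA pipeline is not reproduced), the snippet below shows a minimal quarter-hourly forecasting baseline: one-step-ahead prediction from lagged load values by ordinary least squares on synthetic data. All signals and lag choices are placeholders.

import numpy as np

# Minimal baseline sketch (not the paper's ANFIS/MOGA system): forecast
# quarter-hourly electricity consumption one step ahead from lagged values
# using ordinary least squares. The data below are synthetic placeholders.

rng = np.random.default_rng(1)
steps_per_day = 96                                  # 15-minute resolution
t = np.arange(14 * steps_per_day)                   # two synthetic weeks
load = 100 + 30 * np.sin(2 * np.pi * t / steps_per_day) + rng.normal(0, 5, t.size)

lags = [1, 2, 96]                                   # previous steps and same time yesterday
max_lag = max(lags)
X = np.column_stack([load[max_lag - l:-l] for l in lags])
X = np.column_stack([np.ones(len(X)), X])           # add an intercept column
y = load[max_lag:]

split = len(y) - steps_per_day                      # hold out the last day
coef, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
pred = X[split:] @ coef

mape = np.mean(np.abs((y[split:] - pred) / y[split:])) * 100
print(f"one-step-ahead MAPE on the held-out day: {mape:.1f}%")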
49.
The paper describes a parallel implementation of a neural algorithm performing vector quantization for very low bit-rate video compression on toroidal-mesh multiprocessor systems. The neural model considered is a plastic version of the Neural Gas algorithm, whose features make it suitable for implementation on toroidal mesh topologies. The adopted architecture and the data-allocation strategy give the method good scaling properties and remarkable efficiency. The parallel approach is supported by a theoretical analysis of the efficiency of the overall structure. Experimental results on a significant testbed, and the fit between predicted and measured values, confirm the validity of the parallel approach.
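For context, the sequential Neural Gas update that such an implementation parallelises looks roughly as follows: every codebook vector moves toward the current input with a step size that decays exponentially with its distance rank. This sketch ignores the plastic variant, the parallel decomposition and the toroidal-mesh mapping; parameter values are illustrative.

import numpy as np

# Core Neural Gas update (sequential form only): rank all codebook vectors by
# distance to the input and move each one toward the input with a step that
# decays exponentially with its rank.

def neural_gas_step(codebook, x, eps=0.05, lam=2.0):
    dists = np.linalg.norm(codebook - x, axis=1)
    ranks = np.argsort(np.argsort(dists))            # 0 = closest unit
    codebook += eps * np.exp(-ranks / lam)[:, None] * (x - codebook)
    return codebook

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 2))                   # 8 code vectors in 2-D
for x in rng.normal(size=(500, 2)):                  # stream of training samples
    codebook = neural_gas_step(codebook, x)
print(codebook)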
50.
Of all warehouse activities, order picking is one of the most time-consuming and expensive. To improve this task, several studies have pointed out the need to consider jointly the layout of the warehouse, the storage assignment strategy and the routing policy in order to reduce travelled distances and picking time. This paper presents the storage assignment and travel distance estimation (SA&TDE) joint method, a new approach for designing and evaluating a manual picker-to-parts picking system, focusing on goods allocation and distance estimation. Starting from a set of picking orders received in a certain time range, the approach evaluates the combinations of product codes assigned to storage locations, aisles, sections or warehouse areas and assesses the most relevant ones for the best location and warehouse layout, with the aim of ensuring optimal picking routes, through the application of the multinomial probability distribution. A case study is developed as well, in order to clarify the concepts that underlie the SA&TDE joint method and to show the validity and flexibility of the approach through the calculation of the savings at different levels of detail.
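A hedged sketch of the multinomial step only: if, under a candidate storage assignment, each pick line falls in zone j with probability p_j, then the probability that an n-line order splits across zones with given counts is multinomial, and an expected travel distance can be obtained by weighting a per-split distance estimate with those probabilities. Probabilities, distances and the zone model below are invented for illustration and are not the SA&TDE method itself.

from math import factorial
from itertools import product

# Sketch of the multinomial step only (not the full SA&TDE method): under a
# candidate storage assignment, each pick line falls in zone j with
# probability p[j]; the probability that an order of n lines splits across
# zones as counts (k_1, ..., k_m) is multinomial. Expected travel distance can
# then weight a per-split distance estimate by these probabilities.

p = [0.5, 0.3, 0.2]                  # share of pick lines per zone (candidate assignment)
zone_dist = [10.0, 25.0, 40.0]       # illustrative travel cost of visiting each zone

def multinomial_pmf(counts, p):
    n = sum(counts)
    coeff = factorial(n)
    for k in counts:
        coeff //= factorial(k)
    prob = float(coeff)
    for k, pj in zip(counts, p):
        prob *= pj ** k
    return prob

def expected_travel(n_lines, p, zone_dist):
    """Expected distance for an n-line order: visit every zone that gets >= 1 line."""
    exp_d = 0.0
    for counts in product(range(n_lines + 1), repeat=len(p)):
        if sum(counts) != n_lines:
            continue
        d = sum(dist for k, dist in zip(counts, zone_dist) if k > 0)
        exp_d += multinomial_pmf(counts, p) * d
    return exp_d

print(expected_travel(4, p, zone_dist))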