Sort order:   1015 results in total; search took 31 ms
21.
Numerical weather prediction (NWP) is in a period of transition. As resolutions increase, global models are moving towards fully nonhydrostatic dynamical cores, with the local and global models using the same governing equations; therefore we have reached a point where it will be necessary to use a single model for both applications. The new dynamical cores at the heart of these unified models are designed to scale efficiently on clusters with hundreds of thousands or even millions of CPU cores and GPUs. Operational and research NWP codes currently use a wide range of numerical methods: finite differences, spectral transform, finite volumes and, increasingly, finite/spectral elements and discontinuous Galerkin, which constitute element-based Galerkin (EBG) methods. Due to their important role in this transition, will EBGs be the dominant power behind NWP in the next 10 years, or will they just be one of many methods to choose from? One decade after the review of numerical methods for atmospheric modeling by Steppeler et al. (Meteorol Atmos Phys 82:287–301, 2003), this review discusses EBG methods as a viable numerical approach for the next-generation NWP models. One well-known weakness of EBG methods is the generation of unphysical oscillations in advection-dominated flows; special attention is hence devoted to dissipation-based stabilization methods. Since EBGs are geometrically flexible and allow both conforming and non-conforming meshes, as well as grid adaptivity, this review is concluded with a short overview of how mesh generation and dynamic mesh refinement are becoming as important for atmospheric modeling as they have been for engineering applications for many years.
22.
Image processing represents a research field in which high-quality solutions have been obtained using various soft computing techniques. Evolutionary algorithms constitute a class of stochastic search methods that are applicable to both optimization and design tasks. In the area of circuit design, Cartesian Genetic Programming has often been utilized in combination with an Evolutionary Strategy algorithm. Digital image filters represent a specific class of circuits whose design can be performed by means of this approach. Switching filters are advanced non-linear filtering techniques whose main idea is to detect and filter only the noisy pixels while leaving uncorrupted pixels unchanged, in order to increase the quality of the resulting image. The aim of this article is to present a robust design technique based on Cartesian Genetic Programming for the automatic synthesis of switching image filters intended for real-time processing applications. The robustness of the proposed evolutionary approach is evaluated using four design problems: the removal of salt-and-pepper noise, random shot noise, impulse burst noise, and impulse burst noise combined with random shot noise. An extensive evaluation is performed in order to compare the properties of the evolved switching filters with the best conventional solutions. The evaluation has shown that the evolved switching filters exhibit a very good trade-off between the quality of filtering and the implementation cost in field-programmable gate arrays.
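As a conventional point of comparison (not one of the evolved CGP filters described above), a minimal switching median filter for salt-and-pepper noise can be sketched as follows; the function name and the extreme-value noise detector are illustrative assumptions:

```python
import numpy as np

def switching_median_filter(img, lo=0, hi=255):
    """Replace only suspected salt-and-pepper pixels (extreme values)
    with the median of their 3x3 neighbourhood; leave the rest untouched."""
    out = img.copy()
    padded = np.pad(img, 1, mode='edge')
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            if img[y, x] == lo or img[y, x] == hi:   # simple noise detector
                out[y, x] = np.median(padded[y:y+3, x:x+3])
    return out

clean = np.full((5, 5), 100, dtype=np.uint8)
noisy = clean.copy()
noisy[2, 2] = 255                       # inject one "salt" pixel
restored = switching_median_filter(noisy)
print(restored[2, 2])                   # -> 100: the corrupted pixel is repaired
print(np.array_equal(restored, clean))  # -> True: uncorrupted pixels untouched
```

The "switching" aspect is the detection step: only pixels flagged as noise are filtered, which is what preserves image detail relative to a plain median filter.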
23.
24.
When calculating aberration coefficients of second and higher order, there is a danger of misinterpreting the result. An example is given for a homogeneous magnetic field, and the source of the difficulty is described.
25.
Nickel-iron layered double hydroxide (NiFe-LDH) nanosheets have shown excellent oxygen evolution reaction (OER) performance; however, the role of the intercalated ions in the OER activity remains unclear. In this work, we show that the activity of NiFe-LDHs can be tailored by intercalating anions with different redox potentials. Intercalation of anions with low redox potential (high reducing ability), such as hypophosphite, leads to NiFe-LDHs with a low OER overpotential of 240 mV and a small Tafel slope of 36.9 mV/dec, whereas NiFe-LDHs intercalated with anions of high redox potential (low reducing ability), such as fluoride, show a high overpotential of 370 mV and a Tafel slope of 80.8 mV/dec. The OER activity shows a surprising linear correlation with the standard redox potential. Density functional theory calculations and X-ray photoelectron spectroscopy analysis indicate that the intercalated anions alter the electronic structure of the metal atoms exposed at the surface. Anions with low standard redox potential and strong reducing ability transfer more electrons to the hydroxide layers, which increases the electron density of the surface metal sites and stabilizes their high-valence states, whose formation is known to be the critical step preceding the OER process.
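The Tafel slopes quoted above come from fitting the linear (Tafel) region of the polarization curve, where overpotential varies linearly with the logarithm of current density. A minimal sketch of that fit with hypothetical data (not the paper's measurements):

```python
import numpy as np

# Hypothetical polarization data: current density j (mA/cm^2) and
# overpotential eta (mV), constructed here with a 40 mV/dec Tafel slope.
j = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
eta = 240 + 40 * np.log10(j)

# Tafel slope = slope of the eta vs log10(j) line
slope, intercept = np.polyfit(np.log10(j), eta, 1)
print(f"Tafel slope: {slope:.1f} mV/dec")          # -> 40.0 mV/dec
print(f"Overpotential at 1 mA/cm^2: {intercept:.0f} mV")  # -> 240 mV
```

A smaller slope means the current grows faster per unit of added overpotential, which is why the 36.9 mV/dec hypophosphite-intercalated sample indicates better OER kinetics than the 80.8 mV/dec fluoride-intercalated one.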
26.
The paper presents a novel machine learning algorithm for training a compound classifier system that consists of a set of area classifiers, each of which recognizes objects drawn from its respective competence area. Splitting the feature space into areas and selecting the area classifiers are the two key processes of the algorithm; both take place simultaneously in the course of an optimization process aimed at maximizing system performance, with an evolutionary algorithm used to find the optimal solution. A number of experiments were carried out to evaluate system performance. The results show that the proposed method outperforms each elementary classifier as well as simple voting.
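A toy sketch of the area-classifier idea, with a fixed split and exhaustive per-area selection standing in for the paper's evolutionary optimization (all names and data are illustrative):

```python
import numpy as np

# Two trivial pool classifiers over a 1-D feature.
def clf_low(x):  return 0     # always predicts class 0
def clf_high(x): return 1     # always predicts class 1
pool = [clf_low, clf_high]

X = np.array([0.1, 0.2, 0.3, 0.7, 0.8, 0.9])
y = np.array([0, 0, 0, 1, 1, 1])
edges = [0.5]                 # one split -> two competence areas

def area_of(x):
    return sum(x > e for e in edges)

# For each area, keep the pool classifier with the best training accuracy there.
selected = {}
for a in range(len(edges) + 1):
    mask = np.array([area_of(x) == a for x in X])
    accs = [np.mean(np.array([c(x) for x in X[mask]]) == y[mask]) for c in pool]
    selected[a] = pool[int(np.argmax(accs))]

def predict(x):
    """Route the sample to the classifier responsible for its area."""
    return selected[area_of(x)](x)

preds = [predict(x) for x in X]
print(preds)   # -> [0, 0, 0, 1, 1, 1], correct on all samples
```

Neither pool member alone classifies this data correctly, yet the compound system does, which illustrates why area-based selection can beat both the elementary classifiers and a simple vote.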
27.
We present a method to speed up the dynamic programming algorithms used for solving the HMM decoding and training problems for discrete time-independent HMMs. We discuss the application of our method to Viterbi's decoding and training algorithms (IEEE Trans. Inform. Theory IT-13:260–269, 1967), as well as to the forward-backward and Baum-Welch (Inequalities 3:1–8, 1972) algorithms. Our approach is based on identifying repeated substrings in the observed input sequence. Initially, we show how to exploit repetitions of all sufficiently small substrings (this is similar to the Four Russians method). Then, we describe four algorithms based alternatively on run-length encoding (RLE), Lempel-Ziv (LZ78) parsing, grammar-based compression (SLP), and byte pair encoding (BPE). Compared to Viterbi's algorithm, we achieve a speedup of Θ(log n) using the Four Russians method, further compression-dependent speedups using RLE, LZ78, and SLP, and Ω(r) using BPE, where k is the number of hidden states, n is the length of the observed sequence, and r is its compression ratio (under each compression scheme). Our experimental results demonstrate that our new algorithms are indeed faster in practice. We also discuss a parallel implementation of our algorithms. A preliminary version of this paper appeared in Proc. 18th Annual Symposium on Combinatorial Pattern Matching (CPM), pp. 4–15, 2007. Y. Lifshits' research was supported by the Center for the Mathematics of Information and the Lee Center for Advanced Networking. S. Mozes' work was conducted while visiting MIT.
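For reference, the classic O(nk²) Viterbi decoder that these compression-based methods accelerate can be written as follows; the two-state weather HMM is a standard illustrative example, not data from the paper:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Classic O(n * k^2) Viterbi decoding: most likely hidden-state path."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({}); back.append({})
        for s in states:
            prob, prev = max(
                (V[t-1][p] + math.log(trans_p[p][s]) + math.log(emit_p[s][obs[t]]), p)
                for p in states)
            V[t][s] = prob
            back[t][s] = prev
    # backtrack from the best final state
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

states = ['Rainy', 'Sunny']
start = {'Rainy': 0.6, 'Sunny': 0.4}
trans = {'Rainy': {'Rainy': 0.7, 'Sunny': 0.3},
         'Sunny': {'Rainy': 0.4, 'Sunny': 0.6}}
emit = {'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
        'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1}}

path = viterbi(['walk', 'shop', 'clean'], states, start, trans, emit)
print(path)   # -> ['Sunny', 'Rainy', 'Rainy']
```

The inner max over previous states is the O(k²)-per-step cost; the paper's idea is to precompute the effect of repeated observation substrings so that this step need not be repeated for every occurrence.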
28.
The present paper deals with solving the (\(n^2 - 1\))-puzzle and cooperative path-finding (CPF) problems sub-optimally by rule-based algorithms. To solve the puzzle, we need to rearrange \(n^2 - 1\) pebbles in the \(n \times n\)-sized square grid using one vacant position to achieve the goal configuration. An improvement to the existing polynomial-time algorithm is proposed and experimentally analyzed. The improved algorithm attempts to move pebbles more efficiently than the original algorithm by grouping them into so-called snakes and moving them together as part of a snake formation. An experimental evaluation has shown that the snake-enhanced algorithm produces solutions that are 8–9 % shorter than those generated by the original algorithm. Snake-like movement has also been integrated into the rule-based algorithms used to solve CPF problems sub-optimally, which is a closely related task. The task in CPF consists in moving a group of abstract robots on an undirected graph to specific vertices. The robots can move to unoccupied neighboring vertices, and no more than one robot can occupy each vertex. The (\(n^2 - 1\))-puzzle is a special case of CPF in which the underlying graph is a 4-connected grid and only one vertex is vacant. Two major rule-based algorithms for solving CPF problems were included in our study: BIBOX and PUSH-and-SWAP (PUSH-and-ROTATE). The use of snakes in the BIBOX algorithm led to consistent efficiency gains of around 30 % for the (\(n^2 - 1\))-puzzle and up to 50 % for CPF problems on biconnected graphs with various ear decompositions and multiple vacant vertices. For the PUSH-and-SWAP algorithm, the efficiency gain achieved from the use of snakes was around 5–8 %; however, this gain was unstable and hard to predict.
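This is not the snake algorithm itself, but a standard preliminary for any (\(n^2 - 1\))-puzzle solver is checking that the instance is solvable at all; the classical inversion-parity test can be sketched as:

```python
def solvable(board):
    """Solvability of an n x n sliding puzzle (0 = blank) via the
    classical inversion-parity rule."""
    n = len(board)
    flat = [v for row in board for v in row if v != 0]
    inversions = sum(1 for i in range(len(flat))
                       for j in range(i + 1, len(flat))
                       if flat[i] > flat[j])
    if n % 2 == 1:                       # odd width: inversion count must be even
        return inversions % 2 == 0
    # even width: parity also depends on the blank's row, counted from the bottom
    blank_row_from_bottom = n - next(r for r, row in enumerate(board) if 0 in row)
    return (inversions + blank_row_from_bottom) % 2 == 1

goal    = [[1, 2, 3], [4, 5, 6], [7, 8, 0]]
swapped = [[2, 1, 3], [4, 5, 6], [7, 8, 0]]   # two tiles swapped -> unsolvable
print(solvable(goal), solvable(swapped))       # -> True False
```

Exactly half of all pebble permutations fail this test, which is why rule-based algorithms such as the one above only need to handle the even-parity configurations.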
29.
We propose a novel algorithm, called REGGAE, for generating the momenta of a given sample of particle masses, evenly distributed in Lorentz-invariant phase space and obeying energy and momentum conservation. In comparison to other existing algorithms, REGGAE is designed for use in multiparticle production in hadronic and nuclear collisions, where many hadrons are produced and a large part of the available energy is stored in the form of their masses. The algorithm uses a loop simulating multiple collisions, which leads to the production of configurations with reasonably large weights.

Program summary

Program title: REGGAE (REscattering-after-Genbod GenerAtor of Events)
Catalogue identifier: AEJR_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJR_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 1523
No. of bytes in distributed program, including test data, etc.: 9608
Distribution format: tar.gz
Programming language: C++
Computer: PC Pentium 4, though no particular tuning for this machine was performed.
Operating system: Originally designed on a Linux PC with g++, but it has also been compiled and run successfully on OS X with g++ and on MS Windows with Microsoft Visual C++ 2008 Express Edition.
RAM: Depends on the number of particles generated; for 10 particles, as in the attached example, about 120 kB.
Classification: 11.2
Nature of problem: The task is to generate momenta of a sample of particles with given masses which obey energy and momentum conservation. Generated samples should be evenly distributed in the available Lorentz-invariant phase space.
Solution method: The algorithm works in two steps. First, all momenta are generated with the GENBOD algorithm, which models particle production as a sequence of two-body decays of heavy resonances. The momenta are then reshuffled: each particle undergoes a collision with some other partner such that, in the pair centre-of-mass system, the new directions of the momenta are distributed isotropically. After each particle has collided only a few times, the momenta are distributed evenly across the whole available phase space. Starting with GENBOD is not essential for the procedure, but it improves performance.
Running time: Depends on the number of particles and the number of events to be generated. On a Linux PC with a 2 GHz processor, generating 1000 events with 10 particles each takes about 3 s.
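The GENBOD step models particle production as a chain of two-body decays; one such decay with an isotropic direction can be sketched as follows (a simplified illustration of the kinematics, not REGGAE's actual code):

```python
import math
import random

def two_body_decay(M, m1, m2, rng=None):
    """Decay a resonance of mass M at rest into two particles (masses m1, m2)
    with an isotropic direction; returns two four-momenta (E, px, py, pz)."""
    rng = rng or random.Random(42)
    # momentum magnitude from standard two-body kinematics
    p = math.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2 * M)
    cos_t = rng.uniform(-1.0, 1.0)             # isotropic: uniform in cos(theta)
    phi = rng.uniform(0.0, 2.0 * math.pi)      # ... and uniform in phi
    sin_t = math.sqrt(1.0 - cos_t**2)
    direction = (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)
    p1 = (math.sqrt(m1**2 + p**2),) + tuple(+p * d for d in direction)
    p2 = (math.sqrt(m2**2 + p**2),) + tuple(-p * d for d in direction)
    return p1, p2

# e.g. a 1 GeV resonance decaying into two pion-mass particles
p1, p2 = two_body_decay(1.0, 0.14, 0.14)
print(p1[0] + p2[0])                       # total energy ~ 1.0 (conserved)
print([p1[i] + p2[i] for i in (1, 2, 3)])  # total momentum ~ (0, 0, 0)
```

Chaining such decays yields an initial configuration, and REGGAE's reshuffling loop (omitted here) then performs pairwise "collisions" with isotropic centre-of-mass scattering to spread the sample evenly over phase space.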
30.
This paper shows how to improve holistic face analysis by assigning importance factors to different facial regions (termed face relevance maps). We propose a novel supervised learning algorithm for generating face relevance maps to improve the discriminating capability of existing methods. We have successfully applied the developed technique to face identification based on the Eigenfaces and Fisherfaces methods, and to gender classification based on principal geodesic analysis (PGA). We demonstrate how to iteratively learn the face relevance map from labelled data. Experimental results confirm the effectiveness of the developed approach.
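A minimal sketch of how a relevance map can act as per-region weights in a holistic matching distance (the data, weights, and names are illustrative, not the paper's learned maps):

```python
import numpy as np

def weighted_distance(a, b, relevance):
    """Matching distance where the relevance map weights each feature."""
    return np.sqrt(np.sum(relevance * (a - b) ** 2))

# Toy "face" feature vectors; dimensions 0-1 are discriminative, dim 2 is not.
probe = np.array([1.0, 0.0, 0.0])
gallery = {'id_A': np.array([1.0, 0.0, 1.0]),
           'id_B': np.array([0.5, 0.5, 0.0])}

uniform   = np.ones(3) / 3                  # plain holistic matching
relevance = np.array([0.8, 0.15, 0.05])     # emphasize the discriminative regions

results = []
for w in (uniform, relevance):
    best = min(gallery, key=lambda k: weighted_distance(probe, gallery[k], w))
    results.append(best)
print(results)   # -> ['id_B', 'id_A']: the relevance map flips the match
```

With uniform weights the irrelevant dimension dominates and the probe matches the wrong identity; down-weighting it via the relevance map recovers the match, which is the intuition behind learning such maps from labelled data.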

Copyright © Beijing Qinyun Technology Development Co., Ltd.    京ICP备09084417号-23