991.
The development of microstructure during plastic deformation is reviewed for iron and steel subjected to cold rolling or mechanical milling (MM), and the change in strengthening mechanism caused by severe plastic deformation (SPD) is discussed in terms of ultra grain refinement. The microstructure of cold-rolled iron is characterized by a typical dislocation cell structure, and its strength can be explained by dislocation strengthening. The increase in dislocation density achievable by cold working is confirmed to be limited to about 10¹⁶ m⁻², which means that the maximum hardness obtainable by dislocation strengthening is HV 3.7 GPa. However, iron subjected to the SPD of MM is work-hardened well beyond this dislocation-strengthening limit because of the ultra grain refinement caused by the SPD. In addition, carbon impurity plays an important role in such grain refinement: carbon addition leads to the formation of a nano-crystallized structure in iron.
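To put the quoted numbers in context, the sketch below evaluates the classical Taylor dislocation-hardening relation in Python; the relation itself and every parameter value (α, M, G, b, σ₀, and the HV ≈ 3σ rule of thumb) are generic textbook-style assumptions, not data from the reviewed work.

```python
# Illustrative sketch only: Taylor dislocation hardening for bcc iron,
# sigma = sigma_0 + alpha * M * G * b * sqrt(rho).
# All parameter values below are assumed, not taken from the reviewed paper.
import math

alpha   = 0.2        # geometric constant (assumed)
M       = 3.0        # Taylor factor for a polycrystal (assumed)
G       = 82e9       # shear modulus of iron, Pa (assumed)
b       = 0.248e-9   # Burgers vector of bcc iron, m (assumed)
sigma_0 = 50e6       # friction stress, Pa (assumed)

def taylor_strength(rho):
    """Flow stress (Pa) for dislocation density rho (m^-2)."""
    return sigma_0 + alpha * M * G * b * math.sqrt(rho)

rho_max = 1e16       # saturation dislocation density quoted in the abstract
sigma = taylor_strength(rho_max)
# Rough hardness estimate via the common HV ~ 3 x flow-stress rule of thumb.
print(f"flow stress ~ {sigma/1e9:.2f} GPa, HV ~ {3*sigma/1e9:.1f} GPa")
```

With these assumed parameters the estimate lands in the same few-GPa range as the HV 3.7 GPa saturation value quoted above, which is all the illustration is meant to show.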
992.
A low-energy N₂⁺ ion beam was impinged on an α-Al₂O₃(0001) single-crystal surface at room temperature, with fluences ranging from 5×10¹⁵/cm² to 1×10¹⁸/cm². After ion bombardment, the chemical bonding on the modified sapphire surface was investigated by X-ray photoelectron spectroscopy. Below a fluence of 1×10¹⁵/cm², only a non-bonded N 1s peak at a binding energy of 398.7 eV was found, but further irradiation up to 2×10¹⁷/cm² induced Al-O-N bonding at around 403 eV. Al-N bonding, at 396.6 eV, was identified at fluences higher than 5×10¹⁷/cm². II–VI ZnO thin films were grown on untreated and ion-beam-modified sapphire surfaces by pulsed laser deposition (PLD) to investigate the effect of the modified substrate on photoluminescence. The ZnO films grown on modified sapphire containing only Al-O-N bonding, as well as on sapphire containing both Al-O-N and Al-N bonding, showed a significant reduction of the deep-level-defect peak in photoluminescence. These results are explained in terms of the formation of Al-N-O and Al-O-N layers and relaxation of the interfacial strain between Al₂O₃ and ZnO.
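As a compact, purely illustrative restatement of the fluence regimes above, a hypothetical Python helper can map an ion fluence to the bonding states reported by XPS; the function, its name and the assumption that the non-bonded N 1s peak persists at higher fluences are not from the paper, only the thresholds and binding energies are.

```python
# Hypothetical summary of the XPS bonding regimes reported in the abstract.
# Only the fluence thresholds and binding energies come from the abstract;
# everything else (function, persistence of the N 1s peak) is illustrative.
def bonding_states(fluence_per_cm2: float) -> dict:
    """Return N-related bonding states (and binding energies, eV) observed
    on sapphire after N2+ irradiation at the given fluence."""
    states = {"non-bonded N 1s": 398.7}   # assumed to persist at all fluences
    if fluence_per_cm2 >= 2e17:
        states["Al-O-N"] = 403.0          # approximate peak position
    if fluence_per_cm2 >= 5e17:
        states["Al-N"] = 396.6
    return states

print(bonding_states(1e16))   # {'non-bonded N 1s': 398.7}
print(bonding_states(6e17))   # adds Al-O-N and Al-N
```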
993.
This work presents new stabilised finite element methods for a bending moments formulation of the Reissner–Mindlin plate model. The introduction of the bending moment as an extra unknown leads to a new weak formulation, in which the symmetry of this variable is imposed strongly in the space. This weak problem is proved to be well-posed, and stabilised Galerkin schemes for its discretisation are presented and analysed. The finite element methods are such that the bending moment tensor is sought in a finite element space consisting of piecewise linear, continuous and symmetric tensors. Optimal error estimates are proved, and these findings are illustrated by representative numerical experiments.
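One plausible way to write down a discrete space of "piecewise linear, continuous and symmetric tensors" as described above is sketched below; the notation is an assumption, and the exact spaces and stabilisation terms used in the paper may differ.

```latex
% One plausible (assumed) definition of a space of piecewise linear,
% continuous, symmetric tensor fields on a triangulation T_h of Omega:
\[
  \mathbb{M}_h \;=\; \left\{\, M_h \in \left[ C(\bar{\Omega}) \right]^{2\times 2} :\;
     M_h = M_h^{\mathsf{T}},\ \
     M_h|_K \in \left[ \mathbb{P}_1(K) \right]^{2\times 2}
     \ \ \forall K \in \mathcal{T}_h \,\right\},
\]
% so the symmetry of the bending moment tensor is imposed strongly,
% i.e. built into the discrete space rather than enforced weakly.
```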
994.
The efficient design of networks has been an important engineering task that involves challenging combinatorial optimization problems. Typically, a network designer has to select, among several alternatives, which links to establish so that the resulting network satisfies a given set of connectivity requirements and the cost of establishing the links is as low as possible. The Minimum Spanning Tree problem, which is well understood, is a nice example. In this paper, we consider the natural scenario in which the connectivity requirements are posed by selfish users who have agreed to share the cost of the network to be established according to a well-defined rule. The design proposed by the network designer should now be consistent not only with the connectivity requirements but also with the selfishness of the users. Essentially, the users are players in a so-called network design game, and the network designer has to propose a design that is an equilibrium for this game. As is usually the case when selfishness comes into play, such equilibria may be suboptimal. In this paper, we consider the following question: can the network designer enforce particular designs as equilibria, or guarantee that efficient designs are consistent with users' selfishness, by appropriately subsidizing some of the network links? In an attempt to understand this question, we formulate corresponding optimization problems and present positive and negative results.
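As a reminder of the baseline combinatorial problem mentioned above, here is a minimal Kruskal-style minimum spanning tree sketch in Python; it is the standard textbook algorithm run on made-up data, not code from the paper.

```python
# Minimal Kruskal MST sketch: connect all nodes at minimum total link cost.
# Standard textbook algorithm, illustrating the baseline design problem
# before selfish cost-sharing enters the picture. Example data are made up.
def kruskal_mst(num_nodes, edges):
    """edges: list of (cost, u, v); returns the list of chosen edges."""
    parent = list(range(num_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    tree = []
    for cost, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                        # adding this link creates no cycle
            parent[ru] = rv
            tree.append((cost, u, v))
    return tree

edges = [(4, 0, 1), (1, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal_mst(4, edges))   # [(1, 0, 2), (2, 1, 3), (3, 1, 2)]
```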
995.
996.
We propose hybrid difference methods for partial differential equations (PDEs). The hybrid difference method is composed of two types of approximations: the finite difference approximation of the PDE within cells (cell FD) and the interface finite difference (interface FD) on the edges of cells. The interface finite difference is obtained from the continuity of certain physical quantities. The main advantages of this new approach are that the method can be applied to non-uniform grids while retaining the optimal order of convergence, and that stability of the numerical method for the Stokes equations is obtained without introducing staggered grids.
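For readers unfamiliar with differencing on non-uniform grids, the toy Python sketch below solves −u″ = f with the classical three-point stencil on unequally spaced nodes; it only illustrates the non-uniform-grid setting and is not the hybrid cell/interface scheme proposed in the paper.

```python
# Toy: -u'' = f on (0,1), u(0)=u(1)=0, on a NON-uniform grid, using the
# classical three-point finite difference stencil with unequal spacings.
# Generic illustration only; not the hybrid difference method of the paper.
import numpy as np

x = np.sort(np.concatenate(([0.0, 1.0], np.random.rand(30))))  # non-uniform nodes
n = len(x)
A = np.zeros((n, n))
rhs = np.zeros(n)
f = lambda t: np.pi**2 * np.sin(np.pi * t)   # exact solution: sin(pi x)

A[0, 0] = A[-1, -1] = 1.0                    # Dirichlet boundary rows
for i in range(1, n - 1):
    hl, hr = x[i] - x[i - 1], x[i + 1] - x[i]
    A[i, i - 1] = -2.0 / (hl * (hl + hr))
    A[i, i]     =  2.0 / (hl * hr)
    A[i, i + 1] = -2.0 / (hr * (hl + hr))
    rhs[i] = f(x[i])

u = np.linalg.solve(A, rhs)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```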
997.
In numerous industrial CFD applications, it is common to use two (or more) different codes to solve a physical phenomenon: where the flow is a priori assumed to have simple behavior, a code based on a coarse model is applied, while a code based on a fine model is used elsewhere. This leads to a complex coupling problem with fixed interfaces. The aim of the present work is to provide a numerical indicator for optimizing the position of these coupling interfaces. In other words, thanks to this numerical indicator, one can check whether the use of the coarser model, and of the resulting coupling, introduces spurious effects. In order to validate this indicator, we use it in a dynamical multiscale method with moving coupling interfaces. The principle of this method is to use the coarse model instead of the fine model over as much of the computational domain as possible, while obtaining an accuracy comparable with that provided by the fine model alone. We focus here on general hyperbolic systems with stiff relaxation source terms, together with the corresponding hyperbolic equilibrium systems. Using a numerical Chapman–Enskog expansion and the distance to the equilibrium manifold, we construct the numerical indicator. Building on several works on the coupling of different hyperbolic models, an original numerical method of dynamic model adaptation is proposed. We prove that this multiscale method preserves invariant domains and that the entropy of the numerical solution decreases with respect to time. The reliability of the adaptation procedure is assessed on various 1D and 2D test cases coming from two-phase flow modeling.
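The Python sketch below caricatures the "distance to the equilibrium manifold" idea on a toy relaxation system; the system, the tolerance and the pointwise indicator are illustrative assumptions, whereas the paper's indicator is built from a numerical Chapman–Enskog expansion.

```python
# Schematic of a distance-to-equilibrium adaptation indicator for a toy
# relaxation system u_t + v_x = 0, v_t + a*u_x = (f(u) - v)/eps,
# whose equilibrium manifold is v = f(u). Cartoon of the idea only;
# the paper's indicator relies on a numerical Chapman-Enskog expansion.
import numpy as np

def adaptation_mask(u, v, f, tol=1e-3):
    """True where the fine (relaxation) model should be kept, False where
    the coarse (equilibrium) model v = f(u) is deemed sufficient."""
    distance = np.abs(v - f(u))          # pointwise distance to equilibrium
    return distance > tol

# Made-up state, driven away from equilibrium near x ~ 0.5:
x = np.linspace(0.0, 1.0, 200)
f = lambda u: 0.5 * u**2                 # Burgers-type equilibrium flux
u = np.sin(2 * np.pi * x)
v = f(u) + 0.05 * np.exp(-200 * (x - 0.5) ** 2)

mask = adaptation_mask(u, v, f, tol=1e-3)
print("fine-model cells:", mask.sum(), "of", mask.size)
```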
998.
In this article we present a new class of particle methods that aim at being accurate in the uniform norm with a minimal amount of smoothing. The crux of our approach is to compute local polynomial expansions of the characteristic flow in order to transport the particle shapes with improved accuracy. In the first-order case the method consists of representing the transported density with linearly-transformed particles; the second-order version transports quadratically-transformed particles, and so on. For practical purposes we provide discrete versions of the resulting LTP and QTP schemes that only involve pointwise evaluations of the forward characteristic flow, and we propose local indicators for the associated transport error. On a theoretical level we extend these particle schemes to arbitrary polynomial orders and show by a rigorous analysis that for smooth flows the resulting methods converge in L∞ without requiring remappings, extended overlapping or vanishing moments for the particles. Numerical tests using different passive transport problems demonstrate the accuracy of the proposed methods compared to basic particle schemes and establish their robustness with respect to the remapping period. In particular, it is shown that QTP particles can be transported without remapping over very long periods of time without hampering the accuracy of the numerical solutions. Finally, a dynamic criterion is proposed to automatically select the time steps at which the particles should be remapped. The strategy is a by-product of our error analysis, and it is validated by numerical experiments.
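A toy 1D Python illustration of the linearly-transformed particle (LTP) idea follows: each particle carries a pushed-forward center and a local Jacobian of the flow obtained from pointwise flow evaluations. The velocity field, shape function and all numerical choices are assumptions made for illustration, not the authors' implementation.

```python
# Toy 1D sketch of linearly-transformed particles (LTP): each particle carries
# its pushed-forward position and a local linearization (Jacobian) of the
# characteristic flow, obtained here by finite differences of pointwise flow
# evaluations. Illustration only; not the authors' implementation.
import numpy as np

def flow(x, t):
    """Forward characteristic flow of a smooth, made-up velocity field."""
    return x + t * np.sin(x)             # hypothetical, for illustration

def ltp_density(x_eval, x0, w, h, t, dx=1e-4):
    """Reconstruct the transported density at x_eval from particles initially
    at x0 with weights w and hat-shaped cores of width h."""
    xk = flow(x0, t)                                       # pushed-forward centers
    jk = (flow(x0 + dx, t) - flow(x0 - dx, t)) / (2 * dx)  # local Jacobians
    rho = np.zeros_like(x_eval)
    for c, j, wk in zip(xk, jk, w):
        s = (x_eval - c) / (h * j)                         # linearly transformed shape
        rho += wk * np.maximum(1 - np.abs(s), 0) / (h * j)
    return rho

x0 = np.linspace(-np.pi, np.pi, 200)
h = x0[1] - x0[0]
w = h * np.exp(-x0**2)                   # initial density ~ exp(-x^2)
x_eval = np.linspace(-np.pi, np.pi, 400)
print(ltp_density(x_eval, x0, w, h, t=0.5).max())
```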
999.
In this article I argue for rule-based, non-monotonic theories of common law judicial reasoning and improve upon one such theory offered by Horty and Bench-Capon. The improvements reveal some of the interconnections between formal theories of judicial reasoning and traditional issues within jurisprudence regarding the notions of the ratio decidendi and obiter dicta. Though I do not purport to resolve the long-standing jurisprudential issues here, it is beneficial for theorists of both legal philosophy and the formalization of legal reasoning to see where the two projects interact.
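As a point of reference, the following deliberately simplified Python sketch encodes an a fortiori, factor-based notion of precedential constraint in the spirit of the models studied by Horty and Bench-Capon; the data structures and factor names are hypothetical, and the article's own improved theory is not reproduced here.

```python
# Simplified sketch of factor-based precedential constraint: a precedent
# decided for one side constrains a new case whenever the new case is at
# least as strong for that side (a fortiori). Not the article's theory.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Case:
    pro_plaintiff: frozenset          # factors favouring the plaintiff
    pro_defendant: frozenset          # factors favouring the defendant
    outcome: Optional[str] = None     # "plaintiff", "defendant", or None

def forced_outcome(new, precedents):
    """Return an outcome forced on `new` by some precedent, else None."""
    for p in precedents:
        if (p.outcome == "plaintiff"
                and p.pro_plaintiff <= new.pro_plaintiff
                and new.pro_defendant <= p.pro_defendant):
            return "plaintiff"
        if (p.outcome == "defendant"
                and p.pro_defendant <= new.pro_defendant
                and new.pro_plaintiff <= p.pro_plaintiff):
            return "defendant"
    return None

prec = Case(frozenset({"f1", "f2"}), frozenset({"d1"}), "plaintiff")
new = Case(frozenset({"f1", "f2", "f3"}), frozenset())
print(forced_outcome(new, [prec]))   # "plaintiff": new case is a fortiori stronger
```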
1000.
Inspired by the relational algebra of data processing, this paper addresses the foundations of data analytical processing from a linear algebra perspective. In particular, the paper investigates how aggregation operations such as cross tabulations and data cubes, which are essential to the quantitative analysis of data, can be expressed solely in terms of matrix multiplication, transposition and the Khatri–Rao variant of the Kronecker product. The approach offers a basis for deriving an algebraic theory of data consolidation, handling the quantitative as well as the qualitative sides of data science in a natural, elegant and typed way. It also shows potential for parallel analytical processing, as the parallelization theory of such matrix operations is well developed.
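The NumPy sketch below illustrates the general technique described in the abstract on a made-up five-row table: attributes become 0/1 projection matrices, a cross tabulation is a matrix product, a measure enters through a diagonal matrix, and a composite dimension arises from the Khatri–Rao (column-wise Kronecker) product of the attribute matrices. Data and names are invented for illustration.

```python
# Attributes as 0/1 projection matrices; cross tabs as matrix products.
# Tiny made-up dataset; only the general technique comes from the abstract.
import numpy as np

def projection(column, domain):
    """0/1 matrix P with P[a, i] = 1 iff row i takes value domain[a]."""
    return np.array([[1 if v == a else 0 for v in column] for a in domain])

city    = ["Lisbon", "Porto", "Lisbon", "Lisbon", "Porto"]
product = ["tea", "tea", "coffee", "tea", "coffee"]
sales   = np.array([3, 5, 2, 1, 4])

P_city = projection(city, ["Lisbon", "Porto"])
P_prod = projection(product, ["tea", "coffee"])

# Cross tabulation (row counts per city x product) as a matrix product:
print(P_city @ P_prod.T)

# Sales totals per city x product: insert the measure as a diagonal matrix.
print(P_city @ np.diag(sales) @ P_prod.T)

# Khatri-Rao product = projection matrix of the composite (city, product) key:
khatri_rao = np.vstack(
    [np.kron(P_city[:, i], P_prod[:, i]) for i in range(len(sales))]
).T
print(khatri_rao @ sales)    # same totals, flattened into one vector
```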