31.
We discuss the bounded rationality underlying traditional learning mechanisms, i.e., Recursive Ordinary Least Squares and Bayesian Learning. For several reasons these mechanisms lack a behavioral interpretation and, following Simon's criticism, they appear to be substantively rational. In this paper, analyzing the Cagan model, we explore two learning mechanisms that are more plausible from a behavioral point of view and, in a sense, procedurally rational: Least Mean Squares learning for linear models and Back Propagation for Artificial Neural Networks. Both algorithms seek a minimum of the forecast error variance by means of a steepest-descent gradient procedure. The analysis of the Cagan model yields an interesting result: non-convergence of learning to the Rational Expectations Equilibrium is not due to the restriction to linear learning devices; Back Propagation learning for Artificial Neural Networks may also fail to converge to the Rational Expectations Equilibrium of the model.
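As an illustration of the kind of steepest-descent learning rule discussed here, below is a minimal Least Mean Squares sketch: the agent updates a linear forecast coefficient by gradient descent on the squared forecast error. The AR(1) environment, learning rate, and horizon are illustrative stand-ins, not the Cagan model itself.

```python
# Minimal LMS sketch: stochastic gradient descent on the squared forecast error.
import numpy as np

rng = np.random.default_rng(0)
T, eta = 5000, 0.01          # horizon and learning rate (assumed values)
w = 0.0                      # forecast coefficient: y_hat_{t+1} = w * y_t
y_prev = rng.normal()

for t in range(T):
    y = 0.5 * y_prev + rng.normal(scale=0.1)  # stand-in AR(1) environment
    e = y - w * y_prev                        # one-step forecast error
    w += eta * e * y_prev                     # steepest-descent (LMS) update
    y_prev = y

print(f"learned coefficient w = {w:.3f} (true value 0.5)")
```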
32.
Genetic programming (GP) is one of the most widely used paradigms of evolutionary computation due to its ability to automatically synthesize computer programs and mathematical expressions. However, because GP uses a variable-length representation, the individuals within the evolving population tend to grow rapidly without a corresponding return in fitness improvement, a phenomenon known as bloat. In this paper, we present a simple bloat control strategy for standard tree-based GP that achieves a one-order-of-magnitude reduction in bloat compared with standard GP on benchmark tests, and practically eliminates bloat on two real-world problems. Our proposal is to substitute standard subtree crossover with the one-point crossover (OPX) developed by Poli and Langdon (Second online world conference on soft computing in engineering design and manufacturing, Springer, Berlin (1997)), while keeping all other GP aspects standard, particularly subtree mutation. OPX was proposed for theoretical purposes related to GP schema theorems; however, since it curtails exploration during the search, it has never achieved widespread use. Our results, on the other hand, show that OPX can indeed perform an effective search if it is coupled with subtree mutation, thus combining the bloat control capabilities of OPX with the exploration provided by standard mutation.
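To make the operator concrete, here is a minimal sketch of one-point crossover on tuple-encoded expression trees. The encoding and helper names are our own: everything beyond the shared common-region idea simplifies Poli and Langdon's formulation.

```python
# One-point crossover (OPX) sketch: internal nodes are (op, child, child),
# leaves are strings. OPX picks one crossover point inside the region where
# both parents share the same shape, then swaps the subtrees rooted there.
import random

def common_paths(a, b, path=()):
    """Paths to nodes lying in the common region of trees a and b."""
    paths = [path]
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for i in range(1, len(a)):  # index 0 is the operator symbol
            paths += common_paths(a[i], b[i], path + (i,))
    return paths

def get(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def replace(tree, path, sub):
    if not path:
        return sub
    i = path[0]
    return tree[:i] + (replace(tree[i], path[1:], sub),) + tree[i + 1:]

def opx(parent1, parent2, rng=random):
    point = rng.choice(common_paths(parent1, parent2))
    return (replace(parent1, point, get(parent2, point)),
            replace(parent2, point, get(parent1, point)))

t1 = ("+", ("*", "x", "y"), "1")
t2 = ("-", "x", ("+", "y", "2"))
print(opx(t1, t2))
```

Because the crossover point is confined to the common region, offspring cannot grow beyond the deeper parent, which is what curbs bloat.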
33.

Context

It is important for Product Line Architectures (PLAs) to remain stable while accommodating evolutionary changes in stakeholders' requirements. Otherwise, architectural modifications may have to be propagated to the products of a product line, thereby increasing maintenance costs. A key challenge is that several features are likely to exert a crosscutting impact on the PLA decomposition, making it more difficult to preserve stability in the presence of change. Some researchers claim that the use of aspects can ameliorate instabilities caused by changes in crosscutting features. Hence, it is important to understand which aspect-oriented (AO) and non-aspect-oriented techniques better support PLA stability through evolution.

Objective

This paper evaluates the positive and negative change impact of component- and aspect-based design on PLAs. The objective of the evaluation is to assess how aspects and components promote PLA stability in the presence of various types of evolutionary change. To support a broader analysis, we also evaluate the PLA stability of a hybrid approach (i.e., the combined use of aspects and components) against the isolated use of component-based, OO, and AO approaches.

Method

A quantitative and qualitative analysis of PLA stability involving four functionally equivalent implementations of a PLA: (i) an OO implementation, (ii) an AO implementation, (iii) a component-based implementation, and (iv) a hybrid implementation employing both components and aspects. Each implementation has eight releases. We used conventional metrics suites for change impact and modularity to measure the architectural stability of the four implementations.

Results

The combination of aspects and components promotes superior PLA resilience compared with the other PLAs in most circumstances.

Conclusion

It is concluded that the combination of aspects and components supports the design of highly cohesive and loosely coupled PLAs. It also improves modularity by untangling feature implementations.
34.
This paper presents a systematic approach for reducing conservativeness in the stability analysis and control design of Takagi-Sugeno (TS) systems. The approach is based on the idea of multiple Lyapunov functions together with simple techniques for introducing slack matrices. Unlike some previous approaches based on multiple Lyapunov functions, both the stability and the stabilization conditions are written as linear matrix inequality (LMI) problems. The proposed approach reduces the number of inequalities and grants extra degrees of freedom to the LMI problems. Numerical examples illustrate the effectiveness of the method.
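As a simplified illustration of LMI-based stability analysis, the sketch below checks feasibility of a *common* quadratic Lyapunov function V(x) = x'Px for two local models, using cvxpy with the SCS solver (both assumed available). The local matrices are illustrative, and this basic test is more conservative than the paper's multiple-Lyapunov-function conditions with slack matrices.

```python
# Common quadratic Lyapunov LMI: find P > 0 with Ai'P + PAi < 0 for all i.
import numpy as np
import cvxpy as cp

A1 = np.array([[-2.0, 1.0], [0.0, -1.0]])   # stand-in local linear models
A2 = np.array([[-1.5, 0.5], [-0.2, -2.0]])
n, eps = 2, 1e-6

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n)]         # P positive definite
for A in (A1, A2):
    constraints.append(A.T @ P + P @ A << -eps * np.eye(n))

cp.Problem(cp.Minimize(0), constraints).solve(solver=cp.SCS)
print("feasible -> stable with common V(x)=x'Px" if P.value is not None
      else "LMI infeasible for a common quadratic Lyapunov function")
```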
35.
The quality of shadow mapping is traditionally limited by texture resolution. We present a novel lossless compression scheme for high-resolution shadow maps based on precomputed multiresolution hierarchies. Traditional multiresolution trees can compactly represent homogeneous regions of shadow maps at coarser levels, but require many nodes for fine details. By conservatively adapting the depth map, we can significantly reduce the tree complexity. The proposed method offers high compression rates, avoids quantization errors, exploits coherence along all data dimensions, and is well suited for GPU architectures. Our approach also applies to coherent shadow maps, enabling several applications, including high-quality soft shadows and dynamic lights moving along fixed trajectories.
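A minimal sketch of the multiresolution idea this builds on: a quadtree over a square depth tile collapses homogeneous regions into single leaves, so flat areas cost one node while detail keeps full resolution. The paper's conservative depth adaptation and GPU-friendly layout are beyond this illustration.

```python
# Quadtree over a 2^k x 2^k depth tile; homogeneous regions become leaves.
import numpy as np

def build(tile):
    """Recursively collapse a square depth tile into a quadtree."""
    if tile.min() == tile.max():                      # homogeneous -> one leaf
        return float(tile.flat[0])
    h = tile.shape[0] // 2
    return [build(tile[:h, :h]), build(tile[:h, h:]),
            build(tile[h:, :h]), build(tile[h:, h:])]

def count(node):
    return 1 if not isinstance(node, list) else 1 + sum(count(c) for c in node)

depth = np.ones((64, 64)); depth[20:28, 40:52] = 0.5  # mostly flat, one occluder
tree = build(depth)
print(f"{count(tree)} tree nodes vs {depth.size} raw texels")
```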
36.
Feature annotations (e.g., code fragments guarded by #ifdef C-preprocessor directives) control code extensions related to features. Feature annotations have long been considered undesirable: when maintaining features that control many annotations, there is a high risk of ripple effects, and excessive use of feature annotations leads to code clutter, hinders program comprehension, and hardens maintenance. To prevent such problems, developers should monitor the use of feature annotations, for example by setting acceptable thresholds. Interestingly, little is known about how to extract thresholds in practice and which values are representative for feature-related metrics. To address this issue, we analyze the statistical distribution of three feature-related metrics collected from a corpus of 20 well-known and long-lived C-preprocessor-based systems from different domains: the scattering degree of feature constants, the tangling degree of feature expressions, and the nesting depth of preprocessor annotations. Our findings show that feature scattering is highly skewed; in 14 systems (70%), the scattering distributions match a power law, making averages and standard deviations unreliable as limits. Tangling and nesting values, in contrast, tend to follow a uniform distribution; although outliers exist, they have little impact on the mean, suggesting that central statistical measures are reliable thresholds for tangling and nesting. Following these findings, we propose thresholds derived from our benchmark data as a basis for further investigation.
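The practical consequence, illustrated below with synthetic stand-in data rather than the paper's corpus measurements: for a heavy-tailed metric such as feature scattering, a mean-plus-k-sigma threshold is distorted by the tail, so percentiles of the observed distribution are the safer choice.

```python
# Threshold extraction on a heavy-tailed metric: mean+2*std vs. percentiles.
import numpy as np

rng = np.random.default_rng(1)
scattering = rng.pareto(a=1.5, size=2000) + 1   # synthetic power-law-like data

mean, std = scattering.mean(), scattering.std()
print(f"mean+2*std threshold: {mean + 2 * std:.1f}  (distorted by the tail)")
for q in (70, 80, 90):
    print(f"{q}th percentile threshold: {np.percentile(scattering, q):.1f}")
```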
37.
Key distribution in Wireless Sensor Networks (WSNs) is challenging. Symmetric cryptosystems can perform it efficiently, but they often do not provide a perfect trade-off between resilience and storage. Further, even though conventional public key and elliptic curve cryptosystems are computationally feasible on sensor nodes, protocols based on them are not, as they require the exchange and storage of large keys and certificates, which is expensive. Using Pairing-Based Cryptography (PBC) protocols, parties can agree on keys without any interaction. In this work, we (i) show how security in WSNs can be bootstrapped using an authenticated identity-based non-interactive protocol and (ii) present TinyPBC, to our knowledge the most efficient implementation of PBC primitives for the 8-, 16-, and 32-bit processors commonly found in sensor nodes. TinyPBC computes pairings, the most expensive PBC primitive, in 1.90 s on the ATmega128L, 1.27 s on the MSP430, and 0.14 s on the PXA27x.
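The non-interactive agreement that identity-based schemes of this kind build on is the Sakai-Ohgishi-Kasahara construction. The sketch below is a deliberately *insecure toy*: a real implementation uses a symmetric (Type-1) pairing over an elliptic curve, whereas here the pairing is faked with modular exponentiation purely so the protocol algebra is runnable; all parameters are illustrative.

```python
# Toy SOK-style non-interactive key agreement. NOT secure: the "pairing"
# is simulated as e(aP, bP) = g^(a*b) mod p over integers.
import hashlib

p = 2**127 - 1                             # a Mersenne prime (toy field size)
q, g = p - 1, 3                            # exponent group order, toy generator

def hash_to_group(identity):               # stand-in for hashing to a curve point
    return int.from_bytes(hashlib.sha256(identity.encode()).digest(), "big") % q

def pairing(a, b):                         # toy symmetric bilinear map
    return pow(g, (a * b) % q, p)

s = 123456789                              # trusted authority's master secret
d_alice = (s * hash_to_group("alice")) % q # private keys issued offline by the TA
d_bob   = (s * hash_to_group("bob")) % q

# Each node derives the key locally -- no messages are exchanged:
# e(s*H(A), H(B)) == g^(s*H(A)*H(B)) == e(s*H(B), H(A)).
k_alice = pairing(d_alice, hash_to_group("bob"))
k_bob   = pairing(d_bob, hash_to_group("alice"))
assert k_alice == k_bob
print("shared key established without interaction")
```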
38.
The novel concept of equivalent-state randomly oriented flaws, developed from the generalized fracture toughness theory [1], is presented. Based on this concept, planar defects located in multiaxial stress field regions, characterized by combinations of mode I, II, and III stress intensity factors, are described by a mode I equivalent-state stress intensity factor $\bar{K}_1$ of identical function. Accordingly, the complex-mode fracture criterion is exactly replaced by the conventional mode I criterion $\bar{K}_1 \geq K_{1C}$. It is demonstrated that this criterion is mathematically equivalent to other more complex generalized fracture criteria [2,4,5], i.e., it predicts the same critical conditions. Current approximate procedures applied to crack-like defects detected in structural components, based on reorienting or orthogonally projecting the defect onto a plane normal to the maximum principal tensile stress, are discussed and applied to two simple structural applications. When the results are compared with those from the proposed equivalent-state flaw method, it is concluded that, to a large extent, these procedures are inconsistent and generate significant errors that may lead to incorrect decisions about the remaining service life of the structure. The equivalent-state flaw concept is also used to establish the equivalent-state mode I threshold value of $\bar{K}_1$ corresponding to complex-stress-state fatigue loadings.


Operated for the U.S. Department of Energy, Contract No. DE-AC12-76-N0052.
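One concrete way to collapse a mixed-mode state into a single mode I quantity is sketched below, using the energy-release-rate-based combination under plane strain. This specific formula is a common textbook choice adopted here for illustration only; it is not necessarily the generalized criterion of [1].

```python
# Mode I equivalent SIF from (K_I, K_II, K_III) via energy-release-rate
# equivalence under plane strain: K_eq^2 = K_I^2 + K_II^2 + K_III^2/(1-nu).
import math

def k_eq_mode1(k1, k2, k3, nu=0.3):
    """Mode I equivalent stress intensity factor (plane strain)."""
    return math.sqrt(k1**2 + k2**2 + k3**2 / (1.0 - nu))

# Illustrative values in MPa*sqrt(m); compare against the mode I toughness K_1C.
k_eq, K_1C = k_eq_mode1(18.0, 6.0, 4.0), 30.0
print(f"K_eq = {k_eq:.1f} -> {'critical' if k_eq >= K_1C else 'below K_1C'}")
```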
39.
The adipose fin is a small, unpaired fin usually located medially between the dorsal and caudal fins. Its taxonomic occurrence is very restricted; thus, it represents an important trait for distinguishing taxa. As it plays no known vital physiological role and is easily removed, it is commonly used in mark-and-recapture studies. The present study characterizes the adipose fin of Prochilodus lineatus, which is poorly explored in the literature. The adipose fin consists basically of a loose connective tissue core covered by a stratified epithelium supported by collagen fibers. In the epithelium, pigmented cells and alarm substance cells are found. Despite the name, neither adipocytes nor lipid droplets are observed in the structure of the fin.
40.
We propose a general framework to incorporate first-order logic (FOL) clauses, regarded as an abstract and partial representation of the environment, into kernel machines that learn within a semi-supervised scheme. We rely on a multi-task learning scheme in which each task is associated with a unary predicate defined on the feature space, while higher-level abstract representations consist of FOL clauses made of those predicates. We reuse the kernel machine mathematical apparatus to solve the problem as primal optimization of a function composed of the loss on the supervised examples, the regularization term, and a penalty term that enforces real-valued constraints derived from the predicates. Unlike classic kernel machines, however, depending on the logic clauses, the overall function to be optimized is no longer convex. An important contribution is to show that while tackling the optimization with classic numerical schemes is likely to be hopeless, a stage-based learning scheme, in which we first learn from the supervised examples until convergence is reached and then continue by enforcing the logic clauses, is a viable way to attack the problem. Promising experimental results are given on artificial learning tasks and on the automatic tagging of bibtex entries, emphasizing the comparison with plain kernel machines.
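A minimal sketch of the stage-based idea, using linear models and plain gradient descent instead of kernel machines: stage 1 fits two predicate tasks on labelled data only; stage 2 keeps training while penalizing unlabelled points that violate an example clause A(x) => B(x). Data, step sizes, and the clause are illustrative stand-ins, not the paper's setup.

```python
# Stage-based learning: supervised fit first, then add a logic penalty.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # labelled inputs
Xu = rng.normal(size=(400, 3))                 # unlabelled inputs
yA = (X @ np.array([1., -1., 0.]) > 0).astype(float)
yB = np.maximum(yA, (X @ np.array([0., 1., 1.]) > 0).astype(float))  # A => B holds

wA, wB = np.zeros(3), np.zeros(3)

for stage in (1, 2):                           # stage 1: supervised loss only
    for _ in range(500):
        wA -= 0.05 * X.T @ (X @ wA - (2 * yA - 1)) / len(X)
        wB -= 0.05 * X.T @ (X @ wB - (2 * yB - 1)) / len(X)
        if stage == 2:                         # stage 2: also force A(x) => B(x)
            v = (Xu @ wA) > (Xu @ wB)          # unlabelled points violating the clause
            if v.any():
                g = Xu[v].mean(axis=0)
                wA -= 0.05 * g                 # lower fA where the clause is violated
                wB += 0.05 * g                 # raise fB where the clause is violated

viol = np.maximum(0.0, Xu @ wA - Xu @ wB).mean()
print(f"mean clause violation after stage 2: {viol:.4f}")
```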