Search results: 2,596 articles found.
1.
We introduce a new probabilistic approach to dealing with uncertainty, based on the observation that probability theory does not require that every event be assigned a probability. For a nonmeasurable event (one to which we do not assign a probability), we can talk about only the inner measure and outer measure of the event. In addition to removing the requirement that every event be assigned a probability, our approach circumvents other criticisms of probability-based approaches to uncertainty. For example, the measure of belief in an event turns out to be represented by an interval (defined by the inner and outer measures), rather than by a single number. Further, this approach allows us to assign a belief (inner measure) to an event E without committing to a belief about its negation ¬E (since the inner measure of an event plus the inner measure of its negation is not necessarily one). Interestingly enough, inner measures induced by probability measures turn out to correspond in a precise sense to Dempster-Shafer belief functions. Hence, in addition to providing promising new conceptual tools for dealing with uncertainty, our approach shows that a key part of the important Dempster-Shafer theory of evidence is firmly rooted in classical probability theory.
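As an illustration of the central idea (not code from the paper), the following Python sketch computes the inner and outer measures induced by a probability measure that is defined only on the algebra generated by a finite partition; the blocks, the event, and the numbers are all made up for the example.

```python
# Inner/outer measures induced by a probability measure defined only on the
# algebra generated by a partition of a finite sample space.
# All sets and numbers here are illustrative, not taken from the paper.

# Partition blocks (the measurable "atoms") and their probabilities.
blocks = {
    frozenset({1, 2}): 0.5,
    frozenset({3}):    0.3,
    frozenset({4, 5}): 0.2,
}

def inner_measure(event):
    """Sum of probabilities of blocks entirely contained in the event."""
    e = frozenset(event)
    return sum(p for b, p in blocks.items() if b <= e)

def outer_measure(event):
    """Sum of probabilities of blocks that intersect the event."""
    e = frozenset(event)
    return sum(p for b, p in blocks.items() if b & e)

E = {1, 3}            # a nonmeasurable event: it splits the block {1, 2}
not_E = {2, 4, 5}     # its complement in the sample space {1, ..., 5}

print(inner_measure(E), outer_measure(E))        # 0.3, 0.8 -> belief interval [0.3, 0.8]
print(inner_measure(E) + inner_measure(not_E))   # 0.3 + 0.2 = 0.5, need not sum to 1
```

The printed interval [0.3, 0.8] plays the role of the belief/plausibility pair described in the abstract, and the last line shows that the inner measures of an event and its negation need not sum to one.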
2.
Evolutionary design can produce fast and efficient implementations of digital circuits. It is shown in this paper how evolved circuits, optimized for latency and area, can increase the throughput of a manually designed classifier of application protocols. The classifier is intended for high-speed networks operating at 100 Gbps. Because a very low latency is the main design constraint, the classifier is constructed as a combinational circuit in a field programmable gate array (FPGA). The classification is performed using the first packet carrying the application payload. The improvements in latency (and area) obtained by Cartesian genetic programming are validated using a professional FPGA design tool. The quality of classification is evaluated by means of real network data. All results are compared with commonly used classifiers based on regular expressions describing application protocols.
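The abstract does not show the circuit encoding itself; the sketch below illustrates, under assumed conventions, how a Cartesian genetic programming genotype for a small combinational circuit can be decoded and evaluated in Python. The gate set, genotype layout, and the half-adder example are illustrative only, not the encoding used in the paper.

```python
# Minimal Cartesian genetic programming (CGP) phenotype evaluation.
# Genotype layout and gate set are assumptions for illustration.

FUNCS = {0: lambda a, b: a & b,    # AND
         1: lambda a, b: a | b,    # OR
         2: lambda a, b: a ^ b,    # XOR
         3: lambda a, b: 1 - a}    # NOT (second input ignored)

def evaluate(genotype, outputs, inputs):
    """Evaluate a CGP circuit.

    genotype: list of (func_id, in1, in2) triples, one per node; in1/in2
              index either a primary input (< len(inputs)) or an earlier node.
    outputs:  indices (in the same numbering) of the circuit outputs.
    inputs:   tuple of 0/1 primary input values.
    """
    values = list(inputs)
    for func_id, a, b in genotype:
        values.append(FUNCS[func_id](values[a], values[b]))
    return [values[o] for o in outputs]

# Two inputs, two nodes: node 2 = x0 XOR x1, node 3 = x0 AND x1 (a half adder).
genotype = [(2, 0, 1), (0, 0, 1)]
outputs = [2, 3]
for x0 in (0, 1):
    for x1 in (0, 1):
        print((x0, x1), evaluate(genotype, outputs, (x0, x1)))
```

An evolutionary loop would mutate the integer triples and keep genotypes whose decoded circuit improves the latency/area objective, which is the role Cartesian genetic programming plays in the paper.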
3.
The subject of this paper is the investigation of finite-size effects and the determination of critical parameters for a class of truncated Lennard-Jones potentials. Despite significant recent progress in our ability to model phase equilibria in multicomponent mixtures from direct molecular simulations, the accurate determination of critical parameters remains a difficult problem. Gibbs ensemble Monte Carlo simulations with systems of controlled linear system size are used to obtain the phase behavior in the near-critical region for two- and three-dimensional Lennard-Jones fluids with reduced cutoff radii of 3, 3.5, and 5. For the two-dimensional systems, crossover of the effective exponent for the width of the coexistence curve from mean-field (β = 1/2) in the immediate vicinity of the critical point to Ising-like (β = 1/8) farther away is observed. Critical parameters determined by fitting the data that follow Ising-like behavior are in good agreement with literature values obtained with finite-size scaling methods. For the three-dimensional systems, no crossover to mean-field-type behavior was apparent. Extrapolated results for the critical parameters are consistent with literature estimates for similar fluids. For both two- and three-dimensional fluids, system size effects on the coexistence curves away from the critical point are small, normally within simulation statistical uncertainties. Invited paper presented at the Twelfth Symposium on Thermophysical Properties, June 19-24, 1994, Boulder, Colorado, U.S.A.
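For readers unfamiliar with the model, a short Python sketch of a truncated-and-shifted Lennard-Jones pair potential in reduced units is given below. The cutoff radii match those named in the abstract, but the truncation convention (shifting the potential to zero at the cutoff) is an assumption, since the paper studies a class of truncated potentials whose exact form is not spelled out here.

```python
import numpy as np

def lj(r):
    """Full Lennard-Jones potential in reduced units (epsilon = sigma = 1)."""
    inv_r6 = 1.0 / r**6
    return 4.0 * (inv_r6**2 - inv_r6)

def lj_truncated_shifted(r, r_cut=3.0):
    """Truncated-and-shifted LJ potential: zero beyond r_cut and shifted so
    the potential is continuous at the cutoff (one common convention; the
    exact truncation used in the paper may differ)."""
    r = np.asarray(r, dtype=float)
    return np.where(r < r_cut, lj(r) - lj(r_cut), 0.0)

for r_cut in (3.0, 3.5, 5.0):          # cutoff radii studied in the abstract
    print(r_cut, lj_truncated_shifted([1.0, 2.5, 6.0], r_cut))
```

Because the truncation removes part of the attractive tail, the critical temperature and density shift with the cutoff, which is why the abstract reports results separately for each reduced cutoff radius.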
4.
Intrusion detection can be modeled as a data-stream classification problem. Traditional data-stream classification algorithms require a large number of labeled training samples, which is expensive and limits their practicality. In PU (positive-unlabeled) learning, a classifier can be built by labeling only a subset of the positive examples. This paper therefore proposes an intrusion detection method based on a dynamic ensemble of PU-learning data-stream classifiers, which builds a data-stream classifier from only a small number of manually labeled positive samples. Experiments on synthetic and real data sets show that the method achieves good classification performance, outperforms three existing PU-learning classification methods on skewed data streams, and attains a high intrusion detection rate.
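The abstract does not spell out the PU-learning step. As a hedged illustration only, the sketch below implements a generic two-step PU scheme (treat unlabeled data as tentative negatives, extract reliable negatives, retrain); it is a stand-in for the idea of learning from positives and unlabeled data, not the dynamic ensemble method actually proposed, and the data and thresholds are synthetic.

```python
# Generic two-step positive-unlabeled (PU) learning sketch.
# Illustrative stand-in, not the dynamic ensemble method of the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: positives (e.g., attacks) around +2, negatives around -2.
X_pos = rng.normal(+2.0, 1.0, size=(50, 5))
X_unl = np.vstack([rng.normal(+2.0, 1.0, size=(30, 5)),    # hidden positives
                   rng.normal(-2.0, 1.0, size=(220, 5))])  # hidden negatives

# Step 1: labeled positives vs. all unlabeled samples treated as negatives.
X1 = np.vstack([X_pos, X_unl])
y1 = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_unl))])
clf1 = LogisticRegression(max_iter=1000).fit(X1, y1)

# Reliable negatives = unlabeled samples scored lowest by the first model.
scores = clf1.predict_proba(X_unl)[:, 1]
reliable_neg = X_unl[scores < np.quantile(scores, 0.5)]

# Step 2: retrain on labeled positives vs. reliable negatives only.
X2 = np.vstack([X_pos, reliable_neg])
y2 = np.concatenate([np.ones(len(X_pos)), np.zeros(len(reliable_neg))])
clf2 = LogisticRegression(max_iter=1000).fit(X2, y2)

print(clf2.predict(rng.normal(+2.0, 1.0, size=(5, 5))))    # should mostly be 1
```

In the streaming setting of the paper, such PU-trained base classifiers would additionally be built and replaced over time and combined in an ensemble, which this static sketch does not attempt to reproduce.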
5.
Key techniques of classical molecular dynamics simulation   (Cited 6 times; 0 self-citations, 6 by others)
This paper reviews the basic principles, development, and main applications of molecular dynamics simulation; introduces the development of interatomic potential functions and the determination of potential parameters; and presents the finite-difference integration algorithms used in molecular dynamics, the choice of initial and boundary conditions, equilibrium ensembles and their control, and the extraction of quantities of interest together with the main simulation workflow. Finally, directions for further research on the molecular dynamics method itself are pointed out.
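To make the finite-difference integration step concrete, here is a minimal velocity-Verlet sketch for a small Lennard-Jones system in reduced units with periodic boundary conditions; it is a generic textbook illustration of the class of algorithms the review covers, not code from the review, and the lattice, time step, and system size are chosen arbitrarily.

```python
# Minimal velocity-Verlet molecular dynamics for a Lennard-Jones gas in
# reduced units with periodic boundary conditions (textbook illustration).
import numpy as np

def forces(pos, box):
    """Pairwise LJ forces with minimum-image periodic boundary conditions."""
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n - 1):
        d = pos[i] - pos[i + 1:]
        d -= box * np.round(d / box)            # minimum image convention
        r2 = np.sum(d * d, axis=1)
        inv_r6 = 1.0 / r2**3
        fmag = (48.0 * inv_r6**2 - 24.0 * inv_r6) / r2
        fij = fmag[:, None] * d
        f[i] += fij.sum(axis=0)
        f[i + 1:] -= fij
    return f

def velocity_verlet(pos, vel, box, dt=0.005, steps=100):
    f = forces(pos, box)
    for _ in range(steps):
        vel += 0.5 * dt * f                     # half kick
        pos = (pos + dt * vel) % box            # drift and wrap into the box
        f = forces(pos, box)
        vel += 0.5 * dt * f                     # second half kick
    return pos, vel

box = 6.0
grid = np.arange(3) * 2.0 + 1.0                 # 3 x 3 x 3 cubic lattice
pos = np.array([[x, y, z] for x in grid for y in grid for z in grid])
vel = np.random.default_rng(1).normal(0.0, 1.0, size=(27, 3))
pos, vel = velocity_verlet(pos, vel, box)
print("kinetic energy per particle:", 0.5 * np.mean(np.sum(vel**2, axis=1)))
```

A production code would add a neighbor list, a thermostat or barostat for ensemble control, and accumulators for the quantities of interest, which are exactly the topics the review enumerates.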
6.
The results of experiments with a novel criterion for absolute non-parametric feature selection are reported. The basic idea of the new technique involves the use of computer graphics and the human pattern recognition ability to interactively choose a number of features, this number not being necessarily determined in advance, from a larger set of measurements. The triangulation method, recently proposed in the cluster analysis literature for mapping points from l-space to 2-space, is used to yield a simple and efficient algorithm for feature selection by interactive clustering. It is shown that a subset of features can thus be chosen which allows a significant reduction in storage and time while still keeping the probability of error in classification within reasonable bounds.
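As a rough illustration of the mapping step, the sketch below places each high-dimensional point in the plane so that its distances to two already-placed reference points are preserved exactly (a basic triangulation placement). The choice of reference points, tie handling, and the interactive-graphics component of the paper are not reproduced; this is an assumed, simplified variant.

```python
# Triangulation mapping of high-dimensional points into the plane: each new
# point is placed so that its distances to two already-placed reference
# points are preserved exactly. Simplified, illustrative sketch only.
import numpy as np

def triangulate(X):
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)   # pairwise distances
    n = len(X)
    Y = np.zeros((n, 2))
    Y[1] = [D[0, 1], 0.0]                       # first two points fixed on the x-axis
    for k in range(2, n):
        p, q = Y[0], Y[1]                       # reference points (simplest choice)
        d1, d2, d = D[k, 0], D[k, 1], np.linalg.norm(q - p)
        a = (d1**2 - d2**2 + d**2) / (2 * d)    # offset from p along the p->q axis
        h2 = max(d1**2 - a**2, 0.0)             # clamp when the circles do not meet
        base = p + a * (q - p) / d
        perp = np.array([-(q - p)[1], (q - p)[0]]) / d
        Y[k] = base + np.sqrt(h2) * perp        # one of the two mirror placements
    return Y

X = np.random.default_rng(2).normal(size=(6, 5))   # six points in a 5-D feature space
print(triangulate(X))
```

In the interactive procedure described by the abstract, an analyst would inspect such 2-D maps for different candidate feature subsets and keep the subsets whose maps preserve the cluster structure.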
7.
倪渊  林健 《工业工程》2012,15(2):66-70
To further improve the generalization ability of SVM ensembles, an SVM ensemble method based on the Choquet fuzzy integral is proposed. It takes the importance of each component SVM's output into account, avoiding the problem in existing SVM ensemble methods of ignoring secondary information. The method is applied in a simulation experiment using the regional economic contribution of universities as an example; the results show that the Choquet fuzzy-integral SVM ensemble is more accurate than both the Sugeno fuzzy-integral SVM ensemble and the voting-based SVM ensemble. The method is feasible, effective, and has practical value for wider application.
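For context, the discrete Choquet integral used to fuse the component SVM outputs can be written in a few lines. The sketch below uses a made-up fuzzy measure over three hypothetical classifiers; it does not reproduce the paper's procedure for identifying the measure from classifier importance.

```python
# Discrete Choquet integral of classifier outputs with respect to a fuzzy
# measure. The measure values below are invented for the example.

def choquet(scores, mu):
    """scores: dict classifier -> output in [0, 1];
    mu: dict frozenset(classifiers) -> measure value, monotone, mu(all) = 1."""
    items = sorted(scores.items(), key=lambda kv: kv[1])    # ascending outputs
    names = [k for k, _ in items]
    total, prev = 0.0, 0.0
    for idx, (_, value) in enumerate(items):
        coalition = frozenset(names[idx:])                  # classifiers with output >= value
        total += (value - prev) * mu[coalition]
        prev = value
    return total

scores = {"svm1": 0.9, "svm2": 0.6, "svm3": 0.3}
mu = {frozenset(): 0.0,
      frozenset({"svm1"}): 0.5, frozenset({"svm2"}): 0.3, frozenset({"svm3"}): 0.2,
      frozenset({"svm1", "svm2"}): 0.8, frozenset({"svm1", "svm3"}): 0.6,
      frozenset({"svm2", "svm3"}): 0.45,
      frozenset({"svm1", "svm2", "svm3"}): 1.0}
print(choquet(scores, mu))   # aggregated ensemble score, here 0.69
```

Unlike simple voting, the measure values on coalitions let the aggregation reward or discount particular combinations of classifiers, which is the "secondary information" the abstract refers to.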
8.
Fine classification tests on cement were carried out with a newly developed cyclone-type fine classifier. The results show that when the feed contains 52.6% of particles below 10 μm, the fine fraction after classification contains 88.9% of particles below 10 μm, with a mean particle size of 4.06 μm; the cut size of classification is 11.8 μm and the classification sharpness is d75/d25 < 1.5.
9.
    
Ensemble Methods are proposed as a means to extend Adaptive One-Factor-at-a-Time (aOFAT) experimentation. The proposed method executes multiple aOFAT experiments on the same system with minor differences in experimental setup, such as 'starting points'. Experimental conclusions are arrived at by aggregating the multiple, individual aOFATs. A comparison is made to test the performance of the new method with that of a traditional form of experimentation, namely a single fractional factorial design which is equally resource intensive. The comparisons between the two experimental algorithms are conducted using a hierarchical probability meta-model and an illustrative case study. The case is a wet clutch system with the goal of minimizing drag torque. In this study, the proposed procedure was superior in performance to using fractional factorial arrays consistently across various experimental settings. At best, the proposed algorithm provides an expected value of improvement that is 15% higher than the traditional approach; at worst, the two methods are equally effective, and on average the improvement is about 10% higher with the new method. These findings suggest that running multiple adaptive experiments in parallel can be an effective way to make improvements in quality and performance of engineering systems and also provides a reasonable aggregation procedure by which to bring together the results of the many separate experiments. Copyright © 2011 John Wiley & Sons, Ltd.
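As a hedged illustration of the building block being ensembled, the following sketch runs adaptive One-Factor-at-a-Time passes over binary factors of a black-box response from several random starting points and aggregates them by a per-factor majority vote. The response function, the starting points, and the voting rule are illustrative assumptions, not the wet-clutch system or the aggregation procedure of the paper.

```python
# Adaptive One-Factor-at-a-Time (aOFAT) on binary factors, plus a simple
# ensemble of runs from different random starting points aggregated by
# per-factor majority vote. Response function and voting rule are assumptions.
import random

def response(x):
    """Made-up noisy response to be maximized over 5 binary factors."""
    clean = 3 * x[0] + 2 * x[1] - x[2] + x[3] * x[1] + 0.5 * x[4]
    return clean + random.gauss(0.0, 0.5)

def aofat(start, n_factors=5):
    best_x = list(start)
    best_y = response(best_x)
    for i in random.sample(range(n_factors), n_factors):   # factor order varies per run
        trial = list(best_x)
        trial[i] = 1 - trial[i]                            # toggle one factor
        y = response(trial)
        if y > best_y:                                     # keep the change only if it helps
            best_x, best_y = trial, y
    return best_x

runs = [aofat([random.randint(0, 1) for _ in range(5)]) for _ in range(7)]
consensus = [int(sum(x[i] for x in runs) > len(runs) / 2) for i in range(5)]
print("per-run settings:", runs)
print("majority-vote setting:", consensus)
```

Each run uses only n+1 observations per pass, which is what makes an ensemble of aOFATs comparable in cost to a single fractional factorial design of similar size.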
10.
杨菊  袁玉龙  于化龙 《计算机科学》2016,43(10):266-271
To address the low classification accuracy and poor generalization of existing extreme learning machine (ELM) ensemble learning algorithms, a selective ensemble learning algorithm for ELMs based on ant colony optimization is proposed. The algorithm first generates a large number of diverse ELM classifiers by randomly assigning the hidden-layer input weights and biases, then uses a binary ant colony optimization search to iteratively find the best combination of classifiers, and finally classifies test samples with that combination. The algorithm was tested on 12 benchmark data sets, obtaining the best result on 9 of them and the second-best result on the remaining 3. The algorithm significantly improves classification accuracy and generalization performance.
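As background for the base learners, here is a minimal extreme learning machine sketch: random hidden-layer input weights and biases, a sigmoid hidden layer, and least-squares output weights. The binary ant colony subset search of the paper is not reproduced; the pool below simply votes, and all sizes and data are synthetic.

```python
# Minimal extreme learning machine (ELM): random hidden-layer input weights
# and biases, least-squares output weights. Data and pool are synthetic; the
# binary ant colony selection of the paper is not reproduced here.
import numpy as np

rng = np.random.default_rng(3)

def train_elm(X, y_onehot, n_hidden=40):
    W = rng.normal(size=(X.shape[1], n_hidden))        # random input weights
    b = rng.normal(size=n_hidden)                      # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))             # sigmoid hidden layer
    beta = np.linalg.pinv(H) @ y_onehot                # least-squares output weights
    return W, b, beta

def predict_elm(model, X):
    W, b, beta = model
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.argmax(H @ beta, axis=1)

# Two-class synthetic data.
X = np.vstack([rng.normal(-1, 1, (100, 4)), rng.normal(+1, 1, (100, 4))])
y = np.repeat([0, 1], 100)
y_onehot = np.eye(2)[y]

# A small pool of diverse ELMs (each gets different random hidden weights);
# here they simply vote, standing in for the subset a selective ensemble picks.
pool = [train_elm(X, y_onehot) for _ in range(10)]
votes = np.stack([predict_elm(m, X) for m in pool])
ensemble_pred = (votes.mean(axis=0) > 0.5).astype(int)
print("training accuracy:", (ensemble_pred == y).mean())
```

A selective ensemble, as in the paper, would replace the fixed pool above with the subset of ELMs found by the binary ant colony search, keeping only members that improve the combined accuracy.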