1.
As wireless communication devices proliferate on the modern battlefield, accurately acquiring information about individual emitters has become a research hotspot, and a difficult one. A sorting and identification technique is proposed for communication radios. Starting from the physical-layer characteristics of a radio, the technique applies K-means clustering to the fine-grained features of its radiated signal to sort the emitters; during sorting, the characteristic attribute values of each individual are extracted, and unknown signals are identified by correlating them with these attribute values. The technique requires no prior knowledge and no training computation, and experiments verify that it is feasible, efficient, and easy to implement in engineering practice.
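As a hedged illustration of the pipeline described above (not the authors' implementation), the following sketch clusters assumed fine-grained emitter features with K-means and identifies an unknown signal by correlating it with each cluster's mean feature template; the feature data and dimensions are invented for illustration.

```python
# Hedged sketch, not the authors' implementation: K-means sorting of assumed
# fine-grained emitter features, followed by correlation-based identification
# of an unknown signal against each cluster's mean feature template.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Assumed feature matrix: one row of physical-layer features per intercepted burst.
features = rng.normal(size=(300, 8))

# Unsupervised sorting: one cluster per presumed individual emitter.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)

# Characteristic attribute values of each individual: the cluster mean.
templates = np.vstack([features[kmeans.labels_ == k].mean(axis=0)
                       for k in range(kmeans.n_clusters)])

def identify(signal: np.ndarray) -> int:
    """Return the emitter index whose template correlates best with the signal."""
    corr = [np.corrcoef(signal, t)[0, 1] for t in templates]
    return int(np.argmax(corr))

print(identify(features[0]))
```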
2.
The International Society for the Study of Vascular Anomalies (ISSVA) provides a classification of vascular anomalies that enables specialists to classify diagnoses unambiguously. This classification is only available in PDF format, is not machine-readable, and does not provide unique identifiers that would allow structured registration. In this paper, we describe the process of transforming the ISSVA classification into an ontology. We also describe the structure of this ontology and two applications of it, using examples from the domain of rare disease research. We drew on the expertise of an ontology expert and a clinician during the development process, and we semi-automatically added mappings to relevant external ontologies using automated ontology matching systems combined with manual assessment by experts. The ISSVA ontology should contribute to making data for vascular anomaly research more Findable, Accessible, Interoperable, and Reusable (FAIR). The ontology is available at https://bioportal.bioontology.org/ontologies/ISSVA.
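As a small, hedged sketch of how the published OWL file could be consumed programmatically (not part of the paper), the snippet below loads an assumed local copy of the ontology with rdflib and lists its class labels; the file name and serialization format are assumptions.

```python
# Hedged sketch (not part of the paper): query an assumed local copy of the
# ISSVA ontology with rdflib and print class labels. The file name and the
# RDF/XML serialization are assumptions; the OWL file can be downloaded from
# the BioPortal page cited above.
from rdflib import Graph, RDF, RDFS, OWL

g = Graph()
g.parse("issva.owl", format="xml")  # assumed local download, assumed RDF/XML

for cls in g.subjects(RDF.type, OWL.Class):
    for label in g.objects(cls, RDFS.label):
        print(cls, label)
```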
3.
Manufacturing companies not only strive to deliver flawless products but also monitor product failures in the field to identify potential quality issues. When product failures occur, quality engineers must identify the root cause so that the affected products and processes can be improved. This root-cause analysis can be supported by feature selection methods that identify relevant product attributes, such as manufacturing dates associated with an increased number of product failures. In this paper, we present different feature selection methods and evaluate their ability to identify relevant product attributes in a root-cause analysis. First, we compile a list of feature selection methods. Then, we summarize the properties of product attributes in warranty case data and discuss the challenges these properties pose for machine learning algorithms. Next, we simulate datasets of warranty cases that emulate these properties. Finally, we compare the feature selection methods on these simulated datasets. The univariate filter information gain emerges as a suitable method for a wide range of applications. Because the comparison is based on simulated data, its results are more general than those of publications that focus on a single use case: the generic nature of the simulated datasets means the findings transfer to various root-cause analysis processes in different quality management applications and provide a guideline for readers who wish to explore machine learning methods for their own analysis of quality data.
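As a hedged sketch of the recommended univariate filter (approximated here with scikit-learn's mutual information estimator rather than the authors' implementation), the snippet ranks synthetic warranty-case attributes by their relevance to a simulated failure flag; the attribute names and data are assumptions.

```python
# Illustrative sketch, not the paper's code: rank warranty-case attributes by
# information gain, approximated with scikit-learn's mutual information.
import numpy as np
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(1)
n = 2000
# Assumed product attributes of simulated warranty cases.
cases = pd.DataFrame({
    "production_week": rng.integers(1, 53, n),
    "supplier_id": rng.integers(0, 5, n),
    "mileage_bin": rng.integers(0, 10, n),
})
# Simulated failure flag that depends mainly on production_week.
failure = (cases["production_week"].between(20, 24)
           & (rng.random(n) < 0.8)).astype(int)

scores = mutual_info_classif(cases, failure, discrete_features=True, random_state=0)
for name, score in sorted(zip(cases.columns, scores), key=lambda x: -x[1]):
    print(f"{name}: {score:.3f}")
```

In this toy setup the attribute that actually drives the simulated failures should receive the highest score, which is how the filter would point an engineer toward the relevant production period.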
4.
5.
Traditionally, in supervised machine learning, a significant part of the available data (usually 50%-80%) is used for training and the rest for validation. In many problems, however, the data are highly imbalanced across classes or do not cover the feasible data space well, which in turn creates problems in the validation and usage phases. In this paper, we propose a technique for synthesizing feasible and likely data samples to help balance the classes and to boost performance, both overall and in terms of the confusion matrix, especially for the minority classes. The idea, in a nutshell, is to synthesize data samples in the close vicinity of the actual data samples, specifically for the less represented (minority) classes. This also has implications for the so-called fairness of machine learning. The method is generic and can be applied with different base algorithms, for example, support vector machines, k-nearest neighbour classifiers, deep neural networks, rule-based classifiers, decision trees, and so forth. The results demonstrate (a) that significantly more balanced (and fair) classification results can be achieved and (b) that both the overall performance and the per-class performance measured by the confusion matrix can be boosted. In addition, the approach can be very valuable when the number of actually available labelled data points is small, which is itself one of the problems of contemporary machine learning.
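A minimal sketch of the general idea follows, not the authors' exact method: minority-class samples are oversampled by generating points in the close vicinity of existing samples before retraining an arbitrary base classifier. The data, noise scale, and the choice of an SVM are assumptions.

```python
# Hedged sketch of vicinity-based oversampling for the minority class,
# followed by retraining a base classifier (SVM chosen arbitrarily here).
import numpy as np
from sklearn.svm import SVC

def synthesize_near(X_min, n_new, scale=0.05, seed=0):
    """Generate n_new points by perturbing randomly chosen minority samples."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(X_min), n_new)
    noise = rng.normal(scale=scale * X_min.std(axis=0),
                       size=(n_new, X_min.shape[1]))
    return X_min[idx] + noise

rng = np.random.default_rng(0)
X_maj = rng.normal(0.0, 1.0, (500, 2))   # majority class (toy data)
X_min = rng.normal(2.0, 0.5, (25, 2))    # minority class (toy data)

# Balance the classes with synthetic minority samples.
X_syn = synthesize_near(X_min, n_new=len(X_maj) - len(X_min))
X = np.vstack([X_maj, X_min, X_syn])
y = np.concatenate([np.zeros(len(X_maj)), np.ones(len(X_min) + len(X_syn))])

clf = SVC().fit(X, y)   # any base algorithm could be plugged in here
print(clf.score(X, y))
```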
6.
7.
《云南化工》2020,(1):168-169
Common problems in geological exploration for reservoir development are analyzed. The factors affecting reservoir development are considered comprehensively from the perspectives of reservoir development classification, development procedures, and reserve calculation, providing a theoretical basis for practical reservoir development projects and raising the technical level of reservoir development.
8.
Although greedy algorithms are highly efficient, they often yield suboptimal solutions to the ensemble pruning problem because their exploration of the search space is largely limited. Another marked defect of almost all existing ensemble pruning algorithms, including greedy ones, is that they simply discard every classifier that fails the ensemble selection competition, wasting a considerable amount of useful resources and information. Motivated by these observations, this work proposes a greedy Reverse Reduce-Error (RRE) pruning algorithm that incorporates a subtraction operation. The RRE algorithm makes use of the defeated candidate networks: the Worst Single Model (WSM) is chosen, and its votes are subtracted from the votes cast by the components selected for the pruned ensemble, since in most cases the WSM is likely to be wrong in its estimates for the test samples. Unlike the classical Reduce-Error (RE) algorithm, the near-optimal solution is produced from the pruned error of all available sequential subensembles, and the backfitting step of RE is replaced with the selection of a WSM. The problem of ties can also be resolved more naturally with RRE. Finally, soft voting is employed when testing the RRE algorithm. The performance of the RE and RRE algorithms and of two baseline methods, one that selects the Best Single Model (BSM) of the initial ensemble and one that retains all member networks of the initial ensemble (ALL), is evaluated on seven benchmark classification tasks under different initial ensemble setups. The results of the empirical investigation show the superiority of RRE over the other three ensemble pruning algorithms.
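The following is a hedged illustration of only the vote-subtraction step that distinguishes RRE, not the full pruning algorithm: the soft votes of an assumed worst single model are subtracted from the summed soft votes of the selected members before the final argmax. All indices and probabilities are toy values.

```python
# Hedged illustration of the vote-subtraction idea only (not the full RRE
# algorithm): subtract the WSM's soft votes from the pruned ensemble's votes.
import numpy as np

rng = np.random.default_rng(0)
n_models, n_samples, n_classes = 6, 10, 3
# probs[m, i, c]: soft vote of model m for sample i and class c (toy values).
probs = rng.dirichlet(np.ones(n_classes), size=(n_models, n_samples))

selected = [0, 2, 3]   # members kept by the (greedy) pruning step (assumed)
wsm = 5                # index of the assumed worst single model

ensemble_votes = probs[selected].sum(axis=0) - probs[wsm]
pred = ensemble_votes.argmax(axis=1)
print(pred)
```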
9.
This paper presents an innovative solution for modelling distributed adaptive systems in biomedical environments. We present an original TCBR-HMM (Text Case-Based Reasoning-Hidden Markov Model) for biomedical text classification based on document content. The main goal is to propose a classifier that is more effective than current methods in this environment, where the model must be adapted to new documents in an iterative learning framework. To demonstrate its performance, we include a set of experiments performed on the OHSUMED corpus. Our classifier is compared with Naive Bayes and SVM techniques, which are commonly used in text classification tasks. The results suggest that the TCBR-HMM model is indeed more suitable for document classification: it is empirically and statistically comparable to the SVM classifier and outperforms it in terms of time efficiency.
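The TCBR-HMM model itself is specific to the paper; as a hedged sketch, the snippet below reproduces only the two baselines it is compared against (Naive Bayes and a linear SVM) on an invented toy corpus using scikit-learn.

```python
# Hedged sketch of the two baseline classifiers only (not TCBR-HMM),
# applied to a tiny invented biomedical corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

docs = ["myocardial infarction treated with aspirin",
        "renal failure and dialysis outcomes",
        "aspirin therapy after heart attack",
        "chronic kidney disease progression"]
labels = ["cardio", "renal", "cardio", "renal"]

for clf in (MultinomialNB(), LinearSVC()):
    model = make_pipeline(TfidfVectorizer(), clf).fit(docs, labels)
    print(type(clf).__name__, model.predict(["aspirin for infarction"]))
```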
10.
王传旭  薛豪 《电子学报》2020,48(8):1465-1471
A group activity recognition framework is proposed that centers on "key persons" and fuses features with a Gated Fusion Unit (GFU). It aims to solve two problems: (1) redundancy in group activity information, by focusing on the behavioral features of key persons and ignoring the influence of irrelevant individuals on the group activity; and (2) the complexity of interactions within the group, by using the GFU to effectively fuse the key-person-centered interaction features, which are then modeled temporally by an LSTM to obtain a more expressive group-level feature. Finally, a softmax classifier assigns the group activity category. The algorithm achieves an average recognition rate of 86.7% on the Volleyball dataset.
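As a hedged NumPy sketch of the gated fusion idea (not the authors' network), a learned sigmoid gate mixes an assumed key-person feature with a contextual interaction feature; the weight shapes, feature dimension, and random values are illustrative, and the fused feature would feed the LSTM and softmax stages described above.

```python
# Hedged sketch of a gated fusion step: a sigmoid gate mixes the key-person
# feature with a context feature. Shapes and values are assumptions.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
d = 128                                   # assumed feature dimension
key_feat = rng.normal(size=d)             # key-person feature (assumed)
ctx_feat = rng.normal(size=d)             # interaction/context feature (assumed)

W_g = rng.normal(scale=0.1, size=(d, 2 * d))   # gate weights (trainable in practice)
gate = sigmoid(W_g @ np.concatenate([key_feat, ctx_feat]))

# Gated mixture; downstream this would be modeled temporally by an LSTM
# and classified with a softmax layer.
fused = gate * key_feat + (1.0 - gate) * ctx_feat
print(fused.shape)
```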