41.
With the steady socioeconomic development of China, people's quality of life has improved markedly, and as worsening ecological problems have sharpened public awareness of environmental protection, the role and value of forest tending within forest ecosystems have become increasingly prominent. This article therefore analyzes the effects of forest tending on forest ecosystems from several angles, including raising forest utilization, maintaining the ecological balance of forest flora and fauna, and increasing the supply of usable water resources, with the aim of offering practical help to forest-protection workers and thereby promoting the sustainable development of forest management in China.
42.
To achieve a lightweight design for the turntable of a folding-arm truck-mounted crane while keeping its maximum stress within the allowable stress of the material, the extreme working condition was identified through simulation in the ADAMS software, and the turntable was subjected to static analysis and topology optimization. Topology optimization yielded an ideal material distribution; the turntable structure was adjusted accordingly, and the improved turntable was re-analyzed statically. The results show that the optimized turntable satisfies practical service requirements while its mass is reduced by 12%, demonstrating the effectiveness and feasibility of the optimized design and offering a reference for related designs of folding-arm truck-mounted cranes.
43.
To address abnormal interior noise in a passenger car when the air-conditioning compressor is engaged at an engine speed of 1 573 r/min, the prototype vehicle was tested and diagnosed, the compressor-bracket system was analyzed by simulation, and an improvement was proposed and verified. Vibration and noise data were acquired with an LMS acoustic and vibration signal acquisition system; spectrum analysis and order tracking, combined with modal simulation of the compressor-bracket system, established that the abnormal interior noise was caused by resonance between the 21st order of the compressor shaft frequency and the 3rd modal frequency of the compressor-bracket system. The bracket structure was optimized to raise the 3rd modal frequency of the compressor-bracket system and thus avoid resonance, and a rubber drive disc was fitted to smooth fluctuations in the compressor input torque. Full-vehicle tests of the improved structure show that, at the problem speed under constant-speed driving with the air conditioning on, interior noise is reduced by 2.5 dB(A); under uniform acceleration with the air conditioning on, interior noise shows no peak in the 1 500-1 650 r/min engine-speed range, while at other speeds the interior noise is essentially unchanged before and after the improvement and its fluctuation trend is smooth.
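The resonance reasoning here is plain order arithmetic: an order component's frequency equals the shaft speed in revolutions per second times the order number, and trouble arises when that frequency lands near a structural mode. A minimal sketch of that check follows; only the 1 573 r/min speed and the 21st order come from the abstract, while the pulley ratio and the modal frequency are invented assumptions.

```python
def order_frequency(shaft_rpm: float, order: float) -> float:
    """Frequency in Hz of a given order at a given shaft speed."""
    return shaft_rpm / 60.0 * order

# Assumed values for illustration -- the abstract does not state them.
ENGINE_RPM = 1573.0        # problem speed, from the abstract
PULLEY_RATIO = 1.3         # assumed engine-to-compressor pulley ratio
MODE_3_HZ = 700.0          # assumed 3rd modal frequency of the bracket system

f_exc = order_frequency(ENGINE_RPM * PULLEY_RATIO, 21)
print(f"21st-order excitation ~ {f_exc:.0f} Hz "
      f"(mode at {MODE_3_HZ:.0f} Hz, gap {abs(f_exc - MODE_3_HZ):.0f} Hz)")
```

Raising the modal frequency, which is what the bracket stiffening does, widens this gap at the problem speed; that is why the fix works.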
44.
45.
To address the problem of analyzing the fault-tolerance performance of large-scale multi-state computing systems composed of heterogeneous computing nodes, an evaluation method for the fault-tolerance performance of computing systems is proposed. The method describes the system using a self-defined two-level formal framework for fault-tolerance performance, builds a fault-tolerance performance model of the system by constructing a multi-valued decision diagram (MDD), and, based on the constructed model, efficiently computes the probability that the computing system operates at a given performance level under component failures, reducing redundant computation. Experimental results show that the method outperforms traditional methods in both model size and construction time. The method is of practical significance to system operators and program designers, enabling them to ensure that a system is suitable for its intended application.
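The quantity at the heart of this analysis is standard in multi-state reliability: given per-node probability distributions over discrete performance states, compute the probability that aggregate performance reaches a level. The MDD is what makes this tractable at scale; for intuition only, a brute-force Python sketch over an invented three-node system (the state distributions and performance values are assumptions, not the paper's data):

```python
from itertools import product

# Each node has states mapped to (performance contribution, probability).
# These toy distributions are illustrative assumptions only.
nodes = [
    {0: (0.0, 0.05), 1: (0.5, 0.15), 2: (1.0, 0.80)},
    {0: (0.0, 0.10), 1: (0.5, 0.20), 2: (1.0, 0.70)},
    {0: (0.0, 0.05), 1: (0.5, 0.10), 2: (1.0, 0.85)},
]

def prob_at_least(level: float) -> float:
    """P(total performance >= level), by enumerating all state vectors."""
    total = 0.0
    for states in product(*(n.keys() for n in nodes)):
        perf = sum(nodes[i][s][0] for i, s in enumerate(states))
        p = 1.0
        for i, s in enumerate(states):
            p *= nodes[i][s][1]
        if perf >= level:
            total += p
    return total

print(f"P(perf >= 2.0) = {prob_at_least(2.0):.4f}")
```

An MDD encodes the same sum without enumerating every state vector, sharing sub-diagrams across nodes, which is what keeps model size and construction time manageable for large heterogeneous systems.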
46.
Data mining techniques have been successfully applied in many significant fields, including medical research. Despite the wealth of data available within health-care systems, there is a lack of practical analysis tools for discovering hidden relationships and trends in those data. The complexity of medical data, which is unfavorable for most models, poses a considerable challenge for prediction. The ability of a model to perform accurately and efficiently in disease diagnosis is therefore extremely significant: the model must be selected to fit the data well, so that learning from previous data is efficient and the diagnosis is highly accurate. This work is motivated by the limited number of regression analysis tools for multivariate counts in the literature. We propose two regression models for count data based on flexible distributions, namely the multinomial Beta-Liouville and the multinomial scaled Dirichlet, and evaluate them on the problem of disease diagnosis. Performance is evaluated through prediction accuracy, which depends on the nature and complexity of the dataset. Our results show the efficiency of the two proposed regression models: the prediction performance of both is competitive with other regression models previously used for count data and with the best results in the literature.
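Neither the multinomial Beta-Liouville nor the multinomial scaled Dirichlet model is reimplemented here, but both share the structure of the plainer Dirichlet-multinomial regression: count responses whose concentration parameters are linked to covariates. A minimal sketch of that shared structure in Python with SciPy, on invented toy data:

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize

# Dirichlet-multinomial regression sketch: concentration parameters are
# tied to covariates via a log link, alpha_k = exp(z @ W[:, k]).
# The models in the abstract generalize this basic construction.

def neg_log_likelihood(w_flat, Z, X):
    n_feat, n_cat = Z.shape[1], X.shape[1]
    W = w_flat.reshape(n_feat, n_cat)
    alpha = np.exp(Z @ W)                      # (n_obs, n_cat), all positive
    A, n = alpha.sum(axis=1), X.sum(axis=1)
    ll = (gammaln(A) - gammaln(A + n)
          + (gammaln(alpha + X) - gammaln(alpha)).sum(axis=1))
    return -ll.sum()                           # multinomial coefficient omitted

rng = np.random.default_rng(0)
Z = np.hstack([np.ones((200, 1)), rng.normal(size=(200, 2))])  # toy covariates
X = rng.multinomial(30, [0.5, 0.3, 0.2], size=200)             # toy counts
w0 = np.zeros(Z.shape[1] * X.shape[1])
fit = minimize(neg_log_likelihood, w0, args=(Z, X), method="L-BFGS-B")
print("converged:", fit.success, "neg-log-lik:", round(fit.fun, 1))
```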
47.
Topic modeling is a popular analytical tool for evaluating data. Numerous topic modeling methods have been developed that account for many kinds of relationships and restrictions within datasets; however, these methods are not frequently employed. Instead, many researchers gravitate to Latent Dirichlet Allocation (LDA), which, although flexible and adaptive, is not always suited to modeling more complex data relationships. We present topic modeling approaches capable of dealing with correlation between topics and with changes of topics over time, as well as of handling short texts such as those encountered in social media or other sparse text data. We also briefly review the algorithms used to optimize and infer parameters in topic modeling, which is essential to producing meaningful results regardless of method. We believe this review will encourage more diversity in topic modeling and help determine which topic modeling method best suits the user's needs.
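For readers who want the LDA baseline the review argues one should look beyond, a minimal run with the gensim library can serve as a starting point (the four-document corpus below is invented for illustration):

```python
from gensim import corpora, models

# Toy pre-tokenized corpus; real use needs proper tokenization,
# stop-word removal, and far more documents.
texts = [
    ["topic", "model", "text", "corpus"],
    ["social", "media", "short", "text"],
    ["latent", "dirichlet", "allocation", "topic"],
    ["sparse", "text", "social", "media"],
]
dictionary = corpora.Dictionary(texts)              # word <-> id mapping
corpus = [dictionary.doc2bow(t) for t in texts]     # bag-of-words vectors

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary,
                      passes=20, random_state=0)
for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)
```

Correlated-topic, dynamic, and short-text models replace the independent Dirichlet topic prior or the per-document word assumption in this pipeline, which is where the approaches surveyed in the article diverge from LDA.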
48.
Traditionally, in supervised machine learning, a significant part of the available data (usually 50%-80%) is used for training and the rest for validation. In many problems, however, the data are highly imbalanced across classes or do not cover the feasible data space well, which in turn creates problems in the validation and usage phases. In this paper, we propose a technique for synthesizing feasible and likely data samples to help balance the classes and to boost performance, both overall and in terms of the confusion matrix. The idea, in a nutshell, is to synthesize data samples in the close vicinity of actual data samples, specifically for the less represented (minority) classes; this also has implications for the so-called fairness of machine learning. The method is generic and can be applied to different base algorithms, for example support vector machines, k-nearest neighbour classifiers, deep neural networks, rule-based classifiers, decision trees, and so forth. The results demonstrate that (a) significantly more balanced (and fair) classification results can be achieved and (b) both the overall performance and the per-class performance measured by the confusion matrix can be boosted. In addition, the approach can be very valuable when the number of actually available labelled data samples is small, which is itself one of the problems of contemporary machine learning.
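The paper's exact synthesis rule is not spelled out in this abstract; a SMOTE-style interpolation between minority samples and their nearest minority-class neighbours is one standard way to realize "synthesize in close vicinity to the actual samples", sketched here on invented data:

```python
import numpy as np

def synthesize_minority(X_min: np.ndarray, n_new: int, k: int = 5,
                        seed: int = 0) -> np.ndarray:
    """Generate n_new points by interpolating each sampled minority point
    toward one of its k nearest minority-class neighbours (SMOTE-style)."""
    rng = np.random.default_rng(seed)
    # Pairwise distances within the minority class only.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]          # k nearest neighbours per point
    base = rng.integers(0, len(X_min), n_new)  # random anchor points
    mate = nn[base, rng.integers(0, k, n_new)] # random neighbour per anchor
    lam = rng.random((n_new, 1))               # interpolation factor in [0, 1)
    return X_min[base] + lam * (X_min[mate] - X_min[base])

X_minority = np.random.default_rng(1).normal(size=(20, 2))  # toy minority class
X_synth = synthesize_minority(X_minority, n_new=40)
print(X_synth.shape)  # (40, 2): new samples near the existing ones
```

Because each synthetic point lies on a segment between two real minority samples, it stays inside the locally feasible region, which is the property the abstract emphasizes.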
49.
Data and software are nowadays one and the same: for this very reason, the European Union (EU) and other governments have introduced frameworks for data protection, a key example being the General Data Protection Regulation (GDPR). However, GDPR compliance is not straightforward: its text is written not by software or information engineers but by lawyers and policy-makers. As a design aid to information engineers aiming for GDPR compliance, as well as an aid to software users' understanding of the regulation, this article offers a systematic synthesis and discussion of it, distilled by the mathematical analysis method known as Formal Concept Analysis (FCA). By its principles, the GDPR is synthesised as a concept lattice, that is, a formal summary of the regulation, featuring 144372 records; its uses are manifold. For example, the lattice captures so-called attribute implications, the implicit logical relations across the regulation, together with their intensity. These results can be used as drivers during systems and services (re-)design, development, and operation, or during the refactoring of information systems towards greater GDPR consistency.
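FCA derives formal concepts, pairs of object sets and attribute sets that determine each other, from a binary object-attribute context. A toy sketch of the two derivation operators and a brute-force concept enumeration follows; the three "articles" and their attributes are invented for illustration, not taken from the GDPR lattice:

```python
from itertools import combinations

# Toy formal context: objects (articles) -> sets of attributes. Invented data.
context = {
    "Art.6":  {"lawfulness", "processing"},
    "Art.17": {"erasure", "processing"},
    "Art.32": {"security", "processing"},
}
attributes = set().union(*context.values())

def extent(attrs):   # objects possessing all the given attributes
    return {o for o, a in context.items() if attrs <= a}

def intent(objs):    # attributes shared by all the given objects
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

# A formal concept is a pair (E, I) with extent(I) == E and intent(E) == I;
# every concept arises as (extent(B), intent(extent(B))) for some attribute set B.
concepts = set()
for r in range(len(attributes) + 1):
    for attrs in combinations(sorted(attributes), r):
        E = extent(set(attrs))
        concepts.add((frozenset(E), frozenset(intent(E))))
for E, I in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(E), "<->", sorted(I))
```

Ordering these concepts by extent inclusion yields the lattice; attribute implications of the kind the article extracts can then be read off from the intents.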
50.
By leveraging secret-data coding based on remainder-storage-based exploiting modification direction (RSBEMD), and recording pixel-change operations through multi-segment left and right histogram shifting, a novel reversible data hiding (RDH) scheme is proposed in this paper. The secret data are first encoded as specific pixel-change operations applied to pixels in groups. Multi-segment left and right histogram shifting based on threshold manipulation is then used to record these pixel-change operations. Furthermore, a multiple-embedding policy based on chessboard prediction (CBP) and threshold manipulation is put forward, in which the threshold can be adjusted to achieve adaptive data hiding. Experimental results and analysis show that the scheme is reversible and achieves good capacity and imperceptibility compared with existing methods.
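The multi-segment left and right shifting used in the paper is more elaborate than what fits here, but the classic single-peak histogram-shifting embed conveys why such schemes are reversible: every modification is a recorded, invertible +1 shift. A sketch of that simpler baseline, not the paper's algorithm:

```python
import numpy as np

def hs_embed(img: np.ndarray, bits):
    """Classic single-peak histogram-shifting embed.
    Assumes a near-empty bin exists above the peak; a real scheme must
    also record any overflow information to stay strictly reversible."""
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())                        # most frequent gray level
    zero = peak + 1 + int(hist[peak + 1:].argmin())  # emptiest level above it
    out = img.astype(np.int32)
    out[(out > peak) & (out < zero)] += 1            # free the bin peak+1
    it = iter(bits)
    flat = out.ravel()
    for i in range(flat.size):                       # one bit per peak pixel
        if flat[i] == peak:
            try:
                flat[i] += next(it)                  # 0 -> stay, 1 -> peak+1
            except StopIteration:
                break
    return flat.reshape(img.shape).astype(np.uint8), peak, zero

rng = np.random.default_rng(0)
cover = rng.integers(80, 120, size=(64, 64), dtype=np.uint8)
stego, peak, zero = hs_embed(cover, [1, 0, 1, 1])
print("peak:", peak, "zero:", zero,
      "changed pixels:", int((stego != cover).sum()))
```

Extraction reverses the process with only (peak, zero) as side information: pixels at peak or peak+1 yield the bits, and the shifted range is moved back down by 1.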