71.
Regularization is a well-known statistical technique for model estimation, used to improve the generalization ability of the estimated model. Some regularization methods can also be used for variable selection, which is especially useful in high-dimensional problems. This paper studies the use of regularized model learning in estimation of distribution algorithms (EDAs) for continuous optimization based on Gaussian distributions. We introduce two approaches to regularized model estimation and analyze their effect on the accuracy and computational complexity of model learning in EDAs. We then apply the proposed algorithms to a number of continuous optimization functions and compare their results with those of other Gaussian-distribution-based EDAs. The results show that the optimization performance of the proposed RegEDAs is less affected by increases in problem size than that of the other EDAs, and that they obtain significantly better optimization values for many of the functions in high-dimensional settings.
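As a rough illustration of regularized model estimation in a Gaussian EDA (not the specific RegEDA variants studied in the paper), the sketch below replaces the maximum-likelihood covariance estimate with a Ledoit-Wolf shrinkage estimate. The objective function, population size, truncation selection and all parameters are illustrative assumptions.

```python
# Hypothetical sketch: a Gaussian EDA whose model-estimation step uses a
# regularized (shrinkage) covariance estimator instead of the ML estimate.
import numpy as np
from sklearn.covariance import LedoitWolf

def sphere(x):
    """Toy objective to minimize (illustrative assumption)."""
    return np.sum(x ** 2, axis=1)

def reg_eda(objective, dim=50, pop_size=200, selection=0.5, generations=100, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.0, 5.0, size=(pop_size, dim))   # initial population
    n_sel = int(pop_size * selection)
    for _ in range(generations):
        fitness = objective(pop)
        elite = pop[np.argsort(fitness)[:n_sel]]          # truncation selection
        model = LedoitWolf().fit(elite)                   # regularized Gaussian model
        pop = rng.multivariate_normal(model.location_,    # sample next population
                                      model.covariance_,
                                      size=pop_size)
    fitness = objective(pop)
    return pop[np.argmin(fitness)], fitness.min()

best_x, best_f = reg_eda(sphere)
print(f"best objective value found: {best_f:.4e}")
```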
72.
The Grid Virtual Organization (VO) “Theophys”, associated with the INFN (Istituto Nazionale di Fisica Nucleare), is a theoretical physics community with varied computational demands, ranging from serial to SMP, MPI and hybrid jobs. Over the past 20 years this has led to the use of the Grid infrastructure for serial jobs, while multi-threaded, MPI and hybrid jobs have been executed on several small- and medium-sized clusters installed at different sites, accessed through standard local submission methods. This work analyzes the support for parallel jobs in scientific Grid middleware, and then describes how the community unified the management of most of its computational needs (serial and parallel) on the Grid through a dedicated project that integrates serial and parallel resources in a common Grid-based framework. A centralized national cluster is deployed inside this framework, providing “Wholenodes” reservations, CPU affinity, and other new features supporting our High Performance Computing (HPC) applications in the Grid environment. Examples of the cluster's performance for relevant parallel applications in theoretical physics are reported, focusing on the different kinds of parallel jobs that can be served by the newly introduced Grid features.
73.
Cloud computing poses several challenges, such as security, fault tolerance, access interface singularity, and network constraints in terms of both latency and bandwidth. In this scenario, the performance of communications depends both on the network fabric and on how efficiently it is supported in virtualized environments, which ultimately determines overall system performance. To address current network constraints in cloud services, providers are deploying high-speed networks such as 10 Gigabit Ethernet. This paper presents an evaluation of high-performance computing message-passing middleware on a cloud computing infrastructure, Amazon EC2 cluster compute instances, equipped with 10 Gigabit Ethernet. The analysis of the experimental results, compared against a similar testbed, shows the significant impact that virtualized environments still have on communication performance, which demands more efficient communication middleware support to overcome the current cloud network limitations.
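Evaluations of message-passing middleware of this kind typically rest on point-to-point micro-benchmarks. The following is a minimal mpi4py ping-pong sketch of that idea, not the benchmark suite used in the paper; message sizes, repetition counts and the launch command are assumptions.

```python
# Minimal ping-pong micro-benchmark (run with e.g. `mpirun -np 2 python pingpong.py`).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

REPS = 100
for size in (1, 1024, 1024 * 1024):          # message sizes in bytes (assumed)
    buf = np.zeros(size, dtype=np.uint8)
    comm.Barrier()
    start = MPI.Wtime()
    for _ in range(REPS):
        if rank == 0:
            comm.Send(buf, dest=1, tag=0)
            comm.Recv(buf, source=1, tag=0)
        elif rank == 1:
            comm.Recv(buf, source=0, tag=0)
            comm.Send(buf, dest=0, tag=0)
    elapsed = MPI.Wtime() - start
    if rank == 0:
        # One iteration carries the message twice; report the half-round-trip time.
        latency_us = elapsed / REPS / 2 * 1e6
        print(f"{size:>8} bytes: {latency_us:10.2f} us half round trip")
```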
74.
This report describes the design of a modular, massively parallel, neural-network (NN)-based vector quantizer for real-time video coding. The NN is a self-organizing map (SOM) that can operate only in the training phase for codebook generation, only in the recall phase for real-time image coding, or in both phases for adaptive applications. The network can be trained using batch or adaptive training and is controlled by an internal, finite-state-machine-based hardware controller. The SOM is described in VHDL and implemented on both electrically programmable (FPGA) and mask-programmable (standard-cell) devices.
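A minimal software sketch of the SOM-as-vector-quantizer idea (training to obtain a codebook, recall to emit codeword indices) may help; it is plain Python/NumPy and does not reflect the VHDL architecture, map size or learning schedule of the hardware design described above, all of which are illustrative assumptions here.

```python
# 1-D self-organizing map used as a vector quantizer: the trained neuron
# weights form the codebook; coding a block means emitting the index of the
# best-matching neuron.
import numpy as np

def train_som(blocks, n_neurons=256, epochs=10, lr0=0.5, sigma0=8.0, seed=0):
    rng = np.random.default_rng(seed)
    weights = rng.uniform(0.0, 1.0, size=(n_neurons, blocks.shape[1]))
    positions = np.arange(n_neurons)
    n_steps, step = epochs * len(blocks), 0
    for _ in range(epochs):
        for x in blocks[rng.permutation(len(blocks))]:
            bmu = np.argmin(np.sum((weights - x) ** 2, axis=1))   # best-matching unit
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)                               # decaying learning rate
            sigma = sigma0 * (1.0 - frac) + 1e-3                  # shrinking neighborhood
            h = np.exp(-((positions - bmu) ** 2) / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)            # pull neighborhood toward x
            step += 1
    return weights                                                # the codebook

def encode(blocks, codebook):
    """Recall phase: map each block to the index of its nearest codeword."""
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return np.argmin(d, axis=1)

# Toy usage: quantize random 4x4 "image blocks" flattened to 16-dim vectors.
blocks = np.random.default_rng(1).random((1000, 16))
codebook = train_som(blocks)
print("codebook shape:", codebook.shape, "first indices:", encode(blocks, codebook)[:5])
```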
75.
We present a new Web services-based framework for building componentized digital libraries (DLs). In particular, we demonstrate how traditional RDBMS technology can be readily deployed to support several common digital library services. Configuration and customization of the framework to build specialized systems are supported by a wizard-like tool based on a generic metamodel for DLs. The tool implements a workflow process that divides the DL design tasks into well-defined steps and guides the designer through them. Both the framework and the configuration tool are evaluated in terms of several performance and usability criteria. Our experimental evaluation demonstrates the feasibility and superior performance of our framework, as well as the effectiveness of the wizard tool for setting up DLs.
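As a hedged illustration of how off-the-shelf RDBMS technology can back a basic DL metadata-search service (not the framework's actual schema or services), consider the following sqlite-based sketch; the table layout, field names and toy records are assumptions.

```python
# Toy RDBMS-backed metadata search: one table of document records and a
# keyword/year query over it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE document (
        id      INTEGER PRIMARY KEY,
        title   TEXT NOT NULL,
        authors TEXT NOT NULL,
        year    INTEGER
    )
""")
conn.executemany(
    "INSERT INTO document (title, authors, year) VALUES (?, ?, ?)",
    [("Toy record on componentized digital libraries", "Author A; Author B", 2004),
     ("Toy record on metadata harvesting", "Author C", 2003)],
)

def search(keyword, since=None):
    """Keyword match on the title, with an optional publication-year filter."""
    sql = "SELECT id, title, year FROM document WHERE title LIKE ?"
    args = [f"%{keyword}%"]
    if since is not None:
        sql += " AND year >= ?"
        args.append(since)
    return conn.execute(sql, args).fetchall()

print(search("digital", since=2004))
```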
76.
77.
Although 9-anilinoacridines are among the best-studied antitumoral intercalators, there are few studies on the effect of isosterically replacing a benzene moiety of the acridine framework with a heterocyclic ring. According to these studies, this approach may lead to effective cytotoxic agents, but good cytotoxic activity depends on structural requirements in the aniline ring that differ from those in 9-anilinoacridines. The present paper deals with molecular modeling studies of some 9-anilino-substituted tricyclic compounds and of the intercalation complexes obtained by docking these compounds into various DNA sequences. As expected, the isosteric substitution in 9-anilinoacridines influences the LUMO energy values and orbital distribution, the dipole moment, the electrostatic charges and the conformation of the anilino ring. Other important differences are observed during the docking studies, for example changes in the spatial arrangement of the tricyclic nucleus and the anilino ring at the intercalation site. Semiempirical calculations on the intercalation complexes show that the isosteric replacement of a benzene ring in the acridine nucleus affects not only DNA affinity but also base-pair selectivity. These findings explain, at least partially, the different structural requirements for cytotoxic activity observed in several 9-anilino-substituted tricyclic compounds. Thus, the data presented here may guide the rational design of new agents with different DNA-binding properties and/or cytotoxic profiles obtained by isosteric substitution of known intercalators.
78.
Focused crawlers have as their main goal to crawl Web pages relevant to a specific topic or user interest, and they play an important role in a great variety of applications. In general, they work by trying to find and crawl all pages deemed related to an implicitly declared topic. However, users are often not interested in just any document about a topic; they may want only documents of a given type or genre on that topic. In this article, we describe an approach to focused crawling that exploits not only content-related information but also genre information present in Web pages to guide the crawling process. The approach is designed for situations in which the topic of interest can be expressed by two sets of terms, the first describing genre aspects of the desired pages and the second describing their subject or content, thus requiring no training or preprocessing of any kind. The effectiveness, efficiency and scalability of the proposed approach are demonstrated by a set of experiments involving the crawling of pages related to syllabi of computer science courses, job offers in the computer science field and sale offers of computer equipment. These experiments show that focused crawlers built according to our genre-aware approach achieve F1 levels above 88%, requiring the analysis of no more than 65% of the visited pages to find 90% of the relevant pages. In addition, we experimentally analyze the impact of term selection on our approach and evaluate a proposed strategy for the semi-automatic generation of such terms. This analysis shows that a small set of terms selected by an expert, or specified by a typical user familiar with the topic, is usually enough to produce good results, and that the semi-automatic strategy is very effective in supporting the task of selecting the term sets required to guide a crawling process.
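The sketch below illustrates the general genre-aware scoring idea: a page is scored against one set of genre terms and one set of content terms, and the frontier is expanded only from pages judged relevant. The term sets, weights, threshold and the stubbed fetching are illustrative assumptions, not the article's actual crawler.

```python
# Toy genre-aware focused crawl over an in-memory "web".
from collections import deque

GENRE_TERMS = {"syllabus", "course outline", "prerequisites", "grading"}
CONTENT_TERMS = {"computer science", "algorithms", "data structures"}

def term_score(text, terms):
    """Fraction of the term set occurring in the page text."""
    text = text.lower()
    return sum(term in text for term in terms) / len(terms)

def page_score(text, w_genre=0.5, w_content=0.5):
    return w_genre * term_score(text, GENRE_TERMS) + \
           w_content * term_score(text, CONTENT_TERMS)

def crawl(seed_urls, fetch, extract_links, threshold=0.3, max_pages=100):
    """Follow links only from pages scored as relevant."""
    frontier, visited, relevant = deque(seed_urls), set(), []
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        text = fetch(url)                      # user-supplied downloader
        if page_score(text) >= threshold:
            relevant.append(url)
            frontier.extend(extract_links(url, text))
    return relevant

toy_web = {
    "a": ("Syllabus: algorithms and data structures course, grading policy ...", ["b"]),
    "b": ("Photo gallery of the campus", []),
}
print(crawl(["a"],
            fetch=lambda u: toy_web[u][0],
            extract_links=lambda u, t: toy_web[u][1]))
```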
79.
Recently, multi-objective evolutionary algorithms have been applied to improve the difficult trade-off between the interpretability and the accuracy of fuzzy rule-based systems. The two requirements are usually contradictory; however, these algorithms can obtain a set of solutions with different trade-offs. This contribution analyzes different application alternatives for attaining the desired accuracy/interpretability balance, maintaining the improved accuracy that tuning the membership functions can provide while trying to obtain more compact models. To this end, we propose the use of multi-objective evolutionary algorithms as a tool to obtain at least one solution improved with respect to a classic single-objective approach (a solution that may dominate the one obtained by such an algorithm in terms of system error and number of rules). This work presents and analyzes the application of six different multi-objective evolutionary algorithms to obtain simpler and still accurate linguistic fuzzy models by performing rule selection together with a tuning of the membership functions. The results in two different scenarios show that using expert knowledge in the algorithm design process significantly improves the search ability of these algorithms and that they are able to improve both objectives together, obtaining models that are more accurate and at the same time simpler than those of the single-objective-based approach.
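To make the two competing objectives concrete, the sketch below evaluates candidate rule subsets by (system error, number of rules) and extracts the non-dominated (Pareto) set; the toy error model and all parameters are assumptions, and the full multi-objective evolutionary search with membership-function tuning is not reproduced.

```python
# Pareto filtering of candidate fuzzy models encoded as binary rule-selection vectors.
import numpy as np

rng = np.random.default_rng(0)
N_RULES = 20

def evaluate(selection, base_error=1.0):
    """Toy error model: more selected rules -> lower (noisy) error."""
    n = int(selection.sum())
    error = base_error / (1 + n) + 0.01 * rng.random()
    return error, n

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and strictly better in one."""
    return (a[0] <= b[0] and a[1] <= b[1]) and (a[0] < b[0] or a[1] < b[1])

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Random candidate rule subsets standing in for an evolved population.
population = [rng.integers(0, 2, N_RULES) for _ in range(30)]
objectives = [evaluate(ind) for ind in population]
for error, n_rules in sorted(pareto_front(objectives), key=lambda p: p[1]):
    print(f"{n_rules:2d} rules -> error {error:.3f}")
```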
80.
Distributed mutual exclusion algorithms have mainly been compared using the number of messages exchanged per critical section execution. In such algorithms, no attention has been paid to the serialization order of the requests: they simply adopt the FCFS discipline. Conversely, inserting priority serialization disciplines, such as Shortest-Job-First, Head-Of-Line or Shortest-Remaining-Job-First, can be useful in many applications to optimize some performance indices. However, such priority disciplines are prone to starvation. The goal of this paper is to investigate and evaluate the impact of inserting a priority discipline into Maekawa-type algorithms. Priority serialization disciplines are inserted by means of a gated batch mechanism, which avoids starvation. In a distributed algorithm, such a mechanism requires synchronization among the processes. To highlight the usefulness of the priority-based serialization discipline, we show how it can be used to improve the average response time compared to the FCFS discipline. The gated batch approach exhibits other advantages: algorithms are inherently deadlock-free and messages do not need to piggyback timestamps. We also show that, under heavy demand, algorithms using gated batches exchange fewer messages per critical section execution than Maekawa-type algorithms.

Roberto Baldoni was born in Rome on February 1, 1965. He received the Laurea degree in electronic engineering in 1990 and the Ph.D. degree in Computer Science in 1994, both from the University of Rome La Sapienza. Currently, he is a researcher in computer science at IRISA, Rennes (France). His research interests include operating systems, distributed algorithms, network protocols and real-time multimedia applications.

Bruno Ciciani received the Laurea degree in electronic engineering in 1980 from the University of Rome La Sapienza. From 1983 to 1991 he was a researcher at the University of Rome Tor Vergata. He is currently a full professor in Computer Science at the University of Rome La Sapienza. His research activities include distributed computer systems, fault-tolerant computing, languages for parallel processing, and computer system performance and reliability evaluation. He has published in IEEE Trans. on Computers, IEEE Trans. on Knowledge and Data Engineering, IEEE Trans. on Software Engineering and IEEE Trans. on Reliability. He is the author of a book titled Manufacturing Yield Evaluation of VLSI/WSI Systems, to be published by IEEE Computer Society Press.

This research was supported in part by the Consiglio Nazionale delle Ricerche under grant 93.02294.CT12. This author is also supported by a grant of the Human Capital and Mobility project of the European Community under contract No. 3702 CABERNET.
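A toy single-server simulation can convey why a gated batch served in shortest-job-first order can beat FCFS on average response time while still bounding every request's wait to one batch; the arrival pattern and service times below are assumptions, and the distributed message exchanges of Maekawa-type algorithms are not modeled.

```python
# Compare FCFS against a gated batch served in shortest-job-first order.
import random

def simulate(requests, gated_sjf):
    """requests: list of (arrival_time, cs_duration); returns the mean response time."""
    pending = sorted(requests)                 # ordered by arrival time
    clock, total_response, served = 0.0, 0.0, 0
    while pending:
        # Open a batch: everything already arrived, or else jump to the next arrival.
        clock = max(clock, pending[0][0])
        batch = [r for r in pending if r[0] <= clock]
        pending = [r for r in pending if r[0] > clock]
        if gated_sjf:
            batch.sort(key=lambda r: r[1])     # shortest critical section first within the batch
        for arrival, duration in batch:
            clock += duration                  # execute the critical section
            total_response += clock - arrival
            served += 1
        # Requests arriving while the batch runs wait for the next batch (the "gate").
    return total_response / served

random.seed(0)
reqs = [(random.uniform(0, 100), random.choice([0.1, 0.1, 0.1, 5.0]))
        for _ in range(200)]
print("FCFS mean response time     :", round(simulate(reqs, gated_sjf=False), 2))
print("Gated SJF mean response time:", round(simulate(reqs, gated_sjf=True), 2))
```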