91.
This work addresses the matching of a 3D deformable face model to 2D images through a 2.5D Active Appearance Model (AAM). We propose a 2.5D AAM that combines a 3D metric Point Distribution Model (PDM) and a 2D appearance model whose control points are defined by a full perspective projection of the PDM. The advantage is that, assuming a calibrated camera, 3D metric shapes can be retrieved from single-view images. Two model fitting algorithms and their computationally efficient approximations are proposed: the Simultaneous Forwards Additive (SFA) and the Normalization Forwards Additive (NFA), both based on the Lucas–Kanade framework. The SFA algorithm searches for shape and appearance parameters simultaneously, whereas the NFA projects the appearance out of the error image and searches only for the shape parameters; SFA is therefore more accurate. Robust formulations of SFA and NFA are also proposed in order to handle self-occlusion or partial occlusion of the face. Several performance evaluations of SFA, NFA and their efficient approximations were carried out, including the frequency of convergence, the fitting performance on unseen data, and the tracking performance on the FGNET Talking Face sequence. All results show that the 2.5D AAM can outperform both the combined 2D + 3D models and the standard 2D methods. The robust extensions to occlusion were tested on a synthetic sequence, showing that the model can deal efficiently with large head rotations.
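A rough sketch of the forwards-additive Gauss–Newton update that underlies fitting schemes such as SFA and NFA (not the authors' implementation: the appearance-sampling function, the parameter vector, and the damping value below are placeholders, and the Jacobian is approximated by finite differences for brevity):

```python
import numpy as np

def forwards_additive_step(params, sample_appearance, template, eps=1e-4, damping=1e-3):
    """One forwards-additive Lucas-Kanade (Gauss-Newton) update.

    params            : current shape-parameter vector p
    sample_appearance : callable p -> appearance vector sampled under the warp W(x; p)
    template          : target appearance vector A0
    """
    residual = sample_appearance(params) - template            # error image e(p)
    # Finite-difference Jacobian of the sampled appearance w.r.t. each parameter.
    J = np.empty((residual.size, params.size))
    for j in range(params.size):
        p_step = params.copy()
        p_step[j] += eps
        J[:, j] = (sample_appearance(p_step) - sample_appearance(params)) / eps
    H = J.T @ J + damping * np.eye(params.size)                # damped Gauss-Newton Hessian
    delta = np.linalg.solve(H, -J.T @ residual)                # parameter increment
    return params + delta                                      # additive update p <- p + delta
```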
92.
Vehicular Ad Hoc Networks (VANETs) require mechanisms to authenticate messages, identify valid vehicles, and remove misbehaving vehicles. A public key infrastructure (PKI) can be used to provide these functionalities using digital certificates. However, if a vehicle is no longer trusted, its certificates have to be revoked and this status information has to be made available to other vehicles as soon as possible. In this paper, we propose a collaborative certificate status checking mechanism called COACH to efficiently distribute certificate revocation information in VANETs. In COACH, we embed a hash tree in each standard Certificate Revocation List (CRL). This dual structure is called extended-CRL. A node possessing an extended-CRL can respond to certificate status requests without having to send the complete CRL. Instead, the node can send a short response (less than 1 kB) that fits in a single UDP message. Obviously, the substructures included in the short responses are authenticated. This means that any node possessing an extended-CRL can produce short responses that can be authenticated (including Road Side Units or intermediate vehicles). We also propose an extension to the COACH mechanism called EvCOACH that is more efficient than COACH in scenarios with relatively low revocation rates per CRL validity period. To build EvCOACH, we embed an additional hash chain in the extended-CRL. Finally, by conducting a detailed performance evaluation, COACH and EvCOACH are proved to be reliable, efficient, and scalable.
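Short authenticated responses of this kind rest on standard hash-tree (Merkle-tree) membership proofs. A minimal, self-contained sketch of that idea, assuming SHA-256 and a toy list of revoked serial numbers (the exact tree layout and the CA signing step of the extended-CRL are not reproduced here):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build a Merkle tree (list of levels, leaves first) over the hashed leaves."""
    level = [h(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                       # duplicate the last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def proof_path(levels, index):
    """Sibling hashes needed to rebuild the root from leaf `index`."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((index % 2, level[index ^ 1]))   # (am I the right child?, sibling hash)
        index //= 2
    return path

def verify(leaf, path, root):
    node = h(leaf)
    for is_right, sibling in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

# Toy usage: the root would be signed by the CA inside the extended-CRL, and the
# short response would carry only the leaf plus its sibling path.
revoked = [b"serial-17", b"serial-42", b"serial-99"]
levels = build_tree(revoked)
root = levels[-1][0]
print(verify(b"serial-42", proof_path(levels, 1), root))   # True
```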
93.
Malware is one of the main threats to Internet security in general, and to commercial transactions in particular. However, given the high level of sophistication reached by malware (e.g. usage of encrypted payloads and obfuscation techniques), malware detection tools and techniques still call for effective and efficient solutions. In this paper, we address a specific, dreadful, and widespread financial malware: Zeus. The contributions of this paper are manifold: first, we propose a technique to break the encrypted malware communications, extracting the keystream used to encrypt them; second, we provide a generalization of the proposed keystream extraction technique. Further, we propose Cronus, an IDS that specifically targets the Zeus malware. The implementation of Cronus has been experimentally tested on a production network, and its high performance and effectiveness are discussed. Finally, we highlight some principles underlying malware, and Zeus in particular, that could pave the way for further investigation in this field.
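For intuition only, the toy sketch below shows the generic known-plaintext idea behind keystream recovery on a stream-ciphered channel: if part of a message is predictable, XORing it against the ciphertext yields keystream bytes, which can then expose other traffic protected by the same keystream. This is not the paper's actual extraction technique against Zeus; the messages, host name, and keystream are invented for illustration.

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Toy setup with a made-up keystream standing in for the malware's cipher.
secret_keystream = bytes((i * 73 + 41) % 256 for i in range(64))

msg1 = b"POST /gate.php HTTP/1.1\r\nHost: c2.example\r\n"
msg2 = b"GET /config.bin HTTP/1.1\r\nHost: c2.example\r\n"
c1, c2 = xor(msg1, secret_keystream), xor(msg2, secret_keystream)

# Analyst side: the plaintext prefix of msg1 is guessable (fixed protocol header),
# so the keystream bytes covering that prefix fall out directly ...
known_prefix = b"POST /gate.php HTTP/1.1\r\n"
keystream = xor(c1, known_prefix)                 # keystream = C1 XOR P1

# ... and any other message protected by the same keystream leaks its prefix.
print(xor(c2, keystream))                         # b"GET /config.bin HTTP/1.1\r"
```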
94.
Pervious pavements are sustainable urban drainage systems, already established as rainwater infiltration techniques, that reduce runoff formation and diffuse pollution in cities. The present research focuses on the design and construction of an experimental parking area composed of 45 pervious pavement parking bays. Each pervious pavement was experimentally designed to store rainwater and to allow the level and the quality of the stored water to be measured over time. Six different pervious surfaces are combined with four different geotextiles in order to test which materials best preserve the quality of the stored rainwater over time under the specific weather conditions of northern Spain. The aim of this research was to achieve pervious pavements that simultaneously provide a useful urban surface and harvest rainwater of sufficient quality to be used for non-potable demands.
95.
Web 2.0 provides user-friendly tools that allow people to create and publish content online. User-generated content often takes the form of short texts (e.g., blog posts, news feeds, snippets, etc.). This has motivated an increasing interest in the analysis of short texts and, specifically, in their categorisation. Text categorisation is the task of classifying documents into a certain number of predefined categories. Traditional text classification techniques are mainly based on word-frequency statistical analysis and have proved inadequate for the classification of short texts, where word occurrences are too few. On the other hand, the classic approach to text categorisation is based on a learning process that requires a large number of labeled training texts to achieve accurate performance; however, labeled documents might not be available, whereas unlabeled documents can easily be collected. This paper presents an approach to text categorisation which does not need a pre-classified set of training documents. The proposed method only requires the category names as user input. Each of these categories is defined by means of an ontology of terms modelled by a set of what we call proximity equations. Hence, our method is not based on category occurrence frequency, but depends heavily on the definition of each category and on how well the text fits that definition. The proposed approach is therefore appropriate for short-text classification, where the frequency of occurrence of a category is very small or even zero. Another feature of our method is that the classification process relies on the ability of an extension of the standard Prolog language, named Bousi~Prolog, for flexible matching and knowledge representation. This declarative approach yields a text classifier which is quick and easy to build, and a classification process which is easy for the user to understand. The results of experiments showed that the proposed method achieved a reasonably useful performance.
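A crude Python stand-in for the idea of classifying a short text against category definitions rather than training statistics (the category names, terms, and proximity weights below are invented, and the flexible matching of Bousi~Prolog is reduced to exact word lookup):

```python
# Each category is "defined" by an ontology of terms with proximity weights to the
# category name (a toy analogue of the proximity equations; no training corpus used).
categories = {
    "sports":  {"football": 1.0, "match": 0.8, "goal": 0.7, "league": 0.6},
    "finance": {"stock": 1.0, "market": 0.8, "shares": 0.7, "bank": 0.6},
}

def classify(snippet: str) -> str:
    words = snippet.lower().split()
    # Score = sum of proximity weights of the category terms occurring in the text.
    scores = {c: sum(w for t, w in terms.items() if t in words)
              for c, terms in categories.items()}
    return max(scores, key=scores.get)

print(classify("Late goal decides the league match"))   # sports
```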
96.
97.
Hierarchical clustering is a stepwise clustering method usually based on proximity measures between objects or sets of objects from a given data set. The most common proximity measures are distance measures. The derived proximity matrices can be used to build graphs, which provide the basic structure for some clustering methods. We present here a new proximity matrix based on an entropic measure, and also a clustering algorithm (LEGClust) that builds layers of subgraphs based on this matrix and uses them, together with a hierarchical agglomerative clustering technique, to form the clusters. Our approach capitalizes on both a graph structure and a hierarchical construction. Moreover, by using entropy as a proximity measure we are able, with no assumption about the cluster shapes, to capture the local structure of the data, forcing the clustering method to reflect this structure. We present several experiments on artificial and real data sets that provide evidence of the superior performance of this new algorithm when compared with competing ones.
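A simplified sketch of the layered-subgraph idea: each layer connects every point to one more of its nearest neighbours, and the connected components give progressively coarser clusters. Plain Euclidean distance stands in for the entropic proximity measure, so this only illustrates the graph construction, not LEGClust itself.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components
from scipy.spatial.distance import cdist

def layered_graph_clusters(X, n_layers=3):
    """Cluster by layers of nearest-neighbour subgraphs (illustrative sketch)."""
    D = cdist(X, X)                        # Euclidean stand-in for the entropic proximity
    np.fill_diagonal(D, np.inf)
    order = np.argsort(D, axis=1)          # neighbours of each point, closest first
    n = len(X)
    adj = np.zeros((n, n), dtype=bool)
    labels_per_layer = []
    for layer in range(n_layers):
        # Layer l adds an edge from every point to its (l+1)-th nearest neighbour.
        for i in range(n):
            j = order[i, layer]
            adj[i, j] = adj[j, i] = True
        _, labels = connected_components(adj.astype(int), directed=False)
        labels_per_layer.append(labels)    # coarser clustering as layers accumulate
    return labels_per_layer

X = np.vstack([np.random.randn(20, 2), np.random.randn(20, 2) + 6])
for layer, labels in enumerate(layered_graph_clusters(X), start=1):
    print(f"layer {layer}: {len(set(labels))} clusters")
```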
98.
In this paper, we consider the single-machine weighted tardiness scheduling problem with sequence-dependent setups. We present heuristic algorithms based on the beam search technique. These algorithms include classic beam search procedures, as well as the filtered and recovering variants. Previous beam search implementations use fixed beam and filter widths. We consider the usual fixed-width algorithms, and develop new versions that use variable beam and filter widths.
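A minimal sketch of a classic fixed-width beam search for this problem, assuming toy processing times, due dates, weights, and sequence-dependent setup times; the filtered, recovering, and variable-width variants from the paper are not shown.

```python
import heapq

def beam_search_schedule(jobs, setup, proc, due, weight, beam_width=3):
    """Classic beam search for single-machine weighted tardiness with
    sequence-dependent setups; partial sequences are ranked by tardiness so far."""
    # Beam node: (cost_so_far, current_time, last_job, sequence, remaining_jobs)
    beam = [(0.0, 0.0, None, (), frozenset(jobs))]
    while beam and beam[0][4]:
        candidates = []
        for cost, t, last, seq, remaining in beam:
            for j in remaining:
                s = setup.get((last, j), 0.0)             # sequence-dependent setup time
                finish = t + s + proc[j]
                tard = weight[j] * max(0.0, finish - due[j])
                candidates.append((cost + tard, finish, j,
                                   seq + (j,), remaining - {j}))
        # Keep only the beam_width most promising partial schedules.
        beam = heapq.nsmallest(beam_width, candidates, key=lambda node: node[0])
    best = min(beam, key=lambda node: node[0])
    return best[3], best[0]                                # sequence, total weighted tardiness

jobs = ["A", "B", "C"]
proc = {"A": 4, "B": 2, "C": 3}
due = {"A": 5, "B": 4, "C": 9}
weight = {"A": 2, "B": 1, "C": 3}
setup = {(None, "A"): 1, (None, "B"): 1, (None, "C"): 2,
         ("A", "B"): 1, ("A", "C"): 2, ("B", "A"): 1,
         ("B", "C"): 1, ("C", "A"): 2, ("C", "B"): 1}
print(beam_search_schedule(jobs, setup, proc, due, weight))
```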
99.
We present a method for deconvolution of images by means of an inversion of fractional powers of the Gaussian. The main feature of our model is the introduction of a regularizing term which is also a fractional power of the Laplacian. This term allows us to recover higher frequencies. The model is particularly useful for devising an algorithm for blind deconvolution. We show, analyze, and illustrate through examples the performance of this algorithm.
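For intuition, a Tikhonov-style frequency-domain sketch of this kind of inversion, assuming a Gaussian blur raised to a fractional power alpha and a fractional-Laplacian penalty of order s; the paper's exact functional and algorithm may differ.

```python
import numpy as np

def fractional_gaussian_deconvolve(f, sigma=2.0, alpha=1.0, lam=1e-2, s=1.0):
    """Regularized inversion of a fractional Gaussian blur (illustrative sketch).

    In the Fourier domain:
        u_hat = conj(k_hat) * f_hat / (|k_hat|^2 + lam * |xi|^(2*s)),
    where k_hat = exp(-alpha * sigma^2 * |xi|^2 / 2) is the blur symbol and the
    lam * |xi|^(2*s) term is the fractional-Laplacian penalty.
    """
    ny, nx = f.shape
    ky = np.fft.fftfreq(ny) * 2 * np.pi
    kx = np.fft.fftfreq(nx) * 2 * np.pi
    xi2 = ky[:, None] ** 2 + kx[None, :] ** 2                  # |xi|^2 on the grid
    k_hat = np.exp(-alpha * sigma ** 2 * xi2 / 2.0)            # fractional Gaussian symbol
    f_hat = np.fft.fft2(f)
    u_hat = np.conj(k_hat) * f_hat / (np.abs(k_hat) ** 2 + lam * xi2 ** s)
    return np.real(np.fft.ifft2(u_hat))
```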
100.
Recommender systems arose with the goal of helping users search overloaded information domains (such as e-commerce, e-learning or Digital TV). These tools automatically select items (commercial products, educational courses, TV programs, etc.) that may be appealing to each user, taking into account his or her personal preferences. The personalization strategies used to compare these preferences with the available items suffer from well-known deficiencies that reduce the quality of the recommendations. Most of the limitations arise from using syntactic matching techniques, because they miss a lot of useful knowledge during the recommendation process. In this paper, we propose a personalization strategy that overcomes these drawbacks by applying inference techniques borrowed from the Semantic Web. Our approach reasons about the semantics of items and user preferences to discover complex associations between them. These semantic associations provide additional knowledge about the user preferences, and permit the recommender system to compare them with the available items in a more effective way. The proposed strategy is flexible enough to be applied in many recommender systems, regardless of their application domain. Here, we illustrate its use in AVATAR, a tool that selects appealing audiovisual programs from among the myriad available in Digital TV.
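A toy sketch of scoring items by semantic association through a class hierarchy instead of syntactic matching (the genre hierarchy and the distance-based score are invented for illustration and are not the AVATAR strategy):

```python
# Toy ontology of audiovisual genres: child -> parent (hypothetical class hierarchy).
parent = {
    "football match": "sports",
    "tennis match": "sports",
    "sports": "audiovisual content",
    "nature documentary": "documentary",
    "documentary": "audiovisual content",
}

def ancestors(concept):
    chain = [concept]
    while concept in parent:
        concept = parent[concept]
        chain.append(concept)
    return chain

def semantic_score(user_likes, item):
    """Score an item by how close it sits in the hierarchy to anything the user
    likes: 1 / (1 + path length through the deepest shared ancestor)."""
    best = 0.0
    item_anc = ancestors(item)
    for liked in user_likes:
        liked_anc = ancestors(liked)
        shared = [c for c in item_anc if c in liked_anc]
        if shared:
            dist = item_anc.index(shared[0]) + liked_anc.index(shared[0])
            best = max(best, 1.0 / (1.0 + dist))
    return best

user_likes = ["football match"]
for item in ["tennis match", "nature documentary"]:
    print(item, semantic_score(user_likes, item))
# "tennis match" scores higher: it shares the "sports" ancestor with the user's preference.
```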