A total of 1,769 results were found; entries 21–30 are shown below.
21.
A cartographic-oriented model uses algebraic map operations to perform spatial analysis of medical data relative to the human body. A prototype system uses 3D visualization techniques to present the analysis results, and its implementation suggests that the model could provide the basis for a medical application tool that offers new insight into the data.
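As a rough, hypothetical illustration of a "local" map-algebra operation applied to body-referenced data layers (the layer names, grids, and thresholds below are invented for the example, not taken from the paper):

```python
import numpy as np

# Hypothetical body-referenced layers sampled on a common 3D grid:
# each voxel holds a measured value registered to an anatomical frame.
tissue_density = np.random.rand(32, 32, 32)   # e.g. normalised CT density
tracer_uptake  = np.random.rand(32, 32, 32)   # e.g. normalised PET uptake

# A "local" map-algebra operation combines layers voxel by voxel.
def local_op(a, b, fn):
    return fn(a, b)

# Example: flag voxels where both layers exceed (assumed) thresholds.
suspicious = local_op(tissue_density, tracer_uptake,
                      lambda d, u: (d > 0.7) & (u > 0.8))
print(suspicious.sum(), "voxels flagged for 3D visualisation")
```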
22.
Mathematical morphology was originally conceived as a set-theoretic approach for the processing of binary images. Extensions of classical binary morphology to gray-scale morphology include approaches based on fuzzy set theory. This paper discusses and compares several well-known and new approaches to gray-scale and fuzzy mathematical morphology. We show in particular that a certain approach to fuzzy mathematical morphology ultimately depends on the choice of a fuzzy inclusion measure and on a notion of duality. This fact gives rise to a clearly defined scheme for classifying fuzzy mathematical morphologies. The umbra and the level set approach, an extension of the threshold approach to gray-scale mathematical morphology, can also be embedded in this scheme, since they can be identified with certain fuzzy approaches.
Marcos Eduardo Valle
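To make the gray-scale and fuzzy notions concrete, here is a minimal sketch (not the paper's formulation) of flat gray-scale erosion and of a fuzzy erosion defined as an infimum of implications, using the Gödel implication as one assumed choice of operator:

```python
import numpy as np

def gray_erosion(img, se):
    """Flat gray-scale erosion: minimum over the structuring-element window."""
    h, w = img.shape
    r = se.shape[0] // 2
    padded = np.pad(img, r, mode='edge')
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 2*r + 1, j:j + 2*r + 1]
            out[i, j] = win[se > 0].min()
    return out

def fuzzy_erosion(img, se):
    """Fuzzy erosion as an infimum of implications I(se, img); here the
    Gödel implication I(a, b) = 1 if a <= b else b (an assumed choice)."""
    h, w = img.shape
    r = se.shape[0] // 2
    padded = np.pad(img, r, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 2*r + 1, j:j + 2*r + 1]
            impl = np.where(se <= win, 1.0, win)
            out[i, j] = impl.min()
    return out

img = np.random.rand(16, 16)   # toy gray-scale image with values in [0, 1]
se = np.ones((3, 3))           # flat 3x3 structuring element
print(gray_erosion(img, se).shape, fuzzy_erosion(img, se).shape)
```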
23.
The main goal of focused crawlers is to crawl Web pages that are relevant to a specific topic or user interest, and they play an important role in a great variety of applications. In general, they work by trying to find and crawl all pages deemed related to an implicitly declared topic. However, users are often not simply interested in any document about a topic; they may want only documents of a given type or genre on that topic to be retrieved. In this article, we describe an approach to focused crawling that exploits not only content-related information but also genre information present in Web pages to guide the crawling process. This approach is designed for situations in which the specific topic of interest can be expressed by two sets of terms, the first describing genre aspects of the desired pages and the second related to the subject or content of these pages, thus requiring no training or preprocessing of any kind. The effectiveness, efficiency and scalability of the proposed approach are demonstrated by a set of experiments involving the crawling of pages related to syllabi of computer science courses, job offers in the computer science field, and sale offers of computer equipment. These experiments show that focused crawlers built according to our genre-aware approach achieve F1 levels above 88%, requiring the analysis of no more than 65% of the visited pages in order to find 90% of the relevant pages. In addition, we experimentally analyze the impact of term selection on our approach and evaluate a proposed strategy for semi-automatic generation of such terms. This analysis shows that a small set of terms selected by an expert, or a set of terms specified by a typical user familiar with the topic, is usually enough to produce good results, and that the semi-automatic strategy is very effective in supporting the task of selecting the term sets required to guide a crawling process.
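A toy sketch of how genre and content term sets might jointly score a fetched page to prioritise the crawl frontier (the term sets, scoring formula, and link extraction below are assumptions for illustration, not the article's implementation):

```python
import re
from urllib.parse import urljoin

# Hypothetical term sets: one describing the genre of the desired pages,
# one describing their subject/content (both assumed for illustration).
GENRE_TERMS   = {"syllabus", "course outline", "prerequisites", "grading"}
CONTENT_TERMS = {"computer science", "algorithms", "data structures"}

def page_score(text):
    """Combine genre and content evidence; a page must match both aspects."""
    text = text.lower()
    genre_hits   = sum(term in text for term in GENRE_TERMS)
    content_hits = sum(term in text for term in CONTENT_TERMS)
    # Simple product, so pages lacking either aspect score zero.
    return (genre_hits / len(GENRE_TERMS)) * (content_hits / len(CONTENT_TERMS))

def extract_links(base_url, html):
    return [urljoin(base_url, m) for m in re.findall(r'href="([^"]+)"', html)]

# A frontier ordered by the score of the page where each link was found
# would then prioritise URLs whose source pages match both term sets.
html = '<a href="/cs101/syllabus.html">CS101 syllabus</a> grading policy, algorithms'
print(page_score(html), extract_links("http://example.edu/", html))
```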
24.
Journal of Intelligent Manufacturing - Remanufacturing includes disassembly and reassembly of used products to save natural resources and reduce emissions. While assembly is widely understood in...
25.
Nanoparticles of copper/cuprous oxide (Cu/Cu2O) were successfully synthesised by a green chemistry route. The synthesis was carried out using an extract of Stachys lavandulifolia as both reducing and capping agent in a facile procedure. The nanoparticles were characterised by several techniques, including X-ray diffraction, which indicated that the synthesised sample comprised both copper and cuprous oxide phases. The nanoparticles had a mean size of 80 nm and exhibited an impressive bactericidal effect on Pseudomonas aeruginosa.
Inspec keywords: copper, copper compounds, nanoparticles, nanofabrication, nanomedicine, antibacterial activity, X-ray diffraction
Other keywords: nanoparticle synthesis, Stachys lavandulifolia, antibacterial activity, green chemistry route, reducing agents, capping agents, X-ray diffraction, bactericidal effect, Pseudomonas aeruginosa, Cu-Cu2O
26.
As the dependence on mobile devices increases, the need to support a wider range of users and devices becomes crucial. Elders and people with disabilities adopt new technologies reluctantly, a tendency caused by the lack of adaptation of these technologies to their needs. To address this challenge, this paper describes a framework, Imhotep, whose aim is to aid developers in creating accessible applications, making the development of user-centered applications easier and faster. Our framework makes it easy to adapt applications to the constraints imposed by user capabilities (sensory, cognitive, and physical) and device capabilities, by providing a repository that manages the compilation and deployment of applications containing a set of preprocessor directives in the source code. These directives are enhanced with concepts that are automatically adjusted to the current trends of mobile devices by means of a Fuzzy Knowledge-Eliciting Reasoner. Our final goal is to increase the number of applications targeted at elders and people with disabilities by providing tools that facilitate their development. The paper also evaluates both the accuracy of the fuzzy terms generated for mobile devices and the usability of the proposed platform.
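A minimal sketch of the general idea of capability-driven preprocessor directives (the directive syntax, capability names, and values below are hypothetical and are not Imhotep's actual notation):

```python
import re

# Hypothetical capability profile for a user/device pair.
PROFILE = {"vision": "low", "motor": "normal", "screen_size_inches": 4.0}

# Hypothetical directive syntax: lines between "#IF <capability>==<value>"
# and "#ENDIF" are kept only when the profile matches.
SOURCE = """\
button_height = 48
#IF vision==low
button_height = 96          # larger touch targets for low-vision users
font_scale = 1.5
#ENDIF
"""

def preprocess(source, profile):
    out, keep = [], True
    for line in source.splitlines():
        m = re.match(r"#IF\s+(\w+)==(\w+)", line)
        if m:
            keep = str(profile.get(m.group(1))) == m.group(2)
        elif line.strip() == "#ENDIF":
            keep = True
        elif keep:
            out.append(line)
    return "\n".join(out)

print(preprocess(SOURCE, PROFILE))
```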
27.
This paper proposes a multi-section vector quantization approach for on-line signature recognition. We have used a database of 330 users, which includes 25 skilled forgeries performed by 5 different impostors; this database is larger than those typically used in the literature. Nevertheless, we also provide results on the SVC database. Our proposed system obtains results similar to those of the state-of-the-art online signature recognition algorithm, Dynamic Time Warping, with a computational requirement around 47 times lower. In addition, our system reduces database storage requirements thanks to vector compression, and it is more privacy-friendly because the original signature cannot be recovered from the codebooks. Experimental results reveal that the proposed multi-section vector quantization achieves a 98% identification rate, with minimum Detection Cost Function values of 2.29% for random forgeries and 7.75% for skilled forgeries.
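An illustrative sketch of the general idea behind per-section vector-quantisation codebooks for signatures (the section count, feature layout, k-means training, and distortion score are assumptions for illustration, not the paper's exact method):

```python
import numpy as np

def train_codebook(vectors, k=16, iters=20, seed=0):
    """Plain k-means as a simple vector-quantisation codebook trainer."""
    rng = np.random.default_rng(seed)
    centroids = vectors[rng.choice(len(vectors), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((vectors[:, None] - centroids) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = vectors[labels == j].mean(axis=0)
    return centroids

def quantisation_distortion(vectors, codebook):
    """Mean distance of each vector to its nearest codeword."""
    d = np.sqrt(((vectors[:, None] - codebook) ** 2).sum(-1)).min(axis=1)
    return d.mean()

# Split a signature (sequence of x, y, pressure samples) into sections and
# train one codebook per section; a test signature is scored by the summed
# distortion against the claimed user's per-section codebooks.
signature = np.random.rand(300, 3)          # toy enrolment signature
sections = np.array_split(signature, 4)     # 4 sections (assumed)
codebooks = [train_codebook(s) for s in sections]

test = signature + 0.01 * np.random.randn(*signature.shape)
score = sum(quantisation_distortion(s, cb)
            for s, cb in zip(np.array_split(test, 4), codebooks))
print("total distortion:", round(score, 4))
```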
28.
Most post-processors for boundary element (BE) analysis use an auxiliary domain mesh to display domain results, which works against the main modelling advantage of a pure boundary discretization. This paper introduces a novel visualization technique that preserves the basic properties of boundary element methods. The proposed algorithm does not require any domain discretization and is based on the direct and automatic identification of isolines. Another critical aspect of visualizing domain results in BE analysis is the effort required to evaluate results at interior points. To tackle this issue, the present article also compares the performance of two different BE formulations (conventional and hybrid). In addition, the paper presents an overview of the most common post-processing and visualization techniques in BE analysis, such as the classical scan-line algorithms and interpolation over a domain discretization. The results presented herein show that the proposed algorithm offers very high performance compared with other visualization procedures.
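A generic sketch (not the article's algorithm) of how isoline points can be located from pointwise interior evaluations alone, with no domain mesh; the field function and probe segments below are assumed for illustration:

```python
import math

def field(x, y):
    # Stand-in for an interior-point evaluation of the BE solution
    # (e.g. a potential); a simple harmonic function is assumed here.
    return math.exp(x) * math.cos(y)

def isoline_points(level, segments, tol=1e-6):
    """Locate isoline crossings by bisection along probe segments,
    using only pointwise field evaluations (no domain mesh)."""
    points = []
    for (x0, y0), (x1, y1) in segments:
        f = lambda t: field(x0 + t * (x1 - x0), y0 + t * (y1 - y0)) - level
        a, b = 0.0, 1.0
        if f(a) * f(b) > 0:
            continue                      # no crossing on this segment
        while b - a > tol:
            m = 0.5 * (a + b)
            a, b = (m, b) if f(a) * f(m) > 0 else (a, m)
        t = 0.5 * (a + b)
        points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return points

# Horizontal probe segments across a unit-square domain.
segs = [((0.0, y), (1.0, y)) for y in [0.1 * i for i in range(11)]]
print(isoline_points(1.5, segs)[:3])
```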
29.
Hub-and-spoke networks are widely studied in the area of location theory. They arise in several contexts, including passenger airlines, postal and parcel delivery, and computer and telecommunication networks. Hub location problems usually involve three simultaneous decisions: the optimal number of hub nodes, their locations, and the allocation of the non-hub nodes to the hubs. In the uncapacitated single allocation hub location problem (USAHLP), hub nodes have no capacity constraints and non-hub nodes must be assigned to exactly one hub. In this paper, we propose three variants of a simple and efficient multi-start tabu search heuristic, as well as a two-stage integrated tabu search heuristic, to solve this problem. With the multi-start heuristics, several different initial solutions are constructed and then improved by tabu search, while in the two-stage integrated heuristic tabu search is applied to improve both the locational and the allocational parts of the problem. Computational experiments using typical benchmark problems (the Civil Aeronautics Board (CAB) and Australian Post (AP) data sets) as well as new and modified instances show that our approaches consistently return the optimal or best-known results in very short CPU times, thus making it possible to efficiently solve larger instances of the USAHLP than those found in the literature. We also report the integer optimal solutions for all 80 CAB data set instances and for the 12 AP instances with up to 100 nodes, as well as for the corresponding newly generated AP instances with reduced fixed costs.
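A compact, generic sketch of a multi-start tabu search for the USAHLP (the cost structure, open/close-one-hub neighbourhood, tabu tenure, and the toy instance are assumptions for illustration; the paper's heuristics are more elaborate):

```python
import random

def usahlp_cost(hubs, alloc, w, c, fixed, alpha=0.75):
    """Fixed hub costs + collection, discounted inter-hub transfer, distribution."""
    cost = sum(fixed[h] for h in hubs)
    n = len(c)
    for i in range(n):
        for j in range(n):
            hi, hj = alloc[i], alloc[j]
            cost += w[i][j] * (c[i][hi] + alpha * c[hi][hj] + c[hj][j])
    return cost

def nearest_allocation(hubs, c):
    return [min(hubs, key=lambda h: c[i][h]) for i in range(len(c))]

def tabu_search(w, c, fixed, iters=200, tabu_tenure=7, seed=0):
    rng = random.Random(seed)
    n = len(c)
    hubs = set(rng.sample(range(n), 2))             # random initial hub set
    alloc = nearest_allocation(hubs, c)
    best = (usahlp_cost(hubs, alloc, w, c, fixed), hubs, alloc)
    tabu = {}
    for it in range(iters):
        candidates = []
        for k in range(n):                          # neighbourhood: open/close one hub
            new_hubs = hubs ^ {k}
            if not new_hubs:
                continue
            new_alloc = nearest_allocation(new_hubs, c)
            cand_cost = usahlp_cost(new_hubs, new_alloc, w, c, fixed)
            if tabu.get(k, -1) < it or cand_cost < best[0]:   # aspiration criterion
                candidates.append((cand_cost, k, new_hubs, new_alloc))
        if not candidates:
            continue
        cand_cost, k, hubs, alloc = min(candidates)
        tabu[k] = it + tabu_tenure
        if cand_cost < best[0]:
            best = (cand_cost, hubs, alloc)
    return best

def multi_start(w, c, fixed, starts=5):
    return min(tabu_search(w, c, fixed, seed=s) for s in range(starts))

# Tiny random instance (assumed data) just to exercise the sketch.
random.seed(1)
n = 6
c = [[abs(i - j) * 1.0 for j in range(n)] for i in range(n)]
w = [[random.randint(0, 5) for _ in range(n)] for _ in range(n)]
fixed = [10.0] * n
cost, hubs, alloc = multi_start(w, c, fixed)
print("best cost:", cost, "hubs:", sorted(hubs))
```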
30.
Approximate data matching aims at assessing whether two distinct instances of data represent the same real-world object. The comparison between data values is usually done by applying a similarity function that returns a similarity score; if this score surpasses a given threshold, both data instances are considered to represent the same real-world object. These score values depend on the algorithm that implements the function and have no meaning to the user. In addition, score values generated by different functions are not comparable, which potentially leads to problems when the scores returned by different similarity functions need to be combined to compute the similarity between records. In this article, we propose that thresholds should be defined in terms of the precision that is expected from the matching process rather than in terms of the raw scores returned by the similarity function. Precision is a widely known metric and has a clear interpretation from the user's point of view. Our approach defines mappings from score values to precision values, which we call adjusted scores. To obtain such mappings, our approach requires training over a small dataset. Experiments show that the training can be reused for different datasets in the same domain. Our results also demonstrate that existing methods for combining scores to compute the similarity between records may be enhanced if adjusted scores are used.
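An illustrative sketch of the adjusted-score idea: a raw similarity score is replaced by the precision the matcher achieves when that score is used as the acceptance threshold on a small labelled training set (the similarity function and training pairs below are invented for the example; the article defines its own mapping procedure):

```python
from difflib import SequenceMatcher

def similarity(a, b):
    # Stand-in similarity function; its raw score has no meaning by itself.
    return SequenceMatcher(None, a, b).ratio()

# Small labelled training set: (value1, value2, is_same_real_world_object).
TRAIN = [
    ("John Smith", "Jon Smith", True),
    ("John Smith", "Jane Smith", False),
    ("Acme Corp.", "ACME Corporation", True),
    ("Acme Corp.", "Apex Corp.", False),
    ("Main St 42", "Main Street 42", True),
    ("Main St 42", "High St 42", False),
]

def score_to_precision(threshold):
    """Precision of the matcher when raw scores >= threshold are accepted."""
    accepted = [(a, b, y) for a, b, y in TRAIN if similarity(a, b) >= threshold]
    if not accepted:
        return 1.0
    return sum(y for _, _, y in accepted) / len(accepted)

# Adjusted score: replace a raw score by the precision achieved at that score,
# so thresholds can be stated as "I want at least 90% precision".
for raw in (0.5, 0.7, 0.9):
    print(f"raw score {raw:.1f} -> adjusted (precision) {score_to_precision(raw):.2f}")
```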