991.
Multiversion databases store both current and historical data. Rows are typically annotated with timestamps representing the period during which the row is or was valid. We develop novel techniques to reduce index maintenance in multiversion databases, so that indexes can be used effectively for analytical queries over current data without placing a heavy burden on transaction throughput. To this end, we redesign persistent index data structures in the storage hierarchy to employ an extra level of indirection. The indirection level is stored on solid-state disks (SSDs), which support very fast random I/O, so traversing the extra level of indirection incurs a relatively small overhead. The extra level of indirection dramatically reduces the number of magnetic-disk I/Os needed for index updates and localizes maintenance to indexes on updated attributes. Additionally, we batch insertions within the indirection layer to reduce physical disk I/Os for indexing new records. We further exploit SSDs by introducing novel DeltaBlock techniques for storing recent changes to data on SSDs. Using DeltaBlocks, we propose an efficient method to periodically flush recently changed data from SSDs to HDDs such that, on the one hand, we keep track of every change (or delta) for every record and, on the other hand, we avoid redundantly storing the unchanged portion of updated records. By reducing the index-maintenance overhead on transactions, we enable operational data stores to create more indexes to support queries. We have developed a prototype of our indirection proposal by extending the widely used open-source generalized search tree (GiST) project, which is also employed in PostgreSQL. Our working implementation demonstrates that we can reduce index maintenance and/or query processing cost by a factor of 3. For the insertion of new records, our batching technique can save up to 90% of the insertion time. For updates, our prototype demonstrates that we can reduce the database size by up to 80% even with modest space allocated for DeltaBlocks on SSDs.
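A minimal sketch of the indirection idea may help: index leaves reference stable logical record IDs (LIDs), and a LID-to-RID table (conceptually the SSD-resident indirection level) maps each LID to the current physical record. All class and method names below are hypothetical, and the in-memory dictionaries stand in for the on-disk structures; this is not the authors' GiST-based implementation.

```python
# Sketch of index indirection via logical record IDs (LIDs). The lid_to_rid
# table models the SSD-resident indirection level; the per-attribute indexes
# model HDD-resident B-trees whose leaves store LIDs instead of physical RIDs.

class IndirectionStore:
    def __init__(self):
        self.lid_to_rid = {}           # "SSD": LID -> current physical RID
        self.indexes = {}              # attribute -> {key: set of LIDs}
        self.heap = {}                 # RID -> row dict (simulated base table)
        self.next_rid = 0

    def insert(self, lid, row):
        rid = self.next_rid; self.next_rid += 1
        self.heap[rid] = row
        self.lid_to_rid[lid] = rid
        for attr, key in row.items():  # index entries reference the LID, not the RID
            self.indexes.setdefault(attr, {}).setdefault(key, set()).add(lid)

    def update(self, lid, changes):
        """Write a new version; only indexes on *changed* attributes are touched."""
        old = self.heap[self.lid_to_rid[lid]]
        new = {**old, **changes}
        rid = self.next_rid; self.next_rid += 1
        self.heap[rid] = new
        self.lid_to_rid[lid] = rid     # one cheap indirection write replaces N index updates
        for attr, key in changes.items():
            self.indexes[attr][old[attr]].discard(lid)
            self.indexes[attr].setdefault(key, set()).add(lid)

    def lookup(self, attr, key):
        return [self.heap[self.lid_to_rid[lid]]
                for lid in self.indexes.get(attr, {}).get(key, ())]
```

Because unindexed-attribute updates only rewrite the LID entry, indexes on unchanged attributes stay untouched, which is the effect the abstract describes.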
992.
Analytical workloads in data warehouses often include heavy joins, where queries involve multiple fact tables in addition to the typical star patterns, dimensional grouping and selections. In this paper, we propose a new processing and storage framework called bitwise dimensional co-clustering (BDCC) that avoids replication, and thus keeps updates fast, yet accelerates all of these foreign-key joins, efficiently supports grouping and pushes down most dimensional selections. The core idea of BDCC is to cluster each table on a mix of dimensions, each possibly derived from attributes imported over an incoming foreign key, thereby creating foreign-key-connected tables with partially shared clusterings. These clusterings are later used to accelerate any join between two tables that have some dimension in common, and additionally allow selections to be pushed down and propagated (reducing I/O) and aggregation and ordering operations to be accelerated. Besides the general framework, we describe an algorithm that automatically derives such a physically co-clustered database, and we describe query processing and query optimization techniques that can easily be fitted into existing relational engines. We present an experimental evaluation on the TPC-H benchmark in the Vectorwise system, showing that co-clustering can significantly enhance its already high performance while significantly reducing the memory consumption of the system.
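The sketch below illustrates the general flavor of a bit-interleaved co-clustering key; the bucket counts, bit widths and interleaving order are illustrative assumptions, not the paper's exact encoding. Tables ordered by such keys share the bits of any common dimension, which is what makes their foreign-key joins merge-like.

```python
# Illustrative bitwise co-clustering key: take a few high-order bits from each
# participating dimension (possibly imported over a foreign key) and interleave
# them round-robin, MSB first, to form the table's clustering key.

def dimension_bits(value, domain_size, bits):
    """Map a dimension value in [0, domain_size) to its `bits` high-order bits."""
    return (value * (1 << bits)) // domain_size

def bdcc_key(dim_values, bits_per_dim):
    """Interleave the chosen bits of each dimension (round-robin, MSB first)."""
    key = 0
    for b in range(max(bits_per_dim), 0, -1):
        for d, nbits in zip(dim_values, bits_per_dim):
            if nbits >= b:
                key = (key << 1) | ((d >> (b - 1)) & 1)
    return key

# Example: a fact row referencing date bucket 5 (of 32) and region bucket 2 (of 8),
# using 3 bits from each dimension.
date_b = dimension_bits(5, 32, 3)     # high bits of 00101 -> 001 = 1
region_b = dimension_bits(2, 8, 3)    # 010 = 2
print(bin(bdcc_key([date_b, region_b], [3, 3])))   # -> 0b110 (interleaved key 6)
```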
993.
Bit-vectors are widely used for indexing and summarizing data because they can be processed efficiently on modern computers. Sparse bit-vectors can be further compressed to reduce their space requirements. Special compression schemes based on run-length encoding have been designed to avoid explicit decompression and to minimize decoding overhead during query execution. Moreover, highly compressed bit-vectors can exhibit faster query times than non-compressed ones. However, for hard-to-compress bit-vectors, compression does not speed up queries and can add considerable overhead; in these cases, bit-vectors are often stored verbatim (non-compressed). Queries, in turn, are answered by executing a cascade of bit-wise operations involving indexed bit-vectors and intermediate results. Often, even when the original bit-vectors are hard to compress, the intermediate results become sparse, so query performance can be improved by compressing these bit-vectors as the query is executed. In this scenario, verbatim and compressed bit-vectors must be operated on together. In this paper, we propose a hybrid framework in which compressed and verbatim bitmaps coexist, and we design algorithms to execute queries under this hybrid model. Our query optimizer decides at run time when to compress or decompress a bit-vector. With our heuristics, applications using higher-density bitmaps can benefit from this hybrid model, improving both their query time and their memory utilization.
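A toy sketch of the run-time decision follows. The density threshold and the plain bit-list/RLE representations are assumptions for illustration only; production schemes (WAH, EWAH and relatives) operate on machine words and avoid full materialization, which this sketch does not attempt.

```python
# Hybrid bitmap sketch: an intermediate result is kept verbatim when dense and
# run-length encoded (RLE) when sparse; the cut-off is a tunable heuristic.

DENSITY_THRESHOLD = 0.05   # assumed cut-off: compress below 5% set bits

def to_runs(bits):
    """RLE over a list of 0/1 values: [(bit, run_length), ...]."""
    runs, prev, length = [], bits[0], 0
    for b in bits:
        if b == prev:
            length += 1
        else:
            runs.append((prev, length)); prev, length = b, 1
    runs.append((prev, length))
    return runs

def from_runs(runs):
    return [b for b, n in runs for _ in range(n)]

def bitmap_and(x, y):
    """AND two bitmaps, each tagged ('verbatim', bits) or ('rle', runs)."""
    xb = x[1] if x[0] == 'verbatim' else from_runs(x[1])
    yb = y[1] if y[0] == 'verbatim' else from_runs(y[1])
    result = [a & b for a, b in zip(xb, yb)]
    density = sum(result) / max(len(result), 1)
    if density < DENSITY_THRESHOLD:            # run-time decision: compress sparse results
        return ('rle', to_runs(result))
    return ('verbatim', result)
```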
994.
State-of-the-art distributed RDF systems partition data across multiple computer nodes (workers). Some systems perform cheap hash partitioning, which may result in expensive query evaluation. Others try to minimize inter-node communication, which requires an expensive data-preprocessing phase and leads to a high startup cost. A priori knowledge of the query workload has also been used to create partitions, which, however, are static and do not adapt to workload changes. In this paper, we propose AdPart, a distributed RDF system that addresses the shortcomings of previous work. First, AdPart applies lightweight partitioning to the initial data, distributing triples by hashing on their subjects; this keeps its startup overhead low. At the same time, the locality-aware query optimizer of AdPart takes full advantage of the partitioning to (1) support fully parallel processing of join patterns on subjects and (2) minimize data communication for general queries by applying hash distribution of intermediate results instead of broadcasting, wherever possible. Second, AdPart monitors the data access patterns and dynamically redistributes and replicates the instances of the most frequent ones among the workers. As a result, the communication cost for future queries is drastically reduced or even eliminated. To control replication, AdPart implements an eviction policy for the redistributed patterns. Our experiments with synthetic and real data verify that AdPart (1) starts faster than all existing systems, (2) processes thousands of queries before other systems come online, and (3) gracefully adapts to the query load, being able to evaluate queries on billion-scale RDF data in sub-second times.
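A small sketch of the subject-hash partitioning described above, with a worker count and a toy star query chosen purely for illustration: because every triple of a given subject lands on the same worker, a star join on a common subject needs no inter-worker communication.

```python
# Sketch of lightweight subject-hash partitioning for RDF triples and a
# communication-free star query evaluated locally on each worker.

from collections import defaultdict

NUM_WORKERS = 4

def partition(triples, num_workers=NUM_WORKERS):
    """Distribute (subject, predicate, object) triples by hashing the subject."""
    workers = [defaultdict(list) for _ in range(num_workers)]
    for s, p, o in triples:
        w = hash(s) % num_workers
        workers[w][(s, p)].append(o)     # simple per-worker (subject, predicate) index
    return workers

def star_query(workers, predicates):
    """Subjects that have all the given predicates; each worker answers locally."""
    results = []
    for w in workers:
        subjects = {s for (s, p) in w if p == predicates[0]}
        for s in subjects:
            if all((s, p) in w for p in predicates[1:]):
                results.append(s)        # only final results need to be gathered
    return results

triples = [("alice", "worksAt", "kaust"), ("alice", "knows", "bob"),
           ("bob", "worksAt", "kaust")]
workers = partition(triples)
print(star_query(workers, ["worksAt", "knows"]))   # -> ['alice']
```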
995.
996.
This paper studies the problem of conducting external sorting on flash drives while avoiding intermediate writes to the disk. The focus is on sorting in portable electronic devices, where relations are only larger than main memory by a small factor, and on sorting as part of distributed processes, where relations are frequently partially sorted. In such cases, sort algorithms that refrain from writing intermediate results to the disk have three advantages over algorithms that perform intermediate writes. First, on devices where read operations are much faster than writes, such methods are efficient and frequently outperform merge sort. Second, they reduce the flash-cell degradation caused by writes. Third, they can be used when there is not enough disk space for the intermediate results. Novel sort algorithms that avoid intermediate writes to the disk are presented. An experimental evaluation on different flash storage devices shows that, in many cases, the new algorithms can extend the lifespan of the devices by avoiding unnecessary writes to the disk, while maintaining efficiency in comparison with merge sort.
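One generic way to trade extra reads for zero intermediate writes is a multi-pass selection sort: each pass re-scans the input and emits the next in-memory batch of smallest keys, so only the final sorted output is ever written. The sketch below illustrates that pattern under assumed inputs (one integer key per line, hypothetical file paths); it is not necessarily one of the paper's algorithms.

```python
# Read-many / write-once external sort: each pass scans the whole input and
# selects the next `memory_capacity` smallest records greater than the last
# record emitted, so no intermediate runs are written to flash.

import heapq

def no_write_external_sort(in_path, out_path, memory_capacity=1024):
    last = None                                   # (key, position) of the last record emitted
    with open(out_path, "w") as out:
        while True:
            heap = []                             # max-heap (via negation) of current candidates
            with open(in_path) as f:
                for pos, line in enumerate(f):
                    item = (int(line), pos)       # position breaks ties among duplicate keys
                    if last is not None and item <= last:
                        continue                  # already written in an earlier pass
                    if len(heap) < memory_capacity:
                        heapq.heappush(heap, (-item[0], -item[1]))
                    elif (-heap[0][0], -heap[0][1]) > item:
                        heapq.heapreplace(heap, (-item[0], -item[1]))
            if not heap:
                return                            # everything has been emitted
            batch = sorted((-k, -p) for k, p in heap)
            for key, pos in batch:
                out.write(f"{key}\n")
            last = batch[-1]
```

For a relation only a few times larger than memory, this costs a handful of full read passes but a single sequential write of the output, which matches the read-fast/write-slow asymmetry the abstract highlights.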
997.
As telecommunication networks grow in size and complexity, monitoring systems need to scale up accordingly. Alarm data generated in a large network are often highly correlated. These correlations can be exploited to simplify network fault management by reducing the number of alarms presented to the network-monitoring operator, making it easier to react to network failures. In some scenarios, however, it is highly desirable to prevent failures altogether by predicting the occurrence of alarms beforehand. This work investigates the use of data mining methods to generate knowledge from historical alarm data and to use that knowledge to train a machine learning system that predicts the occurrence of the most relevant alarms in the network. The learning system is designed to be retrained periodically in order to keep its knowledge base up to date.
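A minimal sketch of such a pipeline follows: per-type alarm counts in a sliding window serve as features, and a classifier predicts whether the target alarm appears in the following window. The window lengths, feature design, toy alarm log and the scikit-learn model are all assumptions for illustration, not the paper's configuration.

```python
# Sliding-window alarm prediction sketch: featurize historical alarms and
# train a classifier to predict the next occurrence of a target alarm.

from collections import Counter
from sklearn.ensemble import RandomForestClassifier

def make_dataset(events, alarm_types, target, window=60, horizon=15):
    """events: list of (timestamp_seconds, alarm_type), sorted by time."""
    X, y = [], []
    if not events:
        return X, y
    t, t_end = events[0][0], events[-1][0]
    while t + window + horizon <= t_end:
        in_win = [a for ts, a in events if t <= ts < t + window]
        future = [a for ts, a in events if t + window <= ts < t + window + horizon]
        counts = Counter(in_win)
        X.append([counts.get(a, 0) for a in alarm_types])   # one count per alarm type
        y.append(int(target in future))                     # does the target alarm occur next?
        t += horizon                                          # slide the window
    return X, y

events = [(0, "LOS"), (30, "LOS"), (50, "AIS"), (70, "LINK_DOWN"),
          (130, "LOS"), (150, "LINK_DOWN"), (400, "AIS")]
X, y = make_dataset(events, ["LOS", "AIS", "LINK_DOWN"], target="LINK_DOWN")
model = RandomForestClassifier(n_estimators=50).fit(X, y)   # periodic retraining = refit on fresh history
```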
998.
The idea of Bessel-Fourier moments (BFMs) for image analysis and rotation-invariant image recognition has been proposed recently. In this paper, we extend the previous work and propose a new method for rotation-, scaling- and translation- (RST-) invariant texture recognition using Bessel-Fourier moments. Compared with other moment-based methods, the radial polynomials of Bessel-Fourier moments have more zeros, and these zeros are more evenly distributed. This makes Bessel-Fourier moments more suitable for invariant texture recognition as a generalization of orthogonal complex moments. In the experiments, we constructed three test sets of 16, 24 and 54 texture images by translating, rotating and scaling them separately. The correct classification percentages (CCPs) are compared with those of methods based on orthogonal Fourier-Mellin moments and Zernike moments under both noise-free and noisy conditions. The experimental results validate the theoretical derivation: BFMs achieve better recognition capability and noise robustness for RST texture recognition, under both noise-free and noisy conditions, than methods based on orthogonal Fourier-Mellin moments and Zernike moments.
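The sketch below shows how rotation-invariant BFM magnitudes can be computed in the general form B_nm ∝ ∬ f(r,θ) J_v(α_n r) e^{-jmθ} r dr dθ, with J_v the Bessel function of the first kind and α_n its n-th positive zero. Normalization constants are omitted (they do not affect invariance of the magnitudes), the order v = 1 and moment counts are assumptions, and the scale/translation normalization needed for full RST invariance is not included.

```python
# Bessel-Fourier moment magnitudes as rotation-invariant texture features.

import numpy as np
from scipy.special import jv, jn_zeros

def bfm_features(img, n_max=4, m_max=4, v=1):
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # map pixel coordinates onto the unit disk
    x = (2 * xs - w + 1) / (w - 1)
    y = (2 * ys - h + 1) / (h - 1)
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    disk = r <= 1.0
    zeros = jn_zeros(v, n_max)               # first n_max positive zeros of J_v
    feats = []
    for n in range(n_max):
        radial = jv(v, zeros[n] * r)          # radial basis J_v(alpha_n * r)
        for m in range(m_max):
            kernel = radial * np.exp(-1j * m * theta)
            moment = np.sum(img[disk] * kernel[disk] * r[disk])
            feats.append(np.abs(moment))      # rotation multiplies B_nm by a unit phase,
    return np.array(feats)                    # so |B_nm| is rotation-invariant

img = np.random.rand(64, 64)
print(bfm_features(img).shape)                # -> (16,)
```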
999.
1000.
Topology Optimization in Aircraft and Aerospace Structures Design   (cited 1 time: 0 self-citations, 1 by others)
Topology optimization has become an effective tool for least-weight and performance-oriented design, especially in aeronautical and aerospace engineering. The purpose of this paper is to survey recent advances in topology optimization techniques applied to aircraft and aerospace structural design. The paper first reviews several existing applications: (1) standard material layout design for airframe structures, (2) layout design of stiffener ribs for aircraft panels, (3) multi-component layout design for aerospace structural systems, and (4) multi-fastener design for assembled aircraft structures. It then introduces potential applications of topology optimization in dynamic response design, shape-preserving design, smart structure design, structural feature design and additive manufacturing, in order to provide a forward-looking perspective.
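As a tiny illustration of the density-based formulation underlying much of the surveyed work (not taken from the paper), the SIMP interpolation scales element stiffness by the pseudo-density raised to a penalization power, steering designs toward discrete 0/1 material layouts. The default values below are generic assumptions.

```python
# SIMP material interpolation: E(rho) = E_min + rho^p * (E0 - E_min),
# with rho in [0, 1]; the penalization power p > 1 makes intermediate
# densities structurally inefficient, favoring black-and-white layouts.

import numpy as np

def simp_young_modulus(rho, e0=1.0, e_min=1e-9, p=3.0):
    return e_min + np.asarray(rho) ** p * (e0 - e_min)

print(simp_young_modulus([0.0, 0.5, 1.0]))   # intermediate densities are penalized
```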