991.
The problem of determining whether several finite automata accept a word in common is closely related to the well-studied membership problem in transformation monoids. We raise the issue of limiting the number of final states in the automata intersection problem. For automata with two final states, we show the problem to be ⊕L-complete or NP-complete, according to whether the automata may involve a nontrivial monoid other than a direct product of cyclic groups of order 2. We further consider idempotent commutative automata and (mainly Abelian) group automata with one, two, or three final states over a singleton or larger alphabet, elucidating (under the usual hypotheses on complexity classes) the complexity of the intersection nonemptiness problem and related problems in each case.
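To make intersection nonemptiness concrete: given several DFAs, decide whether some single word is accepted by all of them. The standard textbook approach, sketched below in Python, is a breadth-first search over the product automaton; the function name and automaton encoding are our own, and this is not the paper's contribution, which concerns the complexity of restricted variants.

from collections import deque

def intersection_nonempty(automata):
    # Each automaton is (start, delta, finals, alphabet), where delta
    # maps (state, symbol) -> state. A product state is a tuple holding
    # one state per automaton.
    start = tuple(a[0] for a in automata)
    alphabet = set.intersection(*(set(a[3]) for a in automata))
    seen, queue = {start}, deque([start])
    while queue:
        states = queue.popleft()
        if all(s in a[2] for s, a in zip(states, automata)):
            return True   # every automaton accepts: a common word exists
        for sym in alphabet:
            nxt = tuple(a[1].get((s, sym)) for s, a in zip(states, automata))
            if None not in nxt and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# "Odd number of a's" intersected with "multiple of 3 a's": "aaa" works.
odd = ("q0", {("q0", "a"): "q1", ("q1", "a"): "q0"}, {"q1"}, "a")
mod3 = ("p0", {("p0", "a"): "p1", ("p1", "a"): "p2", ("p2", "a"): "p0"}, {"p0"}, "a")
assert intersection_nonempty([odd, mod3])

The product state space is exponential in the number of automata, which is why the general problem is hard and why restrictions such as bounding the number of final states, as studied here, matter.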
992.
Real-time services require reliable, fault-tolerant communication networks to support their stringent Quality of Service requirements. Multi-Topology Routing based IP Fast Re-route (MT-IPFRR) technologies provide seamless forwarding of IP packets during network failures by constructing virtual topologies (VTs) to re-route the disrupted traffic. Multiple Routing Configurations (MRC) is a widely studied MT-IPFRR technique. In this paper, we propose two heuristics, mMRC-1 and mMRC-2, which reduce the number of VTs that MRC requires to provide full coverage for single link/node failures, and hence decrease its operational complexity. Both heuristics are designed to construct VTs that are more robust against network partitioning by taking their topological characteristics into consideration. We perform extensive experiments on 3200 topologies with diverse structural properties using our automated topology generation and analysis tool. Numerical results show that the reduction in the number of required VTs reaches 31.84 % as networks contain more hub nodes, i.e., nodes whose degree is much higher than that of the rest of the network.
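The full-coverage property that such heuristics target can be phrased simply: every link must be excluded from at least one VT that remains connected, so that traffic hitting the failed link can be re-routed inside that VT. The Python sketch below checks this property for single link failures on a toy model in which each VT is just a subset of the physical links; it illustrates the coverage condition only, not the mMRC-1/mMRC-2 heuristics themselves.

def is_connected(nodes, edges):
    # Plain DFS connectivity check over an undirected edge list.
    adj = {n: [] for n in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        for v in adj[stack.pop()]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == len(nodes)

def covers_single_link_failures(nodes, edges, vts):
    # A link is protected if some VT excludes it yet still connects all
    # nodes: traffic on the failed link is re-routed inside that VT.
    return all(
        any(link not in vt and is_connected(nodes, vt) for vt in vts)
        for link in edges
    )

The fewer VTs needed to make covers_single_link_failures true, the lower the operational complexity, which is precisely what the proposed heuristics minimize.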
993.
Partitioning skew has been shown to be a major issue that can significantly prolong the execution time of MapReduce jobs. Most existing off-line heuristics for partitioning-skew mitigation are inefficient, because they must wait for all map tasks to complete. Some solutions tackle the problem on-line, but impose additional overhead by repartitioning the workload of overloaded tasks. In this paper, we present OPTIMA, an on-line partitioning-skew mitigation technique for MapReduce. OPTIMA predicts the workload distribution of reduce tasks at run time, leverages deviation detection to identify overloaded tasks, and pro-actively adjusts resource allocation for these tasks to reduce their execution time. We provide an upper bound on OPTIMA's time complexity while allowing OPTIMA to operate entirely on-line. In experiments with both real and synthetic workloads on an 11-node Hadoop cluster, we observed that OPTIMA effectively mitigates partitioning skew and improves job completion time by up to 36.73 %.
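The deviation-detection step can be illustrated in a few lines of Python: flag any reduce task whose predicted workload lies far above the mean. The use of standard deviations and the threshold knob here are our assumptions for illustration; the paper's actual predictor and detector are more elaborate.

import statistics

def overloaded_tasks(predicted_load, k=1.0):
    # Flag reduce tasks whose predicted workload exceeds the mean by
    # more than k standard deviations (k is a hypothetical knob).
    loads = list(predicted_load.values())
    mean, stdev = statistics.mean(loads), statistics.pstdev(loads)
    return [t for t, load in predicted_load.items() if load > mean + k * stdev]

overloaded_tasks({"r0": 100, "r1": 110, "r2": 480})   # -> ['r2']

A scheduler could then grant the flagged tasks extra resources ahead of time, which is the pro-active adjustment the abstract describes.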
994.
This article reports on APNOMS 2015, held August 19–21, 2015 in Busan, Korea. The theme of APNOMS 2015 was “Managing a Very Connected World.”
995.
Shared protection/restoration is a promising approach to reducing protection resources and is supported at each layer of current multi-layer networks. Software-defined networking is expected to reduce equipment cost as well as operational cost by orchestrating these shared protection functionalities. However, although protection-resource sharing improves link utilization, it sometimes increases the required equipment. Meanwhile, traffic re-aggregation at each layer is an important technique for letting low-volume traffic utilize the underlying link capacity more efficiently, but re-aggregation also makes it difficult to share protection resources with traffic at lower layers. In this paper, we present a multi-layer network design strategy and method that reduce equipment cost by combining traffic re-aggregation at each layer with protection-resource sharing among multiple service traffic flows at different layers. The strategy first prioritizes traffic re-aggregation at each layer, and then delegates as much shared protection as possible to lower layers, as long as doing so does not increase the required capacity at the lower layer. Evaluation results for example three-layer networks confirm that the proposed method effectively reduces equipment cost compared to the conventional design method. The cost reduction is achieved by leveraging shared protection functions at multiple layers.
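The resource saving behind shared protection is easy to state: with dedicated protection, backup capacity must be reserved for every demand simultaneously, whereas with shared protection a link only needs enough backup capacity for the worst single failure, since simultaneous failures are not covered. A hedged toy model in Python (all names ours, unrelated to the paper's actual design algorithm):

def backup_capacity(backup, shared=True):
    # backup: {failure_scenario: {link: backup capacity routed on link}}
    # Shared protection reserves the per-link maximum over scenarios;
    # dedicated protection must reserve the sum.
    links = {l for per_link in backup.values() for l in per_link}
    agg = max if shared else sum
    return {l: agg(per_link.get(l, 0) for per_link in backup.values())
            for l in links}

demands = {"fail_A": {"L1": 10, "L2": 5}, "fail_B": {"L1": 4, "L2": 8}}
backup_capacity(demands)                # {'L1': 10, 'L2': 8}
backup_capacity(demands, shared=False)  # {'L1': 14, 'L2': 13}

Whether this saving materializes at the equipment level depends on how traffic is re-aggregated across layers, which is what the paper's design strategy optimizes.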
996.
In this paper, we discuss the communications-reliability requirements posed by the smart power grid, with a focus on communications in support of wide-area situational awareness. Implementation of wide-area situational awareness relies on both transmission-substation networks and wide-area optical networks. We study the reliability of a sample communications network of the California Power Grid and find that its reliability falls short of proposed requirements. To overcome this issue, we consider the problem of designing the substation network and the wide-area network to meet the reliability requirements while minimizing network cost. For the wide-area network design problem, we propose two alternative design approaches: (1) following the power lines and (2) a mesh-based design interconnecting the nodes. For the first approach, we develop two greedy iterative heuristics and a heuristic integer linear programming (H-ILP) model that uses minimum cut-sets for network-reliability optimization. The greedy iterative algorithms outperform the H-ILP approach in terms of cost, but require more computing resources. The two models are complementary and together provide a framework for optimizing the reliability of smart-grid communications networks restricted to following the power lines. The second approach is a greenfield mesh design method that starts with a minimum spanning tree and augments it into a mesh through a greedy heuristic. Comparative numerical results show that the reliable mesh design has advantages in terms of the number of links and the total link distance needed.
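The second (greenfield) approach can be sketched directly: build a minimum spanning tree, then greedily add cheap edges until the topology is no longer tree-like. The augmentation rule below (add the cheapest unused edges until every node has degree at least 2) is a simplification chosen for illustration; the paper's heuristic and its reliability target are more involved.

def kruskal_mst(nodes, edges):
    # edges: list of (cost, u, v); classic Kruskal with union-find.
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    tree = []
    for cost, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((cost, u, v))
    return tree

def mesh_from_mst(nodes, edges):
    # Start from the MST, then add the cheapest unused edges until no
    # node is a leaf, so a single link failure cannot isolate a node.
    mesh = kruskal_mst(nodes, edges)
    degree = {n: 0 for n in nodes}
    for _, u, v in mesh:
        degree[u] += 1
        degree[v] += 1
    for cost, u, v in sorted(set(edges) - set(mesh)):
        if degree[u] < 2 or degree[v] < 2:
            mesh.append((cost, u, v))
            degree[u] += 1
            degree[v] += 1
    return mesh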
997.
Multiversion databases store both current and historical data. Rows are typically annotated with timestamps representing the period during which the row is or was valid. We develop novel techniques to reduce index maintenance in multiversion databases, so that indexes can be used effectively for analytical queries over current data without becoming a heavy burden on transaction throughput. To this end, we re-design persistent index data structures in the storage hierarchy to employ an extra level of indirection. The indirection level is stored on solid-state disks, which support very fast random I/O, so traversing the extra level of indirection incurs relatively little overhead. This indirection dramatically reduces the number of magnetic-disk I/Os needed for index updates and localizes maintenance to indexes on updated attributes. Additionally, we batch insertions within the indirection layer to reduce physical disk I/O for indexing new records. In this work, we further exploit SSDs by introducing novel DeltaBlock techniques for storing recent changes to data on SSDs. Using DeltaBlocks, we propose an efficient method for periodically flushing recently changed data from SSDs to HDDs such that, on the one hand, we keep track of every change (or delta) to every record and, on the other hand, we avoid redundantly storing the unchanged portion of updated records. By reducing the index-maintenance overhead on transactions, we enable operational data stores to create more indexes to support queries. We have developed a prototype of our indirection proposal by extending the widely used generalized search tree (GiST) open-source project, which is also employed in PostgreSQL. Our working implementation demonstrates that we can significantly reduce index maintenance and/or query processing cost by a factor of 3. For the insertion of new records, our batching technique saves up to 90 % of the insertion time. For updates, our prototype reduces the database size by up to 80 %, even with modest space allocated for DeltaBlocks on SSDs.
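The effect of the indirection level is easiest to see in a toy model: indexes map attribute values to stable logical identifiers (LIDs), and only a small LID-to-location table, held on the SSD, changes when a row gets a new version. The class below is a minimal sketch under that assumption, not the paper's GiST-based implementation.

class IndirectionDemo:
    def __init__(self):
        self.loc = {}     # LID -> physical location (SSD-resident table)
        self.index = {}   # attribute value -> set of LIDs (disk index)

    def insert(self, lid, value, physical_loc):
        self.loc[lid] = physical_loc
        self.index.setdefault(value, set()).add(lid)

    def update(self, lid, new_physical_loc):
        # The new row version lives elsewhere on disk, but only the
        # SSD-resident entry changes; indexes on unchanged attributes
        # need no maintenance at all.
        self.loc[lid] = new_physical_loc

    def query(self, value):
        # One extra (fast, SSD) hop per match to resolve LIDs.
        return [self.loc[lid] for lid in self.index.get(value, ())]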
998.
Analytical workloads in data warehouses often include heavy joins, where queries involve multiple fact tables in addition to the typical star patterns, dimensional grouping, and selections. In this paper we propose a new processing and storage framework called bitwise dimensional co-clustering (BDCC) that avoids replication, and thus keeps updates fast, yet is able to accelerate all of these foreign-key joins, efficiently support grouping, and push down most dimensional selections. The core idea of BDCC is to cluster each table on a mix of dimensions, each possibly derived from attributes imported over an incoming foreign key, thereby creating foreign-key-connected tables with partially shared clusterings. These clusterings are later used to accelerate any join between two tables that share a dimension; they additionally allow selections to be pushed down and propagated (reducing I/O) and accelerate aggregation and ordering operations. Besides the general framework, we describe an algorithm that derives such a physically co-clustered database automatically, and we describe query processing and query optimization techniques that can easily be fitted into existing relational engines. We present an experimental evaluation on the TPC-H benchmark in the Vectorwise system, showing that co-clustering can significantly enhance its already high performance while significantly reducing the system's memory consumption.
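The "bitwise" in BDCC refers to building a clustering key by mixing bits from several dimension codes. One simple way to do this, shown below, is round-robin interleaving (a z-order-style key); whether BDCC uses exactly this order is our assumption, made for illustration.

def interleave_bits(dim_values, bits_per_dim):
    # Interleave the bits of several small dimension codes, most
    # significant bit first, into one clustering key. Rows sorted on
    # this key are co-clustered on all dimensions at once.
    key = 0
    for bit in range(max(bits_per_dim) - 1, -1, -1):
        for v, nbits in zip(dim_values, bits_per_dim):
            if bit < nbits:
                key = (key << 1) | ((v >> bit) & 1)
    return key

interleave_bits((0b10, 0b11), (2, 2))   # -> 0b1101 == 13

Because two tables that share a dimension agree on that dimension's bits, a join between them can proceed cluster by cluster, which is the source of the join acceleration described above.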
999.
Bit-vectors are widely used for indexing and summarizing data because modern computers process them efficiently. Sparse bit-vectors can be further compressed to reduce their space requirements. Special compression schemes based on run-length encoding have been designed to avoid explicit decompression and to minimize decoding overhead during query execution. Moreover, highly compressed bit-vectors can exhibit faster query times than non-compressed ones. For hard-to-compress bit-vectors, however, compression does not speed up queries and can add considerable overhead; in these cases, bit-vectors are often stored verbatim (non-compressed). Queries, in turn, are answered by executing a cascade of bit-wise operations involving indexed bit-vectors and intermediate results. Often, even when the original bit-vectors are hard to compress, the intermediate results become sparse, so it can be worthwhile to compress these bit-vectors as the query is executed. In this scenario, verbatim and compressed bit-vectors must be operated on together. In this paper, we propose a hybrid framework in which compressed and verbatim bitmaps coexist, and we design algorithms to execute queries under this hybrid model. Our query optimizer decides at run time when to compress or decompress a bit-vector. Our results show that applications using higher-density bitmaps can benefit from this hybrid model, improving both their query time and their memory utilization.
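The run-time decision the optimizer makes can be approximated by a density test: very sparse (or very dense) bit-vectors compress well under run-length schemes, while mid-density ones are better left verbatim. The sketch below uses a per-bit list and a naive run-length encoder for clarity; production schemes such as WAH or EWAH operate on machine words, and the threshold value is a hypothetical knob.

def should_compress(bits, threshold=0.05):
    # Compress only when the vector is dominated by one bit value.
    density = sum(bits) / len(bits)
    return density < threshold or density > 1 - threshold

def rle_encode(bits):
    # Encode as (bit value, run length) pairs.
    runs, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))
        i = j
    return runs

rle_encode([0, 0, 0, 0, 1, 0, 0])   # -> [(0, 4), (1, 1), (0, 2)]

In the hybrid model, an AND of two verbatim bitmaps may produce a sparse intermediate; applying a test like should_compress to the result is exactly the kind of run-time choice the paper's optimizer formalizes.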
1000.
State-of-the-art distributed RDF systems partition data across multiple compute nodes (workers). Some systems perform cheap hash partitioning, which may result in expensive query evaluation; others try to minimize inter-node communication, which requires an expensive data-preprocessing phase and leads to high startup cost. A priori knowledge of the query workload has also been used to create partitions, which, however, are static and do not adapt to workload changes. In this paper, we propose AdPart, a distributed RDF system that addresses the shortcomings of previous work. First, AdPart applies lightweight partitioning to the initial data, distributing triples by hashing on their subjects; this keeps its startup overhead low. At the same time, AdPart's locality-aware query optimizer takes full advantage of the partitioning to (1) support fully parallel processing of join patterns on subjects and (2) minimize data communication for general queries by hash-distributing intermediate results instead of broadcasting them, wherever possible. Second, AdPart monitors data access patterns and dynamically redistributes and replicates the instances of the most frequent ones among workers. As a result, the communication cost of future queries is drastically reduced or even eliminated. To control replication, AdPart implements an eviction policy for the redistributed patterns. Our experiments with synthetic and real data verify that AdPart (1) starts faster than all existing systems, (2) processes thousands of queries before other systems come online, and (3) gracefully adapts to the query load, evaluating queries over billion-scale RDF data in less than a second.
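AdPart's initial placement is plain subject hashing, which the abstract states directly: all triples sharing a subject land on the same worker, so star joins on the subject need no communication. A minimal deterministic sketch (function name ours):

import hashlib

def assign_worker(triple, num_workers):
    # Hash only the subject so that every triple of a given subject is
    # stored on the same worker; subject-subject joins then run fully
    # in parallel with no shuffling. sha1 is used instead of Python's
    # built-in hash() because the latter is salted per process.
    subject, _predicate, _object = triple
    digest = hashlib.sha1(subject.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_workers

assign_worker(("ex:alice", "ex:knows", "ex:bob"), 8)

The adaptive part of the system then departs from this static scheme, replicating the hot access patterns it observes across workers.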