536 search results
1.
This paper concerns the following problem: given a set of multi-attribute records, a fixed number of buckets, and a two-disk system, assign the records to the buckets and then distribute the buckets between the two disks in such a way that, over all possible orthogonal range queries (ORQs), the disk access concurrency is maximized. We adopt the multiple key hashing (MKH) method for arranging records into buckets and use the disk modulo (DM) allocation method for storing buckets onto disks. Since the DM allocation method has been shown to be superior to all other allocation methods for allocating an MKH file onto a two-disk system for answering ORQs, the real issue is how to determine an optimal way of organizing the records into buckets based upon the MKH concept.

A performance formula that can be used to evaluate the average response time, over all possible ORQs, of an MKH file in a two-disk system using the DM allocation method is first presented. Based upon this formula, it is shown that our design problem is related to a notoriously difficult problem, namely the Prime Number Problem. A performance lower bound and an efficient algorithm for designing optimal MKH files in certain cases are then presented. It is pointed out that in some cases the optimal MKH file for ORQs in a two-disk system using the DM allocation method is identical to the optimal MKH file for ORQs in a single-disk system, and the optimal average response time in a two-disk system is slightly greater than one half of that in a single-disk system.
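A minimal sketch of the two mechanisms named above: multiple key hashing maps each attribute to one coordinate of a bucket address, and disk modulo allocation places a bucket on the disk given by the sum of its coordinates modulo the number of disks. The per-attribute factors and the use of Python's built-in `hash` (which is salted across runs; a real file would need a stable hash) are illustrative assumptions.

```python
def mkh_bucket(record, factors):
    """Multiple key hashing: map a multi-attribute record to a bucket
    coordinate vector. `factors` gives the number of partitions m_i
    chosen per attribute; their product is the total bucket count."""
    return tuple(hash(attr) % m for attr, m in zip(record, factors))

def dm_disk(coord, num_disks=2):
    """Disk modulo allocation: bucket (c1, ..., ck) is stored on disk
    (c1 + ... + ck) mod num_disks."""
    return sum(coord) % num_disks

# Example: 3-attribute records, 4 * 3 * 2 = 24 buckets, two disks.
factors = (4, 3, 2)
record = ("alice", 42, "2021-05-01")
coord = mkh_bucket(record, factors)
print(coord, "-> disk", dm_disk(coord))
```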

2.
3.
4.
Conventional image hash functions exploit only the luminance component of color images to generate robust hashes, which limits their discriminative capacity. In this paper, we propose a robust image hash function for color images that takes all components of a color image into account and achieves good discrimination. First, the proposed hash function re-scales the input image to a fixed size. Second, it extracts local color features by converting the RGB color image into the HSI and YCbCr color spaces and computing the block mean and variance of each component of the HSI and YCbCr representations. Finally, it takes the Euclidean distances between the block features and a reference feature as the hash values. Experiments are conducted to validate the effectiveness of our hash function. Receiver operating characteristic (ROC) curve comparisons with two existing algorithms demonstrate that our hash function achieves a better trade-off between perceptual robustness and discriminative capability.
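A simplified NumPy sketch of the block-feature scheme described above. The RGB-to-YCbCr conversion uses the standard ITU-R BT.601 formulas; the HSI conversion, the re-scaling step (the example input is already a fixed size), the block size, and the choice of reference feature are assumptions for illustration, not the paper's exact parameters.

```python
import numpy as np

def to_ycbcr(rgb):  # rgb: float array in [0, 255], shape (H, W, 3)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def block_features(channels, block=64):
    """Mean and variance of each non-overlapping block, per channel."""
    h, w, c = channels.shape
    feats = []
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            blk = channels[i:i + block, j:j + block]
            feats.append(np.concatenate([blk.mean(axis=(0, 1)),
                                         blk.var(axis=(0, 1))]))
    return np.array(feats)            # shape: (num_blocks, 2 * c)

def hash_values(rgb):
    feats = block_features(to_ycbcr(rgb.astype(np.float64)))
    ref = feats.mean(axis=0)          # assumed reference feature
    return np.linalg.norm(feats - ref, axis=1)  # one value per block

img = np.random.randint(0, 256, (512, 512, 3))  # stand-in for an image
print(hash_values(img)[:5])
```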
5.
Brain science is at the frontier of international scientific research, and visualizing high-precision brain imaging data is a fundamental requirement of structural imaging in neuroscience. To address the large data volumes and low rendering efficiency encountered when visualizing high-precision brain imaging data, a compressed-domain visualization method combining classified hierarchical vector quantization with perfect spatial hashing is proposed. First, the volume data is partitioned into blocks; each block's mean value is recorded, and blocks are classified according to whether their mean gradient is zero. Second, hierarchical vector quantization is applied to compress the blocks whose mean gradient is nonzero. The two index values produced by compression are then stored using block-wise perfect spatial hashing. Finally, the compressed volume data is decoded to obtain a reconstructed volume, and the residual data, obtained as the difference between the original and reconstructed volumes, is compressed with block-wise perfect spatial hashing. For rendering, the compressed data only needs to be loaded into the GPU as textures, where real-time decompression and rendering are completed. Experimental results show that, while maintaining good image reconstruction quality, the algorithm reduces storage space and improves volume rendering efficiency, making it possible to process relatively large datasets on a single machine.
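A minimal sketch of only the first step above: partition a volume into blocks, record each block's mean, and classify blocks by whether the mean gradient magnitude inside the block is zero (uniform blocks need no vector quantization). The block size is an illustrative assumption, and the hierarchical vector quantization and perfect-spatial-hashing stages are not shown.

```python
import numpy as np

def classify_blocks(volume, block=8):
    gx, gy, gz = np.gradient(volume.astype(np.float64))
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
    means, nonuniform = {}, []
    nx, ny, nz = (s // block for s in volume.shape)
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                sl = (slice(i * block, (i + 1) * block),
                      slice(j * block, (j + 1) * block),
                      slice(k * block, (k + 1) * block))
                means[(i, j, k)] = volume[sl].mean()
                if grad_mag[sl].mean() > 0:   # candidate for HVQ compression
                    nonuniform.append((i, j, k))
    return means, nonuniform

vol = np.zeros((64, 64, 64)); vol[16:48, 16:48, 16:48] = 100.0
means, todo = classify_blocks(vol)
print(len(means), "blocks,", len(todo), "with nonzero mean gradient")
```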
6.
The electricity consumption information acquisition system of a power grid company is characterized by large data volumes, many online terminals, multiple channel types, and complex application scenarios. Under traditional load balancing schemes, adding a front-end processor, or the failure of one, causes large-scale migration of terminal protocol handling. This paper proposes using a hardware load balancer for communication load balancing and a consistent hashing algorithm for load balancing across the acquisition front-end processors. Hash algorithms are analyzed and compared, and the CRC32 table-lookup method is selected, which keeps the hash efficient, monotonic, and uniformly distributed. When the set of front-end processors changes, only a few terminals migrate rather than the whole terminal population, which maintains load balance and ensures the stability and reliability of the system.
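A minimal consistent-hash ring sketch using Python's built-in CRC32 (zlib.crc32 is itself a table-driven implementation). The virtual-node count, node names, and key format are illustrative assumptions, not details from the paper.

```python
import bisect, zlib

class ConsistentHashRing:
    def __init__(self, frontends, vnodes=100):
        # Place each front-end at many virtual positions on the ring
        # so load spreads uniformly.
        self.ring = sorted(
            (zlib.crc32(f"{fe}#{v}".encode()), fe)
            for fe in frontends for v in range(vnodes))
        self.keys = [h for h, _ in self.ring]

    def frontend_for(self, terminal_id):
        """Map a terminal to the first front-end clockwise on the ring."""
        h = zlib.crc32(terminal_id.encode())
        i = bisect.bisect(self.keys, h) % len(self.ring)
        return self.ring[i][1]

ring = ConsistentHashRing(["fe1", "fe2", "fe3"])
print(ring.frontend_for("terminal-000123"))
# Adding or removing a front-end remaps only the terminals whose ring
# positions fall in the affected arcs, not the whole population.
```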
7.
In many-task computing (MTC), applications such as scientific workflows or parameter sweeps communicate via intermediate files; application performance strongly depends on the file system in use. The state of the art uses runtime systems providing in-memory file storage that is designed for data locality: files are placed on those nodes that write or read them. With data locality, however, task distribution conflicts with data distribution, leading to application slowdown, and worse, to prohibitive storage imbalance. To overcome these limitations, we present MemFS, a fully symmetrical, in-memory runtime file system that stripes files across all compute nodes, based on a distributed hash function. Our cluster experiments with Montage and BLAST workflows, using up to 512 cores, show that MemFS has both better performance and better scalability than the state-of-the-art, locality-based file system, AMFS. Furthermore, our evaluation on a public commercial cloud validates our cluster results. On this platform MemFS shows excellent scalability up to 1024 cores and is able to saturate the 10G Ethernet bandwidth when running BLAST and Montage.
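A minimal sketch of symmetric file striping with a distributed hash, the placement idea described above: each fixed-size stripe of a file lands on the node obtained by hashing the (filename, stripe index) pair. The node names, stripe size, and hash choice are illustrative assumptions, not MemFS internals.

```python
import hashlib

def stripe_node(filename, stripe_idx, nodes):
    # Hash (filename, stripe index) so stripes of one file spread
    # deterministically over all nodes.
    digest = hashlib.md5(f"{filename}:{stripe_idx}".encode()).digest()
    return nodes[int.from_bytes(digest[:4], "big") % len(nodes)]

nodes = [f"node{i:03d}" for i in range(512)]
STRIPE = 1 << 20                          # 1 MiB stripes (assumed)

def placement(filename, filesize):
    n_stripes = (filesize + STRIPE - 1) // STRIPE
    return [stripe_node(filename, s, nodes) for s in range(n_stripes)]

print(placement("montage/intermediate_042.fits", 5 * STRIPE))
# Every node holds roughly the same share of every file, so storage
# stays balanced regardless of which tasks produce or consume a file.
```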
8.
Previous association rule mining algorithms based on the support-confidence framework first prune the search space using support as a threshold to produce frequent itemsets, and then generate association rules from those frequent itemsets; these are frequent association rules. In many applications, however, such as identifying similar Web documents or detecting network intrusions, many interesting association rules have very low support. For such cases, this paper proposes an algorithm that can mine interesting rules among infrequent items; the algorithm instead prunes its results using similarity as the interestingness measure.
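A minimal sketch of pruning candidate item pairs by a similarity threshold rather than a support threshold, so that rules among infrequent items survive. Jaccard similarity over the items' transaction-id sets is an assumed concrete choice for the similarity measure.

```python
from itertools import combinations

def similar_pairs(transactions, min_sim=0.8):
    tids = {}                          # item -> set of transaction ids
    for tid, items in enumerate(transactions):
        for item in items:
            tids.setdefault(item, set()).add(tid)
    pairs = []
    for a, b in combinations(tids, 2):
        inter = len(tids[a] & tids[b])
        union = len(tids[a] | tids[b])
        if inter and inter / union >= min_sim:
            pairs.append((a, b, inter / union))
    return pairs

# Items c and d co-occur in only 2 of 1000 transactions (low support)
# yet always appear together (similarity 1.0), so they are retained.
txns = [["a", "b"]] * 998 + [["c", "d"], ["c", "d"]]
print(similar_pairs(txns))
```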
9.
A better similarity index structure for high-dimensional feature datapoints is very desirable for building scalable content-based search systems on feature-rich datasets. In this paper, we introduce sparse principal component analysis (sparse PCA) and boosting similarity sensitive hashing (Boosting SSC) into traditional spectral hashing to obtain binary codes that are both effective and data-aware for real data. We call this Sparse Spectral Hashing (SSH). SSH formulates binary coding as thresholding a subset of eigenvectors of the Laplacian graph while constraining the number of nonzero features. Convex relaxation and eigenfunction learning are conducted in SSH to make the coding globally optimal and effective for datapoints outside the training data. Comparisons in terms of F1 score and AUC show that SSH substantially outperforms other methods on both image and text datasets.
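A minimal spectral-hashing-style sketch of the thresholding idea above: binarize a few eigenvectors of the graph Laplacian by thresholding each at its median. The sparse PCA constraint and Boosting SSC weighting that distinguish SSH are omitted here; the Gaussian affinity and 8-bit code length are assumptions.

```python
import numpy as np

def spectral_codes(X, n_bits=8, sigma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))        # affinity graph
    L = np.diag(W.sum(1)) - W                 # unnormalized Laplacian
    vals, vecs = np.linalg.eigh(L)
    V = vecs[:, 1:n_bits + 1]                 # skip the trivial eigenvector
    # Median thresholding balances 0s and 1s in each bit.
    return (V > np.median(V, axis=0)).astype(np.uint8)

X = np.random.randn(200, 32)
codes = spectral_codes(X)
print(codes.shape, codes[0])                  # (200, 8) binary codes
```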
10.
The problem of efficiently finding similar items in a large corpus of high-dimensional data points arises in many real-world tasks, such as music, image, and video retrieval. Beyond the scaling difficulties that arise with lookups in large data sets, the complexity in these domains is exacerbated by an imprecise definition of similarity. In this paper, we describe a method to learn a similarity function from only weakly labeled positive examples. Once learned, this similarity function is used as the basis of a hash function to severely constrain the number of points considered for each lookup. Tested on a large real-world audio dataset, only a tiny fraction of the points (~0.27%) are ever considered for each lookup. To increase efficiency, no comparisons in the original high-dimensional space of points are required. The performance far surpasses, in terms of both efficiency and accuracy, a state-of-the-art Locality-Sensitive-Hashing-based (LSH) technique for the same problem and data set.
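A minimal sketch of hash-constrained lookup: only items whose code collides with the query's code in at least one hash table are ever considered, so the vast majority of the corpus is never touched. Random hyperplanes stand in here for the learned similarity function described above, which the paper trains from weakly labeled positives.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
planes = [rng.standard_normal((16, 64)) for _ in range(4)]  # 4 tables

def code(x, P):
    # 16-bit sign code from one table's hyperplanes.
    return tuple((P @ x > 0).astype(int))

def build(index_points):
    tables = [defaultdict(list) for _ in planes]
    for i, x in enumerate(index_points):
        for t, P in zip(tables, planes):
            t[code(x, P)].append(i)
    return tables

def lookup(q, tables):
    cands = set()
    for t, P in zip(tables, planes):
        cands.update(t.get(code(q, P), []))
    return cands                       # a tiny fraction of the corpus

data = rng.standard_normal((10_000, 64))
tables = build(data)
print(len(lookup(data[0], tables)), "candidates out of 10000")
```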