Paid full text: 287 articles
Free: 75 articles
Free (domestic): 90 articles
Subject category — Industrial technology: 452 articles
Results by year:
  2024: 14   2023: 33   2022: 48   2021: 32   2020: 21   2019: 14   2018: 10   2017: 19
  2016: 14   2015: 18   2014: 14   2013: 20   2012: 23   2011: 25   2010: 14   2009: 16
  2008: 16   2007: 12   2006: 13   2005: 7    2004: 6    2003: 8    2002: 6    2001: 6
  2000: 2    1999: 1    1998: 5    1997: 1    1996: 5    1995: 3    1994: 3    1993: 3
  1992: 3    1991: 2    1989: 1    1985: 3    1984: 1    1982: 1    1981: 1    1980: 1
  1979: 1    1978: 2    1977: 1    1976: 1    1975: 2
A total of 452 query results were found (search time: 15 ms).
1.
This paper concerns the following problem: given a set of multi-attribute records, a fixed number of buckets, and a two-disk system, arrange the records into the buckets and then distribute the buckets across the two disks in such a way that, over all possible orthogonal range queries (ORQs), the disk access concurrency is maximized. We adopt the multiple key hashing (MKH) method for arranging records into buckets and the disk modulo (DM) allocation method for storing buckets on disks. Since the DM allocation method has been shown to be superior to any other allocation method for allocating an MKH file onto a two-disk system for answering ORQs, the real issue is how to determine an optimal way of organizing the records into buckets based on the MKH concept.

A performance formula for evaluating the average response time, over all possible ORQs, of an MKH file in a two-disk system under the DM allocation method is first presented. Based on this formula, it is shown that our design problem is related to a notoriously difficult problem, namely the Prime Number Problem. A performance lower bound and an efficient algorithm for designing optimal MKH files in certain cases are then presented. It is pointed out that in some cases the optimal MKH file for ORQs in a two-disk system using the DM allocation method is identical to the optimal MKH file for ORQs in a single-disk system, and that the optimal average response time in a two-disk system is slightly greater than one half of that in a single-disk system.
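For readers who want a concrete picture of the two building blocks named above, here is a minimal Python sketch (not taken from the paper) of MKH bucketing and DM allocation on two disks; the per-attribute hash functions and the partition counts `m_i` are illustrative assumptions.

```python
# A minimal sketch of MKH bucketing plus DM allocation (illustrative only).
# Each attribute i is hashed into m_i partitions; the bucket is the tuple of
# partition indices, and the DM method stores bucket (b1, ..., bk) on disk
# (b1 + ... + bk) mod NUM_DISKS.

NUM_DISKS = 2

def mkh_bucket(record, m):
    """Map a record (tuple of attribute values) to its MKH bucket coordinates.

    `m` gives the number of partitions per attribute; Python's built-in hash()
    stands in for the per-attribute hash functions, which are left to the designer.
    """
    return tuple(hash(value) % m_i for value, m_i in zip(record, m))

def dm_disk(bucket):
    """Disk modulo allocation: sum of bucket coordinates modulo the disk count."""
    return sum(bucket) % NUM_DISKS

if __name__ == "__main__":
    m = (3, 4)                      # assumed partition counts for two attributes
    record = ("alice", 1987)
    bucket = mkh_bucket(record, m)
    print(bucket, "-> disk", dm_disk(bucket))
```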

2.
A. Chin, Algorithmica, 1994, 12(2-3): 170-181
Consider the problem of efficiently simulating the shared-memory parallel random access machine (PRAM) model on massively parallel architectures with physically distributed memory. To prevent network congestion and memory bank contention, it may be advantageous to hash the shared memory address space. The decision on whether or not to use hashing depends on (1) the communication latency in the network and (2) the locality of memory accesses in the algorithm. We relate this decision directly to algorithmic issues by studying the complexity of hashing in the Block PRAM model of Aggarwal, Chandra, and Snir, a shared-memory model of parallel computation which accounts for communication locality. For this model, we exhibit a universal family of hash functions having optimal locality. The complexity of applying these hash functions to the shared address space of the Block PRAM (i.e., by permuting data elements) is asymptotically equivalent to the complexity of performing a square matrix transpose, and this result is best possible for all pairwise independent universal hash families. These complexity bounds provide theoretical evidence that hashing and randomized routing need not destroy communication locality, addressing an open question of Valiant. This work was started when the author was a student at Oxford University, supported by a National Science Foundation Graduate Fellowship and a Rhodes Scholarship. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the author and do not necessarily reflect the views of the National Science Foundation or the Rhodes Trust.
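As background for the hashing discussed above, the following Python sketch shows a standard pairwise-independent (Carter-Wegman) hash family of the form h(x) = ((ax + b) mod p) mod m applied to block-addressed memory; it illustrates address-space hashing in general and is not the locality-optimal family constructed in the paper. The bank count and address stride are assumptions.

```python
import random

# A standard pairwise-independent hash family h(x) = ((a*x + b) mod p) mod m,
# used here only to illustrate hashing a shared address space.

P = (1 << 61) - 1  # a Mersenne prime larger than any address we hash

def make_hash(m, rng=random):
    """Draw one member of the family: random a in [1, P), b in [0, P)."""
    a = rng.randrange(1, P)
    b = rng.randrange(P)
    return lambda x: ((a * x + b) % P) % m

if __name__ == "__main__":
    h = make_hash(m=1024)                 # 1024 memory banks (assumed)
    addresses = range(0, 4096, 64)        # one block per 64 words (assumed)
    print([h(x) for x in addresses][:8])  # bank assignments for the first blocks
```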
3.
To improve resource utilization and web-crawling efficiency in a distributed environment, a priority-queue-based scheduling algorithm for distributed multi-topic crawlers, PQ-MCSA, is proposed. A cache-based extendible hashing algorithm partitions the overall task set, and the resulting subtask sets are distributed evenly across the processing nodes by a hash mapping on the logical second-level component of each URL; on each node, the node's own computing capacity is combined with a task priority queue to schedule tasks of different topics. The algorithm addresses shortcomings of traditional distributed crawlers, such as under-utilization of a single node's processing resources and uneven crawling across topics. Results from a real project show that the method effectively improves the balance of crawling results across topics and is highly practical.
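The following Python sketch (an illustration, not the paper's implementation) shows the two ideas in miniature: URLs are assigned to processing nodes by hashing a second-level component of the host name, and each node schedules topic tasks through a priority queue; the node count and the relevance-based priority score are assumptions.

```python
import heapq
from urllib.parse import urlparse

NUM_NODES = 8  # assumed number of processing nodes

def node_for_url(url):
    """Assign a URL to a node by hashing its second-level domain,
    a stand-in for the paper's two-level URL hash mapping."""
    host = urlparse(url).hostname or ""
    second_level = ".".join(host.split(".")[-2:])  # e.g. "example.com"
    return hash(second_level) % NUM_NODES

class TopicScheduler:
    """Per-node priority queue of (priority, topic, url) crawl tasks."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so tasks with equal priority stay ordered

    def push(self, topic, url, relevance):
        # Higher relevance = higher priority; relevance is a placeholder
        # for the paper's topic-priority score.
        heapq.heappush(self._heap, (-relevance, self._seq, topic, url))
        self._seq += 1

    def pop(self):
        _, _, topic, url = heapq.heappop(self._heap)
        return topic, url

if __name__ == "__main__":
    sched = TopicScheduler()
    sched.push("sports", "https://news.example.com/a", 0.4)
    sched.push("finance", "https://money.example.com/b", 0.9)
    print(node_for_url("https://news.example.com/a"), sched.pop())
```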
4.
Multimedia-based hashing is considered an important technique for achieving authentication and copy detection in digital contents. However, 3D model hashing has not been as widely used as image or video hashing. In this study, we develop a robust 3D mesh-model hashing scheme based on a heat kernel signature (HKS) that can describe a multi-scale shape curve and is robust against isometric modifications. We further discuss the robustness, uniqueness, security, and spaciousness of the method for 3D model hashing. In the proposed hashing scheme, we calculate the local and global HKS coefficients of vertices through time scales and 2D cell coefficients by clustering HKS coefficients with variable bin sizes based on an estimated L2 risk function, and generate the binary hash through binarization of the intermediate hash values by combining the cell values and the random values. In addition, we use two parameters, bin center points and cell amplitudes, which are obtained through an iterative refinement process, to improve the robustness, uniqueness, security, and spaciousness further, and combine them in a hash with a key. By evaluating the robustness, uniqueness, and spaciousness experimentally, and through a security analysis based on the differential entropy, we verify that our hashing scheme outperforms conventional hashing schemes.
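The full HKS pipeline is too involved for a short example, but the final binarization step described above can be sketched as follows; comparing intermediate values against key-seeded pseudorandom thresholds is a simplified stand-in for the paper's combination of cell values and random values, and the inputs are illustrative.

```python
import random

def binarize(cell_values, key, num_bits=None):
    """Turn intermediate (real-valued) hash values into a binary hash by
    comparing each value against a key-seeded pseudorandom threshold.
    This is a simplified stand-in for the binarization step, not the
    paper's exact construction."""
    rng = random.Random(key)
    num_bits = num_bits or len(cell_values)
    lo, hi = min(cell_values), max(cell_values)
    thresholds = [rng.uniform(lo, hi) for _ in range(num_bits)]
    return [int(v >= t) for v, t in zip(cell_values, thresholds)]

if __name__ == "__main__":
    cells = [0.12, 0.80, 0.33, 0.55, 0.91, 0.07]   # illustrative cell amplitudes
    print(binarize(cells, key=42))
```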
5.
An objective visual similarity measure for image hashing (cited 2 times: 0 self-citations, 2 by others)
Evaluating the performance of an image hash requires judging whether two images are perceptually similar. To meet this need, an evaluation measure of perceptual similarity is proposed. To compute the measure, both images are first low-pass filtered and then partitioned into overlapping blocks; the similarity of each pair of corresponding blocks is computed with the correlation-coefficient method and the similarity coefficients are normalized; the products of several of the smallest and of the largest normalized coefficients are then computed separately; finally, the ratio of the product of the smallest coefficients to the product of the largest coefficients serves as the measure of perceptual similarity. Experimental results show that the measure not only reflects changes in image visual quality effectively but also distinguishes well whether two images differ visually in an important way, and its evaluation of perceptual similarity outperforms the peak signal-to-noise ratio.
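A minimal Python sketch of the measure as described above is given below; the choice of Gaussian filtering, the mapping of correlation coefficients to [0, 1], and the parameter values are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def block_similarity_measure(img1, img2, block=16, step=8, k=5, sigma=2.0):
    """Sketch of the proposed measure: low-pass filter both images, take
    overlapping blocks, compute a correlation coefficient per block pair,
    normalize to [0, 1], and return the ratio of the product of the k smallest
    coefficients to the product of the k largest."""
    a = gaussian_filter(img1.astype(float), sigma)
    b = gaussian_filter(img2.astype(float), sigma)
    coeffs = []
    h, w = a.shape
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            pa = a[y:y + block, x:x + block].ravel()
            pb = b[y:y + block, x:x + block].ravel()
            r = np.corrcoef(pa, pb)[0, 1]
            if np.isnan(r):              # constant block: treat as fully similar
                r = 1.0
            coeffs.append((r + 1.0) / 2.0)   # assumed normalization to [0, 1]
    coeffs = np.sort(np.array(coeffs))
    eps = 1e-12
    return float(np.prod(coeffs[:k]) + eps) / float(np.prod(coeffs[-k:]) + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((128, 128))
    noisy = img + 0.05 * rng.standard_normal(img.shape)
    print(block_similarity_measure(img, img), block_similarity_measure(img, noisy))
```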
6.
Existing unsupervised cross-modal hashing (UCMH) methods focus mainly on constructing the similarity matrix and constraining the structure of the common representation space, while overlooking two important problems. First, they extract independent representations for the data of each modality for retrieval, without exploiting the complementary information between modalities. Second, the structural information of pre-extracted features is not fully suited to cross-modal retrieval and may transfer erroneous information. For the first problem, a multi-modal representation fusion structure is proposed: embedding features from different modalities are fused so that information from the modalities is integrated effectively and the expressiveness of the hash codes improves; a cross-modal generation mechanism is also introduced to handle retrieval data with missing modalities. For the second problem, a dynamic similarity-matrix adjustment strategy is proposed: during training, the learned modality embeddings are used to refine the similarity matrix adaptively and progressively, reducing the bias of the pre-extracted features toward the original dataset, making the matrix better suited to cross-modal retrieval, and effectively avoiding overfitting. Experiments on the widely used Flickr25k and NUS-WIDE datasets show that, compared with the DGCPN model, the proposed model improves the mean average precision of retrieval at three hash-code lengths by 1.43%, 1.82%, and 1.52% on Flickr25k and by 3.72%, 3.77%, and 1.99% on NUS-WIDE, confirming the effectiveness of the method.
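The fusion network and training losses are not reproduced here, but the dynamic similarity-matrix adjustment can be sketched as below; the averaging of the two modalities and the blending weight alpha are illustrative assumptions rather than the paper's exact schedule.

```python
import numpy as np

def cosine_similarity_matrix(x):
    """Pairwise cosine similarity of row vectors."""
    x = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-12)
    return x @ x.T

def update_similarity(S, img_emb, txt_emb, alpha):
    """One step of a dynamic similarity-matrix adjustment: blend the current
    matrix S with a matrix computed from the learned embeddings of both
    modalities, so the matrix drifts away from the pre-extracted features."""
    S_learned = 0.5 * (cosine_similarity_matrix(img_emb)
                       + cosine_similarity_matrix(txt_emb))
    return (1.0 - alpha) * S + alpha * S_learned

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S = cosine_similarity_matrix(rng.random((6, 32)))   # from pre-extracted features
    for epoch in range(3):
        img_emb, txt_emb = rng.random((6, 16)), rng.random((6, 16))
        S = update_similarity(S, img_emb, txt_emb, alpha=0.1 * (epoch + 1))
    print(S.shape)
```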
7.
Constructing a perceptual image hash from block similarity coefficients (cited 1 time: 0 self-citations, 1 by others)
A perceptually robust image hash based on block similarity coefficients is proposed. The image is first preprocessed and then partitioned into overlapping blocks; under the control of a secret key, a Gaussian low-pass filter is used to generate a pseudorandom reference block, and the correlation coefficient between each block and the reference block is computed to obtain the image feature sequence. Adjacent pairs of block feature values are then merged to shorten the hash, and the compressed feature sequence is permuted to further improve the security of the hash. Finally, the normalized feature values are quantized and encoded with Huffman coding, compressing the hash length further. Theoretical analysis and experimental results show that the hash is robust to common image processing operations such as JPEG compression, moderate noise, watermark embedding, image scaling, and Gaussian low-pass filtering, distinguishes different images effectively with a low collision probability, and can be used for image tamper detection.
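A condensed Python sketch of the hash construction described above follows; the final Huffman coding step is omitted, and the filter, block, and quantization parameters are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def perceptual_hash(img, key, block=32, step=16, levels=4):
    """Sketch of the block-correlation hash: a key-seeded pseudorandom block is
    smoothed with a Gaussian low-pass filter to form the reference block, each
    overlapping image block is correlated with it, adjacent feature values are
    merged in pairs, the sequence is permuted with the key, and the normalized
    values are quantized. Huffman coding of the quantized symbols is omitted."""
    rng = np.random.default_rng(key)
    ref = gaussian_filter(rng.random((block, block)), sigma=3.0).ravel()
    img = img.astype(float)
    feats = []
    h, w = img.shape
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            patch = img[y:y + block, x:x + block].ravel()
            r = np.corrcoef(patch, ref)[0, 1]
            feats.append(0.0 if np.isnan(r) else r)
    feats = np.asarray(feats)
    if len(feats) % 2:                            # pad so adjacent pairs merge cleanly
        feats = np.append(feats, feats[-1])
    merged = feats.reshape(-1, 2).mean(axis=1)    # merge adjacent feature values
    merged = merged[rng.permutation(len(merged))]            # key-controlled shuffle
    norm = (merged - merged.min()) / (np.ptp(merged) + 1e-12)  # normalize to [0, 1]
    return np.floor(norm * (levels - 1) + 0.5).astype(int)     # uniform quantization

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    print(perceptual_hash(rng.random((128, 128)), key=2024)[:16])
```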
8.
The problem of efficiently finding similar items in a large corpus of high-dimensional data points arises in many real-world tasks, such as music, image, and video retrieval. Beyond the scaling difficulties that arise with lookups in large data sets, the complexity in these domains is exacerbated by an imprecise definition of similarity. In this paper, we describe a method to learn a similarity function from only weakly labeled positive examples. Once learned, this similarity function is used as the basis of a hash function to severely constrain the number of points considered for each lookup. When tested on a large real-world audio dataset, only a tiny fraction of the points (~0.27%) is ever considered for each lookup. To increase efficiency, no comparisons in the original high-dimensional space of points are required. The performance far surpasses, in terms of both efficiency and accuracy, a state-of-the-art Locality-Sensitive-Hashing-based (LSH) technique for the same problem and data set.
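As a rough illustration of how a hash function constrains lookups, the sketch below buckets points by a binary code from a linear projection; the projection here is random, standing in for the learned similarity function, and no distances in the original high-dimensional space are computed at lookup time.

```python
import numpy as np
from collections import defaultdict

class LearnedHashIndex:
    """Points sharing a binary code land in the same bucket, so a lookup only
    considers that bucket. The projection is random here, a placeholder for
    weights a learned similarity function would supply."""

    def __init__(self, dim, bits, seed=0):
        rng = np.random.default_rng(seed)
        self.proj = rng.standard_normal((dim, bits))  # placeholder for learned weights
        self.buckets = defaultdict(list)

    def code(self, x):
        return tuple((x @ self.proj > 0).astype(int))

    def add(self, item_id, x):
        self.buckets[self.code(x)].append(item_id)

    def lookup(self, x):
        return self.buckets.get(self.code(x), [])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    index = LearnedHashIndex(dim=64, bits=10)
    data = rng.standard_normal((1000, 64))
    for i, v in enumerate(data):
        index.add(i, v)
    print(len(index.lookup(data[0])), "candidates out of", len(data))
```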
9.
Based on hashing, hierarchical partitioning, and a "discard the large, keep the small" filtering principle, and by constructing a linear monotone hash function, a parallel algorithm for the (m,n)-selection problem is presented, and its implementation complexity is analyzed on a shared-memory parallel system model with p processors. Theoretical analysis and simulation results show that the algorithm is a scalable, concise, practical, and fast parallel selection algorithm.
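A sequential Python sketch of the bucketing idea is given below; the linear monotone hash, the bucket count, and the "discard large, keep small" scan are written for one processor only, whereas the paper distributes the work over p processors.

```python
def select_m_smallest(values, m, num_buckets=64):
    """Sequential sketch: a linear monotone (order-preserving) hash spreads
    values over buckets, buckets are scanned from small to large, and whole
    buckets beyond the m-th element are discarded ("discard large, keep small")."""
    lo, hi = min(values), max(values)
    if lo == hi:
        return sorted(values)[:m]
    span = hi - lo
    # Linear monotone hash: larger values never map to a smaller bucket index.
    bucket_of = lambda v: min(int((v - lo) * num_buckets / span), num_buckets - 1)
    buckets = [[] for _ in range(num_buckets)]
    for v in values:
        buckets[bucket_of(v)].append(v)
    kept, need = [], m
    for b in buckets:
        if need <= 0:
            break                          # remaining buckets hold only larger values
        if len(b) <= need:
            kept.extend(b)                 # keep the whole bucket
            need -= len(b)
        else:
            kept.extend(sorted(b)[:need])  # only part of this bucket is needed
            need = 0
    return sorted(kept)

if __name__ == "__main__":
    import random
    data = [random.random() for _ in range(10_000)]
    print(select_m_smallest(data, 5))
```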
10.
The sign languages used by deaf communities around the world represent a linguistic challenge that natural-language researchers in AI have only recently begun to take up. This challenge is particularly relevant to research in Machine Translation (MT), as natural sign languages have evolved in deaf communities into efficient modes of gestural communication, which differ from English not only in modality but in grammatical structure, exploiting a higher dimensionality of spatial expression. In this paper we describe Zardoz, an ongoing AI research system that tackles the cross-modal MT problem, translating English text into fluid sign language. The paper presents an architectural overview of Zardoz, describing its central blackboard organization, the nature of its interlingual representation, and the major components which interact through this blackboard both to analyze the verbal input and to generate the corresponding gestural output in one of a number of sign variants.