1.
In a typical embedded CPU, large on-chip storage is critical to meeting high performance requirements. However, the rapidly increasing size of on-chip storage based on traditional SRAM cells makes the area cost and energy consumption unsustainable for future embedded applications. Replacing SRAM with DRAM on the CPU's chip is generally considered not worthwhile because DRAM is incompatible with standard CMOS logic and requires additional processing steps beyond those of a CMOS flow. However, a special DRAM technology, Gain-Cell embedded DRAM (GC-eDRAM) [1], [2], [3], is logic-compatible and retains DRAM's desirable properties of small cell size and low power. In this paper, we evaluate the performance of a novel hybrid cache memory in which the data array, conventionally built from SRAM cells, is replaced with GC-eDRAM cells while the tag array continues to use SRAM cells. Our evaluation demonstrates that, compared with conventional SRAM-based designs, the proposed architecture offers comparable performance with lower energy consumption and smaller silicon area, enabling sustainable on-chip storage scaling for future embedded CPUs.
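To make the tradeoff concrete, here is a minimal simulation sketch (an illustration under stated assumptions, not the paper's design) of a cache whose tag array behaves like SRAM while the data array models GC-eDRAM retention: a line not refreshed within an assumed retention window must be refetched even on a tag match. All names and parameters are invented for illustration.

```python
# Minimal sketch (assumptions, not the paper's design): a direct-mapped
# cache whose data array models GC-eDRAM retention. A line not refreshed
# within RETENTION_CYCLES is treated as decayed and must be refetched,
# while the SRAM tag array never decays.
RETENTION_CYCLES = 10_000   # assumed gain-cell retention time, in cycles
NUM_SETS = 256              # 2**8 sets
LINE_BITS = 6               # 64-byte lines

class HybridCache:
    def __init__(self):
        self.tags = [None] * NUM_SETS        # SRAM tag array
        self.refreshed_at = [0] * NUM_SETS   # last write/refresh per data line

    def access(self, addr: int, now: int) -> str:
        idx = (addr >> LINE_BITS) % NUM_SETS
        tag = addr >> (LINE_BITS + 8)        # bits above the set index
        if self.tags[idx] == tag:
            if now - self.refreshed_at[idx] <= RETENTION_CYCLES:
                return "hit"
            self.refreshed_at[idx] = now     # data decayed: refetch the line
            return "retention-miss"
        self.tags[idx] = tag                 # ordinary miss: fill the line
        self.refreshed_at[idx] = now
        return "miss"
```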
2.
In-network caching in Named Data Networking (NDN)-based Internet of Things (IoT) networks plays a central role in efficient data dissemination. Data cached throughout the network may quickly become obsolete, as IoT data are transient and frequently updated by their producers. NDN-based IoT networks therefore impose stringent data-freshness requirements. While various cache replacement policies have been proposed, none has considered the freshness requirement. In this paper, we introduce a novel cache replacement policy called Least Fresh First (LFF) that integrates the cache freshness requirement: LFF evicts invalid cached contents based on time-series forecasting of sensors' future events. Extensive simulations are performed to evaluate the performance of LFF and to compare it against well-known cache replacement policies in ICN-based IoT networks. The results show that LFF significantly improves data freshness compared with the other policies, while also improving the server hit reduction ratio, the hop reduction ratio, and the response latency.
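As a hedged illustration of the LFF idea, the sketch below evicts the cached item whose predicted freshness expires soonest; the forecaster is a mean-interval stub standing in for the paper's time-series forecasting, and all names are assumptions.

```python
# Hedged sketch of Least Fresh First: evict the entry whose predicted
# freshness expires soonest. predict_expiry() is a mean-interval stub
# standing in for the paper's time-series forecaster.
class LFFCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = {}                      # name -> (data, predicted_expiry)

    def predict_expiry(self, updates: list[float]) -> float:
        # Assume the next update arrives after the mean past interval.
        gaps = [b - a for a, b in zip(updates, updates[1:])]
        mean_gap = sum(gaps) / len(gaps) if gaps else float("inf")
        return updates[-1] + mean_gap

    def insert(self, name: str, data, updates: list[float]):
        if name not in self.store and len(self.store) >= self.capacity:
            victim = min(self.store, key=lambda n: self.store[n][1])
            del self.store[victim]           # Least Fresh First eviction
        self.store[name] = (data, self.predict_expiry(updates))
```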
3.
System performance improvements are critical in the resource-limited environment of multiple integrated applications executing inside a single distributed real-time and embedded (DRE) system, such as an integrated avionics or vetronics platform. While processor caches can effectively reduce execution time, several factors, such as cache size, system data sharing, and the task execution schedule, make it hard to quantify, predict, and optimize the cache usage of a DRE system. This article presents SMACK, a novel heuristic for estimating the hardware cache usage of a DRE system, and describes a method of varying the runtime behavior of DRE system software without (1) requiring extensive safety recertification or (2) violating real-time scheduling deadlines. Using SMACK as a maximization target, we reduced integrated DRE system execution time by an average of 2.4% and a maximum of 4.34%.
4.
5.
闵可静  陈勇 《软件》2012,(6):113-115
With the continuous development of computer technology, image matching has become an important part of image processing. Among image-matching methods, grayscale matching offers high matching accuracy but requires a great deal of computation time, and that time grows sharply as the template size increases. This paper tackles the long computation time in a multi-core environment by combining memory optimization with processor-affinity optimization. Experimental results show that parallelization combined with these memory and processor optimizations greatly reduces computation time, raises the cache hit rate, and avoids the ping-pong effect, improving both the speedup and the parallel efficiency of the parallel program.
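The processor-affinity part of this approach can be illustrated directly: the Linux-only sketch below pins each worker process to one core so its tile's working set stays warm in that core's cache and lines do not ping-pong between cores. The template-matching kernel itself is stubbed out.

```python
# Linux-only sketch: pin each worker to one core with os.sched_setaffinity
# so its tile's working set stays warm in that core's cache. The grayscale
# template-matching kernel is stubbed out.
import os
from multiprocessing import Process

def worker(core: int, tile_id: int):
    os.sched_setaffinity(0, {core})      # 0 = this process
    # ... match the template against image tile `tile_id` here ...

if __name__ == "__main__":
    cores = range(os.cpu_count() or 1)
    procs = [Process(target=worker, args=(c, c)) for c in cores]  # one tile per core
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```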
6.
Efficient access to the contents provided through OGC web services, widely used in environmental information systems, is usually achieved by means of caching strategies. Service owners may wish to express the conditions under which caching is allowed. If these conditions are expressed in a machine-readable way, automatic harvesters can be programmed to follow them. This paper proposes a protocol for specifying and following cache policies for OGC web services, expressed in a machine-readable language. A preliminary implementation of this protocol has been tested in the EuroGeoSource project, where a number of Web Feature Services providing mineral deposits and energy resources are periodically cached to improve the efficiency and availability of several applications. The protocol addresses a case that is common today and could be extended to allow for more detailed policies. Further work will help determine how it could be integrated into a full Digital Rights Management system.
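The paper's policy language is not reproduced here; as a purely hypothetical illustration, the sketch below assumes a service publishes a small JSON policy (URL and field names invented) stating whether caching is allowed and how often a harvester may re-cache.

```python
# Hypothetical policy document and harvester loop; the URL and the field
# names are invented for illustration only.
import json
import time
import urllib.request

POLICY_URL = "https://example.org/wfs/cache-policy.json"   # hypothetical

def run_harvester(fetch_and_cache):
    policy = json.load(urllib.request.urlopen(POLICY_URL))
    if not policy.get("caching_allowed", False):
        return                              # the owner forbids caching
    interval = policy.get("refresh_interval_s", 86400)
    while True:
        fetch_and_cache()                   # re-cache the service contents
        time.sleep(interval)                # honour the declared cadence
```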
7.
涂卫平 《电声技术》2011,35(11):54-59
Addressing the implementation and optimization of a low-bit-rate speech coder on a DSP, this paper studies on-chip cache allocation strategies. Based on the size of the instruction cache and the amount of data the program processes, the program is divided into reasonably sized segments that are loaded into the cache in stages. The data-cache allocation takes the cache structure and the characteristics of the data themselves into account, so that the limited data cache is fully utilized. The entire lifetime of the data is considered, so that data already loaded into the cache...
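The segmentation step can be sketched as a simple greedy packing: program sections are grouped into segments no larger than the instruction cache, so each phase can be loaded into the cache as a whole. The sizes and cache size below are illustrative assumptions.

```python
# Greedy packing of program sections into instruction-cache-sized segments
# (illustrative sizes; an oversized section simply becomes its own segment).
ICACHE_BYTES = 16 * 1024                    # assumed instruction-cache size

def segment(section_sizes: list[int]) -> list[list[int]]:
    segments, current, used = [], [], 0
    for size in section_sizes:
        if current and used + size > ICACHE_BYTES:
            segments.append(current)        # close the full segment
            current, used = [], 0
        current.append(size)
        used += size
    if current:
        segments.append(current)
    return segments
```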
8.
This paper proposes and implements a highly compressed V-LRU algorithm for a four-way set-associative cache. The algorithm compresses the valid bits and the approximate-LRU flag bits down to only 4 bits, greatly reducing circuit area while leaving the cache miss rate essentially unchanged. For an 8 kByte cache, the miss rate of the highly compressed V-LRU algorithm is essentially the same as that of the 7-bit comparison-based approximate V-LRU and the 5-bit multiplexed approximate V-LRU, and is only about 0.9% higher than that of the 9-bit approximate V-LRU. In SMIC 0.13 μm technology, the circuit area of the highly compressed V-LRU algorithm is 10,925.8 μm², 6,415.5 μm², and 2,142.1 μm² smaller than that of the 9-bit, 7-bit, and 5-bit V-LRU algorithms, respectively. Moreover, as the cache capacity increases, the miss-rate differences among the four approximate V-LRU algorithms become even smaller, while the area advantage of the highly compressed V-LRU algorithm becomes more pronounced.
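The paper's exact 4-bit encoding is not given here; as a hedged illustration of the same family of approximate-LRU schemes, the sketch below implements the classic tree-based pseudo-LRU for one 4-way set, which needs only 3 status bits per set.

```python
# Standard tree-based pseudo-LRU for one 4-way set (3 bits), shown as a
# hedged stand-in for the paper's 4-bit compressed encoding.
class TreePLRU4:
    def __init__(self):
        self.b0 = self.b1 = self.b2 = 0   # root, left-pair, right-pair bits

    def touch(self, way: int):
        # Point each bit away from the way just used.
        if way < 2:
            self.b0, self.b1 = 1, 1 - (way & 1)   # victim side: right half
        else:
            self.b0, self.b2 = 0, 1 - (way & 1)   # victim side: left half

    def victim(self) -> int:
        # Follow the bits toward the approximately least recent way.
        if self.b0 == 0:
            return 0 if self.b1 == 0 else 1
        return 2 if self.b2 == 0 else 3
```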
9.
An Improved Cache Trace-Driven Attack on AES and CLEFIA
By analyzing "Cache miss" trace information and the misaligned distribution of S-boxes in the cache, this paper proposes an improved trace-driven attack on AES and CLEFIA. Most existing attacks assume the S-box is aligned in the cache, and the existing first-round trace-driven attacks on AES and CLEFIA cannot recover the first-round expanded key within a feasible search complexity. This study shows that in most cases the S-box distribution in the cache is in fact misaligned. By collecting "Cache miss" trace information during encryption, analysis of the first and last AES rounds with 200 and 50 samples, respectively, reduces the 128-bit AES master-key search space to 2^16 and to 1; analysis of the first CLEFIA round with 80 samples reduces the 128-bit CLEFIA first-round expanded-key search space to 2^16; and analysis of the first three rounds with 220 samples reduces the 128-bit CLEFIA master-key search space to 2^16, all within 1 s.
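A toy sketch of the underlying first-round principle, using the simplifying aligned-S-box layout that the paper improves upon: if each cache line holds 16 S-box entries, the line touched for byte i is (p[i] XOR k[i]) >> 4, so each observed line access keeps only the key-byte candidates consistent with it. The trace samples below are invented.

```python
# Toy pruning of one key byte from first-round accesses, assuming an
# aligned S-box with 16 entries per cache line (the simplification the
# paper relaxes). Trace samples below are invented.
def prune_key_byte(cands: set[int], p_byte: int, observed_line: int) -> set[int]:
    return {k for k in cands if (p_byte ^ k) >> 4 == observed_line}

cands = set(range(256))                     # all candidates for one key byte
for p, line in [(0x3A, 6), (0x91, 12)]:     # (plaintext byte, S-box line seen)
    cands = prune_key_byte(cands, p, line)
# In this toy model one observation already pins the high nibble of the key
# byte (16 candidates remain); the real attack combines miss/hit patterns
# across rounds to narrow the space further.
```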
10.
In a multi-core environment, optimizing the shared L2 cache is particularly important, because when a requested data block is not in the L2 cache (an L2 miss), the CPU pays the considerable cost of several hundred cycles to access main memory. The replacement algorithm is an important consideration in cache design, and its quality directly affects cache performance and overall system performance. Although the LRU replacement algorithm is widely used in on-chip caches, it has shortcomings: when the cache capacity is smaller than the program's working set, conflict misses arise easily, and LRU does not consider how frequently a block is accessed. This paper applies a bubble replacement algorithm to the multi-core shared cache, taking into account both the frequency with which a block is accessed and how recently it was accessed. Analysis of the experimental data shows that, compared with LRU, the bubble replacement algorithm improves both MPKI (misses per kilo-instructions) and the L2 cache hit rate.
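One common reading of the bubble scheme, sketched below as an assumption rather than the paper's exact design: on a hit, a block swaps one position toward the protected head, so frequently hit blocks rise gradually instead of jumping straight to the front as in LRU, and the victim is taken from the tail.

```python
# Hedged sketch of a bubble-style replacement set: hits promote a block by
# one position; misses fill the tail slot (the victim). Index 0 is the
# most-protected position.
class BubbleSet:
    def __init__(self, ways: int):
        self.lines = [None] * ways

    def access(self, tag) -> bool:
        if tag in self.lines:
            i = self.lines.index(tag)
            if i > 0:                        # bubble one slot toward the head
                self.lines[i - 1], self.lines[i] = self.lines[i], self.lines[i - 1]
            return True                      # hit
        self.lines[-1] = tag                 # miss: evict the tail block
        return False
```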