Similar Literature
20 similar records found (search time: 157 ms)
1.
Research on multi-version incremental-base distribution optimization in collaborative packaging product design
刘峰  纪钢 《包装工程》2011,32(1):22-24,32
To improve the efficiency of querying historical versions during collaborative packaging product design, an intermediate full-base-version selection algorithm, using inter-version distance as its metric, was designed and implemented for a tree-structured multi-version evolution model that combines full versions with deltas. Experiments show that the incremental-restore base versions chosen by this algorithm further improve multi-version restoration efficiency; the algorithm is also simple to use, free of complex control parameters, and keeps the version tree's data redundancy easy to control.
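A minimal sketch of the idea, not the paper's algorithm: in a version tree where each node stores a delta against its parent, restoring a version means applying every delta on the path up to the nearest materialized full base, so a distance-based heuristic can pick which intermediate version to promote to a full base. All names and the greedy selection rule here are illustrative assumptions.

```python
def restore_cost(parent, full_bases, version):
    """Count the deltas applied to restore `version` from its nearest full base."""
    cost = 0
    while version not in full_bases:
        version = parent[version]
        cost += 1
    return cost

def best_intermediate_base(parent, full_bases, versions):
    """Greedily pick the version whose promotion to a full base most reduces
    the total restore cost over all versions (a distance-based heuristic)."""
    def total(bases):
        return sum(restore_cost(parent, bases, v) for v in versions)
    return min((v for v in versions if v not in full_bases),
               key=lambda v: total(full_bases | {v}))

# Linear chain v0 <- v1 <- ... <- v6 with only the root materialized:
parent = {f"v{i}": f"v{i-1}" for i in range(1, 7)}
versions = [f"v{i}" for i in range(7)]
print(best_intermediate_base(parent, {"v0"}, versions))  # a mid-chain version wins
```

On a linear chain the heuristic naturally lands near the middle, splitting the longest delta chains.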

2.
To increase the effective capacity of caches, this work studies cache compression and proposes a Region Cooperative Compression (RCC) method to raise the compression ratio of the last-level cache. Unlike traditional cache compression algorithms, RCC exploits compression locality within a cache region: the dictionary information of the first cache block in a region is used cooperatively to compress each of the other blocks in that region, without compressing the region as a whole. RCC effectively exploits data redundancy among the blocks of a region, achieving a compression ratio close to dictionary compression at region granularity, while compression and decompression latency remain comparable to compressing a single block. Experimental results show that, compared with the single-block compression algorithm C-PACK, RCC improves the compression ratio by 12.34% on average and system performance by 5%; compared with an uncompressed cache of twice the capacity, effective capacity rises by 27%, system performance improves by 8.6%, and area shrinks by 63.1%.
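The dictionary-sharing idea can be illustrated in software with zlib's preset-dictionary feature as a stand-in for the paper's hardware dictionary (this is our own analogy, not RCC's circuit): the first block of a region serves as the shared dictionary, and every other block is compressed individually against it.

```python
import zlib

def compress_region(blocks):
    """Compress each block separately, seeding zlib with the first block."""
    region_dict = blocks[0]
    out = [zlib.compress(blocks[0])]            # first block: self-contained
    for block in blocks[1:]:
        c = zlib.compressobj(zdict=region_dict)  # shared dictionary
        out.append(c.compress(block) + c.flush())
    return out

def decompress_region(compressed):
    region_dict = zlib.decompress(compressed[0])
    blocks = [region_dict]
    for data in compressed[1:]:
        d = zlib.decompressobj(zdict=region_dict)
        blocks.append(d.decompress(data) + d.flush())
    return blocks

# Cache blocks within one region tend to share byte patterns:
region = [bytes(f"row-{i:04d}-value-{i % 4}" * 4, "ascii") for i in range(4)]
print(decompress_region(compress_region(region)) == region)
```

Each non-first block is decompressible knowing only the first block of its region, mirroring RCC's property that blocks need not be decompressed as a whole region.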

3.
To overcome the brief blocking that existing multi-version concurrency control (MVCC) incurs during concurrent data access and to achieve fully concurrent reads and writes, a copy-on-write multi-version concurrent B+tree (BCMVBT) index structure is proposed. By copying, BCMVBT separates the operating spaces of readers and writers so that read and write transactions execute fully concurrently at any moment, avoiding the high CPU cost of compare-and-swap (CAS) operations and achieving full concurrency in single-writer multi-reader scenarios. To address the complex range-query operations of the existing multi-version B+tree (MVBT), a lock-free BCMVBT range-query algorithm and a reclamation mechanism are also proposed, yielding lock-free concurrent insertion, query, update, and reclamation. Compared with a transactional MVBT, BCMVBT reduces time consumption by 50% under concurrent read-write workloads, and further experiments show that BCMVBT has a greater advantage under large transactions.
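The copy-on-write mechanism can be sketched with a plain binary search tree standing in for the B+tree (a simplification, not BCMVBT itself): a writer rebuilds only the path from the root to the modified node and publishes a new root, while readers keep traversing their old root snapshot untouched.

```python
class Node:
    __slots__ = ("key", "val", "left", "right")
    def __init__(self, key, val, left=None, right=None):
        self.key, self.val, self.left, self.right = key, val, left, right

def insert(root, key, val):
    """Return a NEW root; unmodified subtrees are shared, not copied."""
    if root is None:
        return Node(key, val)
    if key < root.key:
        return Node(root.key, root.val, insert(root.left, key, val), root.right)
    if key > root.key:
        return Node(root.key, root.val, root.left, insert(root.right, key, val))
    return Node(key, val, root.left, root.right)   # overwrite in the copy

def lookup(root, key):
    while root is not None:
        if key == root.key:
            return root.val
        root = root.left if key < root.key else root.right
    return None

snapshot = None
for k in [5, 2, 8]:
    snapshot = insert(snapshot, k, f"v{k}")
new_root = insert(snapshot, 2, "updated")
print(lookup(snapshot, 2), lookup(new_root, 2))  # old readers still see "v2"
```

Because readers never see a partially modified path, no locks or CAS loops are needed on the read side; publishing the new root is the only synchronization point.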

4.
To increase the data throughput of the JPEG2000 wavelet transform, a new VLSI architecture for the row-column-mode 2D forward/inverse wavelet transform is proposed. In this architecture, the block memory is partitioned into eight dual-port SRAMs using diagonal storage, and the 1D discrete wavelet processing unit is designed with pipelining. The architecture supports both the JPEG2000 5/3 and 9/7 wavelets and saves the code-block memory required by entropy coding. The 1D discrete wavelet unit processes four pixels per cycle. Tests show that, running at 20 MHz, the design completes a three-level wavelet decomposition of a 512×512×8-bit grayscale image ("lenna") in 13.31 ms with an external data buffer and in 16.6 ms without one.
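For reference, the reversible 5/3 transform the hardware implements reduces to two integer lifting steps (predict, then update) with symmetric boundary extension; the sketch below shows the 1D case for even-length signals. The paper's contribution is the VLSI organization, not these equations.

```python
def _mx(i, n):
    """Whole-sample symmetric extension of a signal index."""
    if i < 0:
        i = -i
    if i >= n:
        i = 2 * (n - 1) - i
    return i

def fwd53(x):
    """Forward reversible 5/3 lifting (even-length input assumed)."""
    n, h = len(x), len(x) // 2
    xe = lambda k: x[_mx(2 * k, n)]          # extended even samples
    xo = lambda k: x[_mx(2 * k + 1, n)]      # extended odd samples
    d = [xo(i) - (xe(i) + xe(i + 1)) // 2 for i in range(h)]   # predict
    dd = lambda k: d[(_mx(2 * k + 1, n) - 1) // 2]
    s = [xe(i) + (dd(i - 1) + dd(i) + 2) // 4 for i in range(h)]  # update
    return s, d

def inv53(s, d):
    """Inverse: undo the update step, then the predict step."""
    h = len(d); n = 2 * h
    dd = lambda k: d[(_mx(2 * k + 1, n) - 1) // 2]
    even = [s[i] - (dd(i - 1) + dd(i) + 2) // 4 for i in range(h)]
    ee = lambda k: even[_mx(2 * k, n) // 2]
    odd = [d[i] + (ee(i) + ee(i + 1)) // 2 for i in range(h)]
    return [v for pair in zip(even, odd) for v in pair]

x = [121, 118, 120, 127, 131, 128, 125, 124]
s, d = fwd53(x)
print(s, d)
```

Because every lifting step is an exact integer operation undone by its mirror image, reconstruction is lossless, which is what makes the 5/3 filter suitable for reversible coding.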

5.
To improve the synchronous write performance of the ZFS file system on Linux, the causes of the low block-device synchronous write performance of the existing Linux ZFS implementation, and of the marked drop in throughput under heavy synchronous write load, were analyzed, and a synchronous write optimization method for ZFS on Linux is proposed. The method reduces the number of I/O requests generated by log commits through a merged-commit policy for synchronous writes, and improves synchronous write throughput through a global load-balancing mechanism that balances synchronous writes across disks. Experimental results show that, compared with the original Linux ZFS implementation, the optimization improves synchronous write performance by 23.9% to 88.14%.
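The merged-commit policy is essentially group commit; a toy simulation (our own simplification, not the paper's ZIL code) shows how letting synchronous writes that arrive within a short window share one log commit cuts the number of flush I/Os.

```python
def commits_without_merge(arrival_times):
    """Baseline: every synchronous write issues its own log commit."""
    return len(arrival_times)

def commits_with_merge(arrival_times, window):
    """Merged commit: writes arriving within `window` of a batch's start
    share a single flush. `arrival_times` must be sorted."""
    flushes = 0
    batch_deadline = None
    for t in arrival_times:
        if batch_deadline is None or t > batch_deadline:
            flushes += 1                # open a new batch with one flush
            batch_deadline = t + window
    return flushes

arrivals = [0.0, 0.1, 0.2, 1.5, 1.6, 3.0]
print(commits_without_merge(arrivals), commits_with_merge(arrivals, 0.5))
```

The trade-off is a small added commit latency (up to one window) in exchange for far fewer device flushes under bursty synchronous load.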

6.
Objective: To improve the accuracy of continuous sign language recognition and ease communication barriers between deaf and hearing people. Methods: A continuous sign language recognition algorithm based on a global attention mechanism and LSTM is proposed. Video data are preprocessed with inter-frame differencing to remove redundant frames, and feature sequences are extracted with a ResNet network. Attention weighting yields global sign-state features, and an LSTM performs temporal analysis, forming a continuous sign language recognition algorithm based on global attention and LSTM. Results: Experiments show that the algorithm achieves an average recognition rate of 90.08% and an average word error rate of 41.2% on the Chinese continuous sign language dataset CSL; compared with five other algorithms, the method has advantages in recognition accuracy and translation performance. Conclusion: The algorithm based on a global attention mechanism and LSTM achieves continuous sign language recognition with good recognition and translation performance, and is of positive significance for helping deaf people integrate into society without barriers.
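The inter-frame differencing preprocessing step can be sketched as follows (the threshold rule and toy 4-pixel frames are illustrative assumptions, not the paper's parameters): frames whose mean absolute pixel difference from the previously kept frame falls below a threshold are treated as redundant and dropped.

```python
def drop_redundant_frames(frames, threshold):
    """frames: list of equally sized flat pixel lists; return kept indices."""
    kept = [0]                                  # always keep the first frame
    for i in range(1, len(frames)):
        ref = frames[kept[-1]]
        diff = sum(abs(a - b) for a, b in zip(frames[i], ref)) / len(ref)
        if diff >= threshold:                   # enough motion: keep it
            kept.append(i)
    return kept

static = [0, 0, 0, 0]
moving = [0, 50, 50, 0]
video = [static, static, moving, moving, static]
print(drop_redundant_frames(video, threshold=10))
```

Dropping near-duplicate frames shortens the sequence the ResNet and LSTM must process without discarding the motion that carries the signing.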

7.
Using immersive virtual reality under a typical spatial-updating paradigm, this study examined (1) the effect of movement on scene recognition and (2) the validity of immersive virtual reality itself for scene recognition research. The between-subjects variable was the test viewpoint (same as, or different from, the learning viewpoint); the within-subjects variable was the table condition (stationary vs. rotated). The dependent variable was the rate of correctly judging changes in object positions. Results showed: (1) a viewpoint-dependence effect: recognition was significantly better at the same test viewpoint than at a different one; (2) a spatial-updating effect: at both the same and different test viewpoints, recognition was significantly better with the table stationary than rotated; (3) the immersive virtual environment produced the same pattern of results as real environments, indicating that immersive virtual reality is a highly effective method for scene recognition research.

8.
生滨  高文  吴迪 《高技术通讯》2006,16(7):666-670
A VLSI architecture for the in-loop filter of the AVS video compression standard is proposed. By storing the data of horizontally adjacent blocks separately and using two configurable row-column transposition arrays, the architecture improves data utilization, reduces occupancy of the system data bus bandwidth, and speeds up loop filtering. In a 0.18 μm CMOS process at a working frequency of 150 MHz, the design consumes about 38K equivalent gates. Simulation results show that the circuit's throughput is sufficient for real-time loop filtering of AVS HDTV content (1280×720 at 60 frames/s). The architecture can be used in AVS codec chips.

9.
1. Introduction. Currently, before digital TV network nodes are deployed, a server preconfigures each node with keys or with information from which keys can be generated. The nodes then use this presaved information to establish keys among themselves in a self-organizing, distributed way. Because node storage and energy are limited, the simplest key protocol is for all nodes to share a single key; but if one node is captured and its key extracted, all security is lost. Chan, Perrig, and Song, building on the EG protocol, proposed the q-composite scheme and multipath reinforcement, which improve the EG protocol's security at a certain cost. When the adversary captures few nodes, the q-composite scheme offers better security, but as more nodes are captured, larger q actually degrades performance; q-composite trades extra computation for security. Multipath reinforcement improves security at the cost of extra communication, and whether that trade is worthwhile depends on the application. Analysis shows that, with the same storage space, the model can make the digital TV network topology securely connected, and it outperforms the random key-pair model. The pairwise key generation model proposed by C. Blundo is based on a symmetric bivariate polynomial f(x, y) of degree t over a finite field F(q); f(x, y) must satisfy the symmetry property f(x, y) = f(y, x).
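Blundo's pairwise key idea can be sketched with toy parameters (this is an illustrative, insecure miniature, not a deployable scheme): the server picks a symmetric bivariate polynomial f(x, y) over F(q); node i stores the univariate share g_i(y) = f(i, y), and any two nodes i, j independently derive the same key because g_i(j) = f(i, j) = f(j, i) = g_j(i).

```python
Q = 2**31 - 1  # a prime modulus (toy field size)

def make_symmetric_poly(coeffs):
    """A symmetric coefficient matrix (coeffs[a][b] == coeffs[b][a])
    defines a symmetric polynomial f(x, y) = sum c[a][b] x^a y^b mod Q."""
    def f(x, y):
        return sum(coeffs[a][b] * pow(x, a, Q) * pow(y, b, Q)
                   for a in range(len(coeffs))
                   for b in range(len(coeffs))) % Q
    return f

def node_share(f, node_id):
    """What a node stores: the univariate slice f(node_id, ·)."""
    return lambda y: f(node_id, y)

coeffs = [[7, 3, 5],
          [3, 11, 2],
          [5, 2, 13]]            # symmetric -> f(x, y) == f(y, x)
f = make_symmetric_poly(coeffs)
g_alice = node_share(f, 101)
g_bob = node_share(f, 202)
print(g_alice(202) == g_bob(101))   # both sides derive the same pairwise key
```

In the real scheme the degree t bounds collusion resistance: any t or fewer captured shares reveal nothing about other pairs' keys.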

10.
To improve the efficiency of collecting grid resource monitoring events via the Simple Object Access Protocol (SOAP) in a service-oriented grid environment, and based on an analysis of the inefficiency of transmitting monitoring events one by one and of the excessive SOAP processing load on the collector, an efficient collection method is proposed for monitoring events with long lifetimes: caching within each resource and cooperation across resources. Within the lifetime the monitoring events allow, as many events as possible are first aggregated across resources, then sent to the collector together in a single SOAP message. Experimental results show that the method not only reduces the SOAP transmission overhead of individual resource monitoring events by 50% to 85%, but also lowers the collector's SOAP processing load by about 75%.

11.
Efficient cache management plays a vital role in in-memory data-parallel systems such as Spark, Tez, Storm and HANA. Recent research, notably on the Least Reference Count (LRC) and Most Reference Distance (MRD) policies, has shown that dependency-aware cache management that considers the application's directed acyclic graph (DAG) performs well in Spark. However, these policies ignore deeper relationships between RDDs and may cache redundant RDDs that share the same child RDDs, which degrades memory performance. Hence, in memory-constrained situations, systems may hit a performance bottleneck due to frequent data block replacement. In addition, the prefetch mechanisms in some cache management policies, such as MRD, are hard to trigger. In this paper, we propose a new cache management method called RDE (Redundant Data Eviction) that fully utilizes the application's DAG information to optimize cache management. By considering both RDD dependencies and the reference sequence, we evict RDDs with redundant features and free memory for incoming data blocks. Experiments show that RDE improves performance by an average of 55% compared to LRU, and by up to 48% and 20% compared to LRC and MRD, respectively. RDE is also less sensitive to memory bottlenecks, which means better availability in memory-constrained environments.
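A minimal sketch of the redundancy idea (our own simplification, not RDE's actual policy): among cached RDDs, one whose remaining child set is covered by another cached RDD's children contributes no extra reuse and is evicted first; otherwise fall back to the lowest remaining reference count, LRC-style.

```python
def pick_victim(cached, children):
    """cached: set of RDD ids; children[r]: set of not-yet-computed child RDDs.
    Return the RDD to evict."""
    for r in sorted(cached):
        for other in cached:
            if other != r and children[r] and children[r] <= children[other]:
                return r          # r's future reuse is covered by `other`
    # No redundancy found: evict the least-referenced RDD (LRC fallback).
    return min(cached, key=lambda r: len(children[r]))

children = {
    "A": {"D", "E"},
    "B": {"D", "E"},   # B's remaining children duplicate A's -> redundant
    "C": {"F"},
}
print(pick_victim({"A", "B", "C"}, children))
```

The subset test is where DAG knowledge pays off: a pure reference-count policy would see A and B as equally valuable and might evict C instead.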

12.
Spark is a memory-based distributed data processing framework, and memory allocation is a central question in Spark research. A good memory allocation scheme can effectively improve task execution efficiency and memory resource utilization. Targeting the memory allocation problem in Spark 2.x, this paper optimizes the memory allocation strategy, based on minimizing the storage area and allocating the execution area on demand, by analyzing the Spark memory model, existing cache replacement algorithms, and existing memory allocation methods. The work has two parts: cache replacement optimization and memory allocation optimization. First, in the storage area, the cache replacement algorithm is optimized according to the characteristics of RDD partitions, combined with PCA dimensionality reduction: four features of each RDD partition are selected, and at each cache replacement only the two most important features, chosen by PCA, drive the decision, ensuring the generality of the replacement strategy. Second, the memory allocation strategy of the execution area is optimized according to the memory requirements of tasks and the memory occupancy of the storage area. A series of experiments in Spark on YARN mode verifies the effectiveness of the optimization algorithm and the improvement in cluster performance.
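A stand-in for the feature-reduction step (the four feature names are assumptions, and per-feature variance ranking is used here as a crude proxy for PCA loadings rather than a full PCA): each cached RDD partition is described by four features, and only the two most informative drive the replacement decision.

```python
# Assumed, illustrative feature names for an RDD partition:
FEATURES = ["compute_cost", "size", "use_count", "partition_count"]

def top2_features(samples):
    """samples: list of 4-feature vectors. Return the indices of the two
    highest-variance features (simplified proxy for PCA component weights)."""
    def var(idx):
        col = [s[idx] for s in samples]
        mean = sum(col) / len(col)
        return sum((v - mean) ** 2 for v in col) / len(col)
    return sorted(range(len(FEATURES)), key=var, reverse=True)[:2]

samples = [
    [10.0, 512, 3, 8],
    [90.0, 500, 2, 8],
    [55.0, 505, 9, 8],
]
idx = top2_features(samples)
print([FEATURES[i] for i in idx])
```

The point of reducing four features to two per decision is the same as in the paper: a lower-dimensional score keeps the replacement policy cheap and general across workloads.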

13.
Two mechanical behaviour models for the N18 alloy are proposed. The material is a powder metallurgy nickel-base superalloy hardened by a 60% volume fraction of the ordered γ′ phase. The behaviour of alloy N18 is modelled by classical constitutive equations involving plasticity and creep. The experimental data used include stress relaxation and creep tests. An updated version of the first model is proposed and compared with the experimental data set. A new model is also presented, with equations based on physical concepts. Material parameters are identified for each model, and experimental results are in good agreement with theoretical simulations.

14.
To improve IP-SAN performance, a cache system for the iSCSI environment was designed and implemented on top of the storage area network of the Tsinghua Mass Storage Network System (TH-MSNS). The system uses server memory as a data cache, completing some read and write commands directly in memory and synchronizing data to the remote network disks when storage requests are idle. Performance comparison tests show that the cache system considerably improves the performance of the IP-SAN storage system, increasing its bandwidth and reducing operation latency.

15.
To address the difficulty of debugging parallel programs on multicore processors caused by nondeterministic execution, a hardware-based fast deterministic replay method, Time Cutter (时间切割者), is proposed. It uses a parallelism-oriented recording mechanism to distinguish the memory-access instruction blocks that executed in parallel in the original run from those that did not, and during replay it avoids serializing the blocks that originally ran in parallel, keeping the performance overhead of replay low. Simulation on the multicore simulator Sim-Godson shows fast replay with a performance overhead of only about 2%. The method also requires only simple hardware support and is expected to be applicable to the development of domestic multicore processors.

16.
冯德銮  梁仕华 《工程力学》2022,39(6):134-145
Soil-rock mixture is a multiphase natural geological material composed of mineral particles of multiple size groups, with pronounced cross-scale hierarchical characteristics as a particle assembly. To account for the influence of different particle size groups on the shear strength of soil-rock mixtures, the material is divided into a two-phase composite of matrix and rock blocks according to the mesoscopic motion of the different size groups during deformation. Based on the shear-stress flow-around effect and the Eshelby-Mori-Tanaka equivalent-inclusion average stress principle, a mesoscopic shear strength model of soil-rock mixtures accounting for rock block rotation is established. In-situ specimens with different rock block contents were prepared for three groups of large-scale direct shear field tests, to analyze how rock block content affects shear strength and to determine the model parameters. Both the test results and the theory show that rock blocks significantly affect the deformation characteristics of soil-rock mixtures and that shear strength increases with rock block content. The flow-around effect of shear stress in the matrix induces rotational resistance in the blocks and stress concentration in the matrix, so that a deforming mixture stores or releases more energy than pure matrix material; this is the mesoscopic physical mechanism by which rock blocks increase shear strength. A shear stress-shear displacement formula based on this mesoscopic mechanism preliminarily verifies the consistency of theoretical predictions with the test results.

17.
The performance of flash memory is limited by its "erase-before-write" constraint, and erase operations can only be performed in a much larger unit than write operations. To address these problems, we propose an efficient flash translation layer scheme called BLF: Block List Flash Translation Layer. BLF unites log blocks and physical blocks for servicing update requests, which avoids uneven erasing and low block utilization. BLF's address translation table avoids storing an extra internal mapping table. Its garbage collection method divides the storage zone into three levels: the active zone, where hot data are stored; the inactive zone, where cold data are stored; and the transitional zone, where reclamation blocks are stored. If invalid blocks are reclaimed properly and intensively, merging log blocks with physical blocks can be avoided and the number of operations reduced. Finally, we implement an accurate flash simulator to evaluate the efficacy of BLF and compare it with other flash schemes, demonstrating that BLF substantially outperforms them.
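The three-zone layout can be sketched with a toy classifier (our own simplification, using an update-frequency threshold that BLF does not specify): blocks go to the active zone (hot), the inactive zone (cold), or the transitional zone awaiting reclamation.

```python
def classify_blocks(update_counts, hot_threshold, invalid):
    """update_counts: {block: number of updates}; invalid: fully invalidated
    blocks. Return the three-zone assignment."""
    zones = {"active": [], "inactive": [], "transitional": []}
    for blk, n in sorted(update_counts.items()):
        if blk in invalid:
            zones["transitional"].append(blk)   # reclaimable without merging
        elif n >= hot_threshold:
            zones["active"].append(blk)         # hot data
        else:
            zones["inactive"].append(blk)       # cold data
    return zones

zones = classify_blocks({"b0": 9, "b1": 1, "b2": 0, "b3": 7},
                        hot_threshold=5, invalid={"b2"})
print(zones)
```

Keeping fully invalidated blocks in their own zone is what lets the garbage collector erase them directly instead of performing expensive log/physical block merges.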

18.
In distributed storage systems, file access efficiency has an important impact on the real-time nature of information forensics. As a popular approach to improving file access efficiency, a prefetching model fetches data before it is needed according to the file access pattern, reducing I/O wait time and increasing system concurrency. However, a prefetching model must mine the degree of association between files to ensure prefetch accuracy. With massive numbers of small files, the sheer volume poses a challenge to the efficiency and accuracy of relevance mining. In this paper, we propose a prefetching model for massive files based on an LSTM neural network with a cache transaction strategy to improve file access efficiency. First, we propose a file clustering algorithm based on temporal and spatial locality to reduce computational complexity. Second, we define cache transactions according to files' co-occurrence in the cache, instead of time-offset distance methods, to extract file block features accurately. Last, we propose a file access prediction algorithm based on an LSTM neural network that predicts the files most likely to be accessed next. Experiments show that, compared with traditional LRU and plain grouping methods, the proposed model notably increases the cache hit rate and effectively reduces I/O wait time.
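The cache-transaction idea can be sketched as follows (a deliberate simplification, with a FIFO cache standing in for the real policy): a transaction is the set of files simultaneously resident in the cache, and a new transaction is recorded whenever the resident set changes, so co-occurrence rather than time offsets defines file association.

```python
def cache_transactions(events, capacity):
    """events: sequence of accessed file ids; simple FIFO cache of `capacity`.
    Return the resident sets observed after each admission (the transactions)."""
    resident, order, transactions = set(), [], []
    for f in events:
        if f not in resident:
            if len(resident) >= capacity:
                resident.discard(order.pop(0))   # FIFO eviction
            resident.add(f)
            order.append(f)
            transactions.append(frozenset(resident))
    return transactions

txns = cache_transactions(["a", "b", "a", "c", "d"], capacity=2)
print([sorted(t) for t in txns])
```

Files that repeatedly appear in the same transactions are the ones worth prefetching together; those co-occurrence sets are what the LSTM stage would be trained on.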

19.
Dose conversion coefficients for the lens of the human eye have been calculated for neutron exposure at energies from 1 × 10⁻⁹ to 20 MeV and several standard orientations: anterior-to-posterior, rotational and right lateral. MCNPX version 2.6.0, a Monte Carlo-based particle transport package, was used to determine the energy deposited in the lens of the eye. The human eyeball model was updated by partitioning the lens into sensitive and insensitive volumes, since the anterior portion (the sensitive volume) of the lens is more radiosensitive and prone to cataract formation. The updated eye model was used with the adult UF-ORNL mathematical phantom in the MCNPX transport calculations.

20.