1.
Understanding the cut and chip (CC) effect in rubber is important for successful product development of tires used in off-road or poor-road conditions and for other demanding rubber applications. This research describes a laboratory testing method for characterising the CC fracture behaviour of rubber: a device controls and records multiple applied loads and displacements during cyclic impact on the surface of a solid rubber specimen, mimicking and quantifying the CC damage experienced by tire tread compounds. To assess the capabilities of the instrument, three model compounds were studied, based on carbon-black-reinforced compounds of elastomers commonly used in tire treads: natural rubber (NR), styrene-butadiene rubber (SBR), and butadiene rubber (BR). These polymers have well-established CC tendencies in the field performance of tire treads, with NR exhibiting the best CC resistance, followed by SBR and finally BR. The same trend was found with the rubber impact testing approach, which allowed the CC behaviour to be quantified using a new physical parameter, the CC propensity (P). The relative ranking of CC resistance for the three compounds followed the fatigue crack growth resistances of the materials but was exactly opposite to the ranking of DIN abrasion resistance. This provides evidence that CC damage from impact by mm-scale asperities and abrasion of rubber against μm-scale asperities exhibit distinct characteristics in rubber.
2.
The ability to remember visual stimuli over a short delay period is limited by the small capacity of visual working memory (VWM). Here the authors investigate the role of learning in enhancing VWM. Participants saw 2 spatial arrays separated by a 1-s interval. The 2 arrays were identical except for 1 location. Participants had to detect the difference. Unknown to the participants, some spatial arrays would repeat once every dozen trials or so for up to 32 repetitions. Spatial VWM performance increased significantly when the same location changed across display repetitions, but not at all when different locations changed from one display repetition to another. The authors suggest that a major role of learning in VWM is to mediate which information gets retained, rather than to directly increase VWM capacity.
3.
To address the problem that the basic sliding-window algorithm used in content-defined chunking cannot bound the maximum chunk size, a data chunking algorithm based on byte-fingerprint extremum features is proposed. Starting from the previous chunk boundary, the algorithm constructs an interval of the maximum allowed chunk length; by defining a byte-fingerprint extremum-domain radius function F and exploiting the distribution of its values, the next chunk boundary is determined within the allowed maximum-length interval with probability 1. The algorithm overcomes the inability of the basic sliding-window and similar chunking algorithms to bound the maximum chunk length, and its time complexity is O(n).
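As a rough illustration of the bounded-boundary idea, here is a Python sketch. It is an assumption-laden reading of the abstract: a toy byte fingerprint stands in for the paper's extremum-domain radius function F, the window and length parameters are invented, and the scan is left unoptimised, so it does not reach the paper's O(n) bound.

```python
# Hypothetical sketch: chunking with a guaranteed maximum chunk length.
# Within [min_len, max_len] bytes of the previous boundary, cut at the
# position whose byte fingerprint is maximal, so a boundary always exists.

def fingerprint(window: bytes) -> int:
    # toy stand-in fingerprint, NOT the paper's function F
    h = 0
    for b in window:
        h = (h * 31 + b) & 0xFFFFFFFF
    return h

def chunk_boundaries(data: bytes, min_len: int = 2048,
                     max_len: int = 8192, w: int = 16) -> list:
    boundaries, start, n = [], 0, len(data)
    while start < n:
        end = min(start + max_len, n)
        if end - start <= min_len:        # short tail: emit as the last chunk
            boundaries.append(end)
            break
        best_pos, best_val = start + min_len, -1
        for i in range(start + min_len, end + 1):
            fp = fingerprint(data[max(start, i - w):i])
            if fp > best_val:             # extremum inside the allowed interval
                best_val, best_pos = fp, i
        boundaries.append(best_pos)       # cut point found with probability 1
        start = best_pos
    return boundaries
```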
4.
Chunking is the process of splitting a file into smaller pieces called chunks. In some applications, such as remote data compression, data synchronization, and data deduplication, chunking is important because it determines the duplicate detection performance of the system. Content-defined chunking (CDC) is a method that splits files into variable-length chunks, where the cut points are defined by internal features of the files. Unlike fixed-length chunks, variable-length chunks are more resistant to byte shifting, which increases the probability of finding duplicate chunks within a file and between files. However, CDC algorithms require additional computation to find the cut points, which can be computationally expensive for some applications. In our previous work (Widodo et al., 2016), the hash-based CDC algorithm took more processing time than any other process in the deduplication system. This paper proposes a high-throughput hash-less chunking method called Rapid Asymmetric Maximum (RAM). Instead of using hashes, RAM uses byte values to declare the cut points. The algorithm uses a fixed-size window and a variable-size window to find a maximum-valued byte, which becomes the cut point. The maximum-valued byte is included in the chunk and located at the chunk boundary. This configuration allows RAM to perform fewer comparisons while retaining the CDC property. We compared RAM with existing hash-based and hash-less deduplication systems. The experimental results show that the proposed algorithm achieves higher throughput and more bytes saved per second than other chunking algorithms.
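A minimal Python sketch of the asymmetric-maximum idea as described in the abstract follows; the window size is illustrative and the details are an interpretation, not the authors' reference implementation.

```python
def ram_chunk_boundaries(data: bytes, w: int = 4096) -> list:
    """Sketch of RAM-style chunking: a fixed-size window at the start of each
    chunk supplies a maximum byte value, and the first byte in the following
    variable-size region that reaches that maximum becomes the cut point,
    included in the chunk at its boundary."""
    boundaries, start, n = [], 0, len(data)
    while start < n:
        fixed_end = min(start + w, n)
        max_byte = max(data[start:fixed_end])   # maximum in the fixed window
        cut = n                                 # default: rest of file is one chunk
        for i in range(fixed_end, n):
            if data[i] >= max_byte:             # single byte comparison, no hashing
                cut = i + 1                     # boundary placed just after it
                break
        boundaries.append(cut)
        start = cut
    return boundaries
```

The per-byte work in the variable-size region is a single comparison, which is where the hash-less throughput advantage claimed above would come from.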
5.
Statistical Machine Translation Based on Multi-Layer Filtering
This paper proposes a multi-layer filtering algorithm that automatically extracts and aligns bilingual chunks from aligned Chinese-English sentence pairs. Chunks with different characteristics are handled at different layers. Unlike traditional approaches, the algorithm requires no tagging, syntactic parsing, or morphological analysis, and does not even require word segmentation of the Chinese sentences. Preliminary experiments show good performance: chunk extraction reaches F = 0.70 and chunk alignment reaches F = 0.80. Moreover, when the aligned bilingual chunks obtained by the algorithm are used in a statistical machine translation system and compared with a word-based system, the chunk-based system clearly improves translation quality, by roughly 10%.
6.
Chunking in Soar: The Anatomy of a General Learning Mechanism
In this article we describe an approach to the construction of a general learning mechanism based on chunking in Soar. Chunking is a learning mechanism that acquires rules from goal-based experience. Soar is a general problem-solving architecture with a rule-based memory. In previous work we have demonstrated how the combination of chunking and Soar could acquire search-control knowledge (strategy acquisition) and operator implementation rules in both search-based puzzle tasks and knowledge-based expert-systems tasks. In this work we examine the anatomy of chunking in Soar and provide a new demonstration of its learning capabilities involving the acquisition and use of macro-operators.
7.
Cascaded Chinese Chunk Recognition
Most statistical work on Chinese chunking follows the CoNLL-2000 English chunking approach: chunks are represented with a BIO model, and chunk recognition is cast as a multi-class sequence-labelling problem over words. To reduce classification complexity, a decomposed recognition scheme is adopted: chunk boundaries are identified first, and chunk types are determined afterwards. A cascaded chunk recognizer is built with conditional random fields (CRF), and the experiments use the Penn Chinese Treebank (CTB 5.1). Feature selection borrows from methods used for Chinese word segmentation. Under 5-fold cross-validation, the F1 score for boundary recognition is 95.05%, the accuracy of type recognition is 99.43%, and the overall F1 score is 93.58%. The cascaded method improves system performance and shortens the learner's training time.
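As a small illustration of the cascaded decomposition (label names follow the CoNLL convention and are not taken from the paper), the full BIO-plus-type labels can be split into a boundary layer for the first stage and a type layer for the second:

```python
# Hypothetical illustration: decompose BIO+type chunk labels into the two
# cascaded layers described above. Not the paper's code or feature set.

def split_labels(bio_type_labels):
    """['B-NP', 'I-NP', 'O', 'B-VP'] ->
       boundary layer ['B', 'I', 'O', 'B'] and type layer ['NP', 'NP', 'O', 'VP']"""
    boundary, types = [], []
    for label in bio_type_labels:
        if label == 'O':
            boundary.append('O')
            types.append('O')
        else:
            b, t = label.split('-', 1)
            boundary.append(b)     # first stage predicts only B/I/O boundaries
            types.append(t)        # second stage assigns a chunk type
    return boundary, types

print(split_labels(['B-NP', 'I-NP', 'O', 'B-VP']))
```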
8.
Research on an Object-Based Deduplication Method for OpenXML Compound Files
Most existing deduplication techniques are based on variable-length content-defined chunking (CDC) and ignore the content characteristics of different file types. CDC determines chunk boundaries in an essentially random fashion and applies the same scheme to all file types; this has proven well suited to text and simple content, but not to compound files made up of unstructured data. This paper analyses the properties of compound files under the OpenXML standard, presents a basic method for object extraction, and proposes an algorithm that determines the deduplication granularity from the distribution and structure of objects. The goal is to detect, for compound files composed of unstructured data, identical objects across different files and at different positions within the same file, and to deduplicate effectively even when the physical layout of a file changes. Simulation experiments on typical unstructured data sets show that, overall, object-based deduplication improves the deduplication ratio for unstructured data by about 10% compared with the CDC approach.
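A minimal sketch of object extraction, under the assumption that the "objects" correspond to the parts stored inside the OpenXML ZIP container; the paper's granularity algorithm based on object distribution and structure is more involved than this.

```python
import hashlib
import zipfile
from collections import defaultdict

def index_openxml_objects(paths):
    """Map the SHA-1 digest of each part (object) in OpenXML containers such
    as .docx/.xlsx/.pptx to every (file, part name) where it occurs, so that
    identical objects across files and positions can be detected."""
    index = defaultdict(list)
    for path in paths:
        with zipfile.ZipFile(path) as zf:
            for name in zf.namelist():
                digest = hashlib.sha1(zf.read(name)).hexdigest()
                index[digest].append((path, name))
    return index

# Example: duplicate objects are the digests seen in more than one place.
# dupes = {h: locs for h, locs in index_openxml_objects(files).items() if len(locs) > 1}
```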
9.
Influence of Climatic Variation on the Quality of Granulated Urea
吴鸿欣 《大氮肥》1999,22(5):321-322
Examines the influence of climatic variation on the quality of granulated urea and points out the main measures for improving urea quality.
10.
A method for computing the similarity between English sentences is proposed. Based on the information a sentence conveys (the objects it describes, and the attributes and actions of those objects), the two sentences to be compared are first chunk-parsed and these three kinds of information are extracted. Semantic vectors are then used to compute the similarity of the two sentences along each of the three aspects, and the three scores are finally combined into an overall sentence similarity, with the optimal combination parameters obtained by training. Experiments show that, compared with existing sentence-similarity methods, the proposed method corresponds more closely to how humans judge sentence similarity, exhibits higher accuracy, and reaches a good level of overall performance.
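A minimal sketch of the final combination step, assuming each aspect (objects, attributes, actions) has already been mapped to a semantic vector; the weights shown are placeholders, whereas the paper learns them from training data.

```python
import math

def cosine(u, v):
    # cosine similarity between two equal-length vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def sentence_similarity(aspect_vecs1, aspect_vecs2, weights=(0.4, 0.3, 0.3)):
    """Combine per-aspect similarities (objects, attributes, actions) into an
    overall score; the weights here are illustrative, not the trained values."""
    aspect_sims = [cosine(u, v) for u, v in zip(aspect_vecs1, aspect_vecs2)]
    return sum(w * s for w, s in zip(weights, aspect_sims))
```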