11.
The normalization data reduction (NDR) technique is an analytical methodology for characterizing the upper-shelf fracture toughness of steels in the ductile regime, both in terms of critical toughness (JIc) and resistance to ductile crack extension (the J-R curve). It is an alternative to the more commonly used multi-specimen and single-specimen (unloading compliance and potential drop) techniques. Finite element analyses of a growing crack are executed to evaluate the performance of the technique; this approach has the advantage of removing the large uncertainties inherent in experimental results. The results demonstrate the precision of the method.
12.
In this paper, we present the application of fuzzy logic analysis to a Taguchi orthogonal experiment for developing a robust, highly efficient model of the multiple performance characteristics (MPCs) of the plasma transferred arc welding (PTAW) hardfacing process. The approach eliminates uncertain information and is simple, effective, and efficient. A fuzzy logic system is used to simultaneously investigate the relationships between the various MPCs and to determine the efficiency of each trial of the Taguchi experiments. From the fuzzy inference process, we determine the optimal factor-level settings for the MPCs. In addition, analysis of variance (ANOVA) is used to identify the significant factors, which coincide with the findings of the fuzzy logic analysis and are found to account for about 79% of the total variance. Furthermore, a confirmation experiment of the optimal process verifies that both the individual performance characteristics and the MPCs are successfully optimized and satisfy the desired levels.
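As a sketch of the Taguchi side of this analysis, the signal-to-noise ratio for a larger-is-better response (a reasonable choice for a hardfacing quality measure such as hardness) can be computed as follows; the measurement values below are hypothetical and not taken from the paper:

```python
import math

def sn_larger_is_better(values):
    """Taguchi signal-to-noise ratio for a larger-is-better response:
    S/N = -10 * log10(mean(1 / y^2)), in decibels."""
    mean_inv_sq = sum(1.0 / (y * y) for y in values) / len(values)
    return -10.0 * math.log10(mean_inv_sq)

# Hypothetical hardness readings from three repeats of one orthogonal-array
# trial; the actual responses measured in the paper are not reproduced here.
trial = [52.0, 54.0, 53.0]
sn = sn_larger_is_better(trial)
```

In a fuzzy-Taguchi analysis such as this one, the S/N ratios of the individual characteristics would then be fed into the fuzzy inference system to produce a single multi-performance grade per trial.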
13.
This paper introduces bootstrap error estimation for automatic tuning of parameters in combined networks, applied as front-end preprocessors for a speech recognition system based on hidden Markov models. The method is evaluated on a large-vocabulary (10 000 words) continuous speech recognition task. Bootstrap estimates of minimum mean squared error allow selection of speaker normalization models improving recognition performance. The procedure allows a flexible strategy for dealing with inter-speaker variability without requiring an additional validation set. Recognition results are compared for linear, generalized radial basis functions and multi-layer perceptron network architectures.
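A minimal sketch of bootstrap MSE estimation for model selection, with hypothetical predictions; the paper's networks and speech features are not reproduced here. The model with the lower bootstrap MSE estimate would be the one selected:

```python
import random

def bootstrap_mse(y_true, y_pred, n_boot=1000, seed=0):
    """Bootstrap the mean squared error: resample (true, predicted) pairs
    with replacement and return the mean of the resampled MSEs."""
    rng = random.Random(seed)
    pairs = list(zip(y_true, y_pred))
    n = len(pairs)
    estimates = []
    for _ in range(n_boot):
        sample = [pairs[rng.randrange(n)] for _ in range(n)]
        estimates.append(sum((t - p) ** 2 for t, p in sample) / n)
    return sum(estimates) / n_boot

# Hypothetical targets and predictions from two candidate models.
mse_a = bootstrap_mse([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
mse_b = bootstrap_mse([1.0, 2.0, 3.0, 4.0], [1.5, 2.5, 2.5, 4.5])
```

Here model A would be selected since `mse_a < mse_b`; the same comparison works without holding out a separate validation set, which is the practical appeal noted in the abstract.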
14.
Centroid-based categorization is one of the most popular algorithms in text classification. In this approach, normalization is an important factor in improving the performance of a centroid-based classifier when the documents in a collection have quite different sizes and/or the numbers of documents per class are unbalanced. In the past, most researchers applied document normalization, e.g., document-length normalization, while some considered a simple kind of class normalization, so-called class-length normalization, to address the imbalance problem. However, there has been no intensive work clarifying how these normalizations affect classification performance or whether other useful normalizations exist. The purpose of this paper is threefold: (1) to investigate the effectiveness of document- and class-length normalizations on several data sets, (2) to evaluate a number of commonly used normalization functions, and (3) to introduce a new type of class normalization, called term-length normalization, which exploits the term distribution among the documents in a class. The experimental results show that a classifier with the weight-merge-normalize approach (class-length normalization) performs better than one with the weight-normalize-merge approach (document-length normalization) on data sets with unbalanced numbers of documents per class, and is quite competitive on those with balanced numbers. Among normalization functions, normalization based on term weighting performs best on average. Term-length normalization is useful for improving classification accuracy, and the combination of term- and class-length normalizations outperforms pure class-length normalization, pure term-length normalization, and no normalization by margins of 4.29%, 11.50%, and 30.09%, respectively.
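The two merge orders can be sketched with toy term-frequency vectors; `centroid_merge_then_normalize` corresponds to the weight-merge-normalize (class-length) approach and `centroid_normalize_then_merge` to weight-normalize-merge (document-length). The vectors and function names are illustrative, not from the paper:

```python
def l2_normalize(v):
    norm = sum(x * x for x in v) ** 0.5
    return [x / norm for x in v] if norm else list(v)

def centroid_merge_then_normalize(docs):
    """Class-length normalization: sum the raw document vectors first,
    then normalize the merged class vector once."""
    merged = [sum(col) for col in zip(*docs)]
    return l2_normalize(merged)

def centroid_normalize_then_merge(docs):
    """Document-length normalization: normalize each document first,
    then merge, so long documents cannot dominate the centroid."""
    normed = [l2_normalize(d) for d in docs]
    return l2_normalize([sum(col) for col in zip(*normed)])

# One long document and two short ones over a two-term vocabulary.
docs = [[10.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
c_class = centroid_merge_then_normalize(docs)
c_doc = centroid_normalize_then_merge(docs)
```

With one long document and two short ones, the first centroid is dominated by the long document's term, while the second weights all three documents equally.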
15.
In subarctic regions the ubiquitous presence of rock encrusting lichens compromises the ability to map the reflectance signatures of minerals from imaging spectrometer data. The use of lichen as an endmember in spectral mixture analysis (SMA) may overcome these limitations. Because lichens rarely completely occupy the Instantaneous Field of View (IFOV), it is difficult to define a lichen endmember from an image using visual or automated endmember extraction tools. Spectral similarity of various crustose/foliose lichen species in the short wave infrared (SWIR) suggests that spectral unmixing of rock and lichens may be successfully accomplished using a single lichen endmember for this spectral range. We report the use of a spectral normalization method to minimize differences in SWIR reflectance between five lichen species (U. torrefacta, R. bolanderi, R. geminatum, R. geographicum, A. cinerea). When the normalization is applied to reflectance spectra from 2000-2400 nm acquired for a lichen encrusted quartzite rock sample we show that only a single lichen endmember is required to account for the lichen contribution in the observed mixtures. In contrast, two such endmembers are required when the normalization is not applied to the reflectance data. We illustrate this point using examples where endmembers are extracted manually and automatically, and compare the SMA results against abundances estimated from digital photography. For both the reflectance and normalized reflectance data, SMA results correlate well (R2>0.9) with abundances estimated from digital photography. The use of normalized reflectance implies that any field/laboratory lichen spectrum can be selected as the lichen endmember for SMA of airborne/spaceborne imagery.
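A minimal sketch of two-endmember linear unmixing by least squares, using hypothetical four-band spectra; the paper's actual normalization method and real SWIR data are not reproduced, and `mean_normalize` stands in as one simple way to suppress overall brightness differences between spectra:

```python
def mean_normalize(spectrum):
    """Divide a spectrum by its mean reflectance, suppressing overall
    brightness differences (a stand-in for the paper's normalization)."""
    m = sum(spectrum) / len(spectrum)
    return [v / m for v in spectrum]

def unmix_two_endmembers(mixed, e1, e2):
    """Least-squares abundances (a1, a2) minimizing
    ||mixed - a1*e1 - a2*e2|| via the 2x2 normal equations."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    a11, a12, a22 = dot(e1, e1), dot(e1, e2), dot(e2, e2)
    b1, b2 = dot(e1, mixed), dot(e2, mixed)
    det = a11 * a22 - a12 * a12
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det

# Hypothetical four-band SWIR reflectance spectra for rock and lichen.
rock = [0.40, 0.35, 0.30, 0.25]
lichen = [0.10, 0.12, 0.11, 0.10]
mixed = [0.7 * r + 0.3 * l for r, l in zip(rock, lichen)]
a_rock, a_lichen = unmix_two_endmembers(mixed, rock, lichen)
```

Because the synthetic mixture is exactly linear, the least-squares solution recovers the 0.7/0.3 abundances; real pixel spectra would add noise and nonlinear effects.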
16.
A Fast and Effective Algorithm for Printed Character Recognition
To enable fast recognition of printed characters on low-cost hardware, a fast printed-character recognition algorithm based on multi-stage classification is proposed. The algorithm makes sensible improvements over conventional methods at every stage, from preprocessing and feature extraction to pattern matching. First, characters are normalized to a 36×36 grid rather than the traditional 48×48, which effectively reduces both the computational load and the dictionary size. Second, an improved coarse peripheral feature with two-level segmentation is adopted to improve feature stability. Finally, different decision criteria are applied at the different classification stages, including absolute-value (city-block) distance, Euclidean distance, and a similarity criterion, to meet differing speed and accuracy requirements. Experiments on 7000 samples of the Level-1 Chinese character set show a practical recognition rate of 95%, with a top-5 cumulative recognition rate of 98%, laying a solid theoretical foundation for the development of an "electronic reading pen".
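The three decision criteria mentioned above can be sketched as follows; the feature vectors are hypothetical stand-ins for coarse peripheral features:

```python
import math

def manhattan(u, v):
    """Absolute-value (city-block) distance: the cheapest criterion,
    suited to the coarse early classification stages."""
    return sum(abs(a - b) for a, b in zip(u, v))

def euclidean(u, v):
    """Euclidean distance: costlier but more discriminative."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def cosine_similarity(u, v):
    """Similarity criterion: higher means a closer match."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical feature vectors for an input character and two templates.
sample = [3.0, 1.0, 0.0, 2.0]
tmpl_a = [3.0, 1.0, 1.0, 2.0]
tmpl_b = [0.0, 4.0, 4.0, 0.0]
```

In a multi-stage scheme like the one described, the cheap city-block distance would prune most dictionary entries before the more expensive criteria decide among the survivors.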
18.
A moisture sensor for measuring the moisture content of solid materials has been developed. Through theoretical analysis of a large body of experimental data, a normalized linear mathematical model of moisture content was obtained for various solid materials, and the model is implemented in software on a single-chip microcontroller. The sensor's measuring range is 4%–50%, with an accuracy better than 0.5%.
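A minimal sketch of such a normalized linear model; the calibration constants below are hypothetical, since the abstract gives only the measuring range, and the per-material coefficients are assumptions:

```python
def moisture_percent(raw, raw_dry, raw_wet, m_min=4.0, m_max=50.0):
    """Map a raw sensor reading to moisture content via a normalized
    linear model: normalize the reading to [0, 1] between per-material
    dry and saturated calibration readings, then map linearly onto the
    sensor's 4%-50% measuring range."""
    x = (raw - raw_dry) / (raw_wet - raw_dry)
    x = min(max(x, 0.0), 1.0)  # clamp to the sensor's valid range
    return m_min + x * (m_max - m_min)

# Hypothetical calibration: ADC reads 200 when dry, 1000 when saturated.
m = moisture_percent(600, raw_dry=200, raw_wet=1000)
```

Normalizing the raw reading first is what lets a single linear mapping serve different materials: only the two calibration constants change per material.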
19.
The case is made for normalization of discrete planar objects prior to comparison with test objects and an expression for normalization is derived using a Euclidean distance function based on the underlying continuous boundaries of the objects and their prototypes. The results are given in both the spatial and the frequency domain; an analysis of errors due to the quantization introduced by using a discrete grid is also given.
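As an illustration only (not the paper's exact Euclidean-distance formulation), one common boundary normalization translates the object to its centroid and scales it to unit root-mean-square radius, so objects can be compared independently of position and size:

```python
import math

def normalize_boundary(points):
    """Translate a sampled closed boundary to its centroid and scale it
    so the RMS distance of the points from the centroid equals 1."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    centered = [(x - cx, y - cy) for x, y in points]
    rms = math.sqrt(sum(x * x + y * y for x, y in centered) / n)
    return [(x / rms, y / rms) for x, y in centered]

# A square sampled at its four corners.
square = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
norm_sq = normalize_boundary(square)
```

On a discrete grid, the sampled boundary only approximates the continuous one, which is exactly the quantization error the paper analyzes.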
20.
Modeling of complex dynamic backgrounds is studied, and a dynamic background modeling algorithm based on cluster analysis of pixel similarity is proposed. The algorithm first establishes a pixel-similarity measure that fuses luminance and chrominance information; this similarity criterion is then used to cluster the time series of values at each pixel in order to build the dynamic background model. Finally, the background model is evaluated in comparative foreground-detection experiments across multiple scenes, along with measurements of memory consumption and per-frame processing time. Experimental results show that the algorithm achieves high foreground-detection accuracy with low time and space complexity.
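A minimal sketch of clustering a single pixel's time series by similarity and taking the largest cluster as the background estimate; the threshold and gray values are hypothetical, and the paper's fused luminance-chrominance similarity measure is simplified here to an absolute intensity difference:

```python
def background_value(series, tol=10.0):
    """Group a pixel's intensity time series into clusters of mutually
    similar values (within tol of the running cluster mean) and return
    the mean of the largest cluster as the background estimate."""
    clusters = []  # each cluster is [sum_of_values, count]
    for v in series:
        for c in clusters:
            if abs(v - c[0] / c[1]) <= tol:
                c[0] += v
                c[1] += 1
                break
        else:
            clusters.append([v, 1.0])
    best = max(clusters, key=lambda c: c[1])
    return best[0] / best[1]

# Hypothetical gray values: a stable background near 100 interrupted by
# a few foreground passes near 200.
series = [100.0, 102.0, 99.0, 201.0, 100.0, 98.0, 199.0, 101.0]
bg = background_value(series)
```

Because foreground objects pass through only briefly, their values form a minority cluster and the dominant cluster's mean tracks the true background, which is the intuition behind this class of methods.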