Search results: 294 articles in total (subject: Industrial Technology); 203 subscription full-text, 49 free, 42 domestic free.
By year: 2024 (1), 2023 (8), 2022 (10), 2021 (13), 2020 (12), 2019 (7), 2018 (4), 2017 (15), 2016 (11), 2015 (12), 2014 (21), 2013 (24), 2012 (26), 2011 (27), 2010 (22), 2009 (15), 2008 (11), 2007 (10), 2006 (3), 2005 (10), 2004 (7), 2003 (3), 2002 (2), 2001 (2), 2000 (2), 1999 (4), 1998 (2), 1997 (1), 1996 (3), 1994 (2), 1992 (1), 1991 (1), 1985 (1), 1959 (1).
1.
BOOTSTRAP CONTROL CHARTS   (Total citations: 1; self-citations: 0; citations by others: 1)
2.
The processing of images obtained from satellites often involves highly repetitive calculations on very large amounts of data. This processing is extremely time consuming when the calculations are performed on sequential machines. Parallel computers are well suited to handling computationally expensive operations such as higher-order interpolations on large data sets. This paper describes work undertaken to develop parallel implementations of a set of resampling procedures on an Alliant VFX/4. Each resampling procedure implemented has been optimised in three stages. First, the algorithm has been restructured so that two-dimensional resampling is performed as two one-dimensional resampling operations. Second, each procedure has been reprogrammed to exploit the autoparallelisation provided by the FX/Fortran compiler. Third, a data-dependency analysis of each procedure has been performed to achieve full optimisation; each procedure has been restructured where appropriate to circumvent data dependencies that inhibit vectorisation and concurrency. The nature and extent of the code optimisation achieved for each procedure is presented in this paper. The original code for the most computationally expensive procedure, as targeted at a sequential machine, had an execution time of 4900 seconds on the Alliant VFX/4 when compiled with standard compiler optimisation options. Following the algorithmic redesign and reprogramming of stages 1 and 2, the execution time was reduced to 248 seconds. Restructuring the code after the data-dependency analysis of stage 3, to remove data dependencies and allow concurrency and vectorisation, further reduced the execution time to 162 seconds. The consequence of this work is that higher-order resampling methods which had not previously been practical are now routinely performed on the Alliant VFX/4 at the University of Dundee.
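The two-pass decomposition in stage 1 is the standard separable formulation of image resampling: a 2-D interpolation is replaced by a 1-D resampling of every row followed by a 1-D resampling of every column. The paper's Fortran code is not reproduced here; the sketch below is a minimal NumPy illustration of the idea using linear interpolation, with the function name and array shapes as assumptions.

```python
import numpy as np

def resample_separable(image, new_shape):
    """Resize `image` (H, W) to `new_shape` with two 1-D linear resampling passes.

    Pass 1 resamples every row to the new width; pass 2 resamples every column of
    the intermediate result to the new height. Linear interpolation stands in for
    the higher-order kernels discussed in the paper.
    """
    h, w = image.shape
    new_h, new_w = new_shape

    # Pass 1: resample along rows (x direction).
    x_old = np.linspace(0.0, 1.0, w)
    x_new = np.linspace(0.0, 1.0, new_w)
    rows = np.stack([np.interp(x_new, x_old, image[i]) for i in range(h)])

    # Pass 2: resample along columns (y direction) of the intermediate image.
    y_old = np.linspace(0.0, 1.0, h)
    y_new = np.linspace(0.0, 1.0, new_h)
    return np.stack([np.interp(y_new, y_old, rows[:, j]) for j in range(new_w)], axis=1)
```

Because each pass is an independent 1-D operation repeated over many rows or columns, the loops vectorise and parallelise readily, which is what the compiler-level work in stages 2 and 3 exploits.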
3.
MODIS data offer not only a high revisit frequency and spectral resolution but also low cost and wide coverage. Owing to the curvature of the Earth, most MODIS L1B data exhibit an overlap artefact known as the bowtie effect, which occurs mainly near the edges of the image swath and limits the further analysis and application of MODIS remote-sensing data. To address this geometric distortion, a bowtie-removal algorithm that does not rely on the traditional ephemeris is proposed: the correlation-coefficient method is used to determine the number of repeated rows in each scan swath, and the image is then resampled with a method matched to the resolution of the MODIS L1B product. Comparative experiments with other bowtie-removal algorithms show that the proposed algorithm not only removes the bowtie effect effectively but also runs quickly, giving it considerable value in engineering applications.
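The abstract says the number of repeated rows per scan swath is found with a correlation-coefficient method but gives no formula. Below is a minimal sketch of one plausible reading, offered purely as an assumption: rows at the top of each swath that correlate strongly with the last row of the previous swath are counted as overlap. The swath height, correlation threshold, and function name are all illustrative.

```python
import numpy as np

def estimate_overlap(band, swath_height=10, max_overlap=4, corr_thresh=0.95):
    """Estimate how many leading rows of each scan swath duplicate the previous swath.

    A duplicated row should correlate strongly with the previous swath's last row;
    consecutive rows above `corr_thresh` are counted, and the median count across
    swaths is returned as the overlap estimate.
    """
    n_swaths = band.shape[0] // swath_height
    counts = []
    for s in range(1, n_swaths):
        top = s * swath_height                  # first row index of swath s
        ref = band[top - 1].astype(float)       # last row of the previous swath
        k = 0
        while k < max_overlap:
            r = np.corrcoef(ref, band[top + k].astype(float))[0, 1]
            if r < corr_thresh:
                break
            k += 1
        counts.append(k)
    return int(np.round(np.median(counts)))     # consensus overlap across swaths
```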
4.
Research on an Improved Residual Resampling Algorithm   (Total citations: 1; self-citations: 0; citations by others: 1)
The particle filter provides good estimation performance for nonlinear, non-Gaussian systems but suffers from severe degeneracy. Resampling was introduced to alleviate this degeneracy effectively, yet resampling algorithms have problems of their own. This paper studies residual resampling and residual systematic resampling and proposes an improved algorithm. The improved algorithm avoids the resampling of residual particles required by residual resampling, reducing the amount of computation and improving runtime efficiency. Simulation results show that the improved algorithm runs markedly faster than both residual resampling and residual systematic resampling, and this advantage becomes more pronounced as the number of particles increases.
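The abstract compares against standard residual resampling without restating it. For reference, a minimal NumPy sketch of that baseline is given below; the paper's improved variant, which avoids the separate draw over the residual particles, is not reproduced, and the function name is illustrative.

```python
import numpy as np

def residual_resample(weights, rng=None):
    """Standard residual resampling: deterministic copies of each particle plus a
    multinomial draw over the leftover (residual) weight mass."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(weights, float)
    n = len(w)

    counts = np.floor(n * w).astype(int)         # guaranteed copies per particle
    indices = np.repeat(np.arange(n), counts)

    n_rest = n - counts.sum()                    # particles still to be drawn
    if n_rest > 0:
        residual = n * w - counts                # leftover weight mass
        residual /= residual.sum()
        extra = rng.choice(n, size=n_rest, p=residual)
        indices = np.concatenate([indices, extra])
    return indices                               # new particle indices, length n
```

The multinomial draw over the residuals is the step the improved algorithm is reported to avoid, which is where its runtime advantage comes from.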
5.
To address the shortcomings of track-before-detect (TBD) algorithms based on sequential importance resampling (SIR) particle filtering, a resampling-smoothing (RS) algorithm and a simplified RS algorithm are proposed and applied to TBD for conventional radar, yielding a TBD algorithm based on sequential importance resampling with smoothing (SIRS-TBD). Simulation results show that the RS algorithm effectively improves particle diversity, and that applying SIRS-TBD to radar enables the detection and tracking of low-observable targets.
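The RS step itself is not specified in the abstract. A common way to restore particle diversity after resampling, sketched below purely as an assumption about what such a smoothing step could look like, is to perturb the duplicated particles with a small amount of noise (often called roughening); the noise scale and names here are illustrative.

```python
import numpy as np

def resample_and_smooth(particles, weights, rng=None, jitter=0.05):
    """Systematic resampling followed by a simple smoothing (roughening) step.

    After resampling, duplicated particles are jittered with zero-mean Gaussian
    noise scaled by the per-dimension spread of the cloud, counteracting the loss
    of diversity that plain SIR resampling causes.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(weights, float)
    w = w / w.sum()
    n = len(w)

    # Systematic resampling of particle indices.
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(w), positions), n - 1)
    resampled = particles[idx]

    # Smoothing: add small noise proportional to the particle spread.
    scale = jitter * resampled.std(axis=0, keepdims=True)
    return resampled + rng.normal(0.0, 1.0, resampled.shape) * scale
```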
6.
To address the poor adaptability of visual tracking algorithms to illumination changes, a multi-feature, dynamically extracted tracking algorithm that is robust to illumination variation is proposed. The algorithm uses feature-extraction methods that efficiently suppress illumination effects: the colour sub-model is obtained with a fuzzy histogram, the edge sub-model is built on top of homomorphic filtering, and the motion sub-model is extracted with an improved three-frame differencing method. A newly defined feature-fusion model dynamically combines these complementary observation sub-models, improving the accuracy of the observation model, and the reliability of each feature is quantified so that tracking remains stable. An improved particle-resampling method further improves tracking accuracy. Experimental results show that the algorithm effectively avoids the influence of illumination changes on tracking and exhibits good robustness.
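Of the three sub-models, the motion cue is the most self-contained: three-frame differencing takes the pixel-wise AND of two consecutive frame differences, which suppresses the ghosting that plain two-frame differencing leaves behind. The sketch below shows the basic (unimproved) version with an assumed threshold; the paper's improvement to it is not described in the abstract.

```python
import numpy as np

def three_frame_difference(prev_f, curr_f, next_f, thresh=25):
    """Binary motion mask from three consecutive greyscale frames (uint8 arrays)."""
    d1 = np.abs(curr_f.astype(np.int16) - prev_f.astype(np.int16))
    d2 = np.abs(next_f.astype(np.int16) - curr_f.astype(np.int16))
    motion = (d1 > thresh) & (d2 > thresh)   # AND of the two differences
    return motion.astype(np.uint8) * 255     # 255 where motion is detected
```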
7.
For particle-filter localisation of an indoor companion robot, four resampling algorithms are studied and compared in simulation: multinomial resampling, residual resampling, stratified resampling and systematic resampling. The experiments show that residual resampling strikes a balance between particle convergence speed and particle impoverishment and outperforms the other three algorithms. Based on the simulation results, experiments were then carried out on the HHR-0303 service robot, confirming that a particle filter using residual resampling, with sonar measurements combined with odometry, achieves the localisation goal.
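For reference, minimal NumPy sketches of three of the four resampling schemes compared here are given below (residual resampling combines deterministic copies with a small random draw, as sketched under item 4 above). Each function returns the indices of the particles to keep; weights are assumed normalised, and names are illustrative.

```python
import numpy as np

def multinomial_resample(weights, rng):
    """Independent draws proportional to the weights."""
    n = len(weights)
    return rng.choice(n, size=n, p=weights)

def stratified_resample(weights, rng):
    """One uniform draw inside each of n equal strata of [0, 1)."""
    n = len(weights)
    positions = (rng.random(n) + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)

def systematic_resample(weights, rng):
    """A single random offset shared by all n equally spaced positions."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)
```

Stratified and systematic resampling have lower variance than multinomial resampling; the paper reports that residual resampling offered the best trade-off between convergence speed and particle impoverishment in this localisation setting.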
8.
Image resampling detection, which determines whether an image has undergone a resampling operation, is an important task in image forensics. Most existing deep-learning-based detectors target specific resampling factors and rarely consider the case where the resampling factor is completely random. Based on the interpolation principles involved in resampling, this paper designs a set of efficient, complementary image pre-processing structures to suppress interference from image content, and uses deformable convolution layers and an efficient channel attention (ECA) mechanism to extract and select resampling features, effectively improving the ability of the convolutional neural network to extract and integrate resampling features across different resampling factors. Experimental results show that the method detects both uncompressed resampled images and JPEG-compressed resampled images effectively, with prediction accuracy substantially higher than existing methods.
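The ECA mechanism mentioned above is a published, lightweight channel-attention module (Wang et al., 2020): channel descriptors from global average pooling are passed through a 1-D convolution whose kernel size is derived from the channel count, then through a sigmoid gate. The PyTorch sketch below shows that standard module only, not the paper's full network; the deformable-convolution layers and pre-processing structures are omitted.

```python
import math
import torch
import torch.nn as nn

class ECALayer(nn.Module):
    """Efficient Channel Attention: per-channel gating from a 1-D convolution."""

    def __init__(self, channels, gamma=2, b=1):
        super().__init__()
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1                          # odd kernel size
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                                  # x: (B, C, H, W)
        y = self.pool(x)                                   # (B, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(-1, -2))     # (B, 1, C)
        y = torch.sigmoid(y.transpose(-1, -2).unsqueeze(-1))  # (B, C, 1, 1)
        return x * y                                       # reweight channels
```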
9.
The performance prediction models in the Pavement-ME design software are nationally calibrated using in-service pavement material properties, pavement structure, climate and truck loadings, and performance data obtained from the Long-Term Pavement Performance programme. The nationally calibrated models may not perform well if the inputs and performance data used to calibrate them do not represent local design and construction practices. Therefore, before implementing the new M-E design procedure, each state highway agency (SHA) should evaluate how well the nationally calibrated performance models predict the measured field performance. Local calibration of the Pavement-ME performance models is recommended to improve their prediction capabilities so that they reflect the unique local conditions and design practices. During the local calibration process, traditional calibration techniques (split sampling) may not provide adequate results when only a limited number of pavement sections is available. Consequently, there is a need for statistical and resampling methodologies that are more efficient and robust for model calibration, given the data-related challenges encountered by SHAs. The main objectives of this paper are to demonstrate the local calibration of rigid pavement performance models and to compare calibration results based on different resampling techniques. The bootstrap is a non-parametric, robust resampling technique for estimating the standard errors and confidence intervals of a statistic; its main advantage is that model parameters can be estimated without making distributional assumptions. This paper presents the use of bootstrapping and jackknifing to locally calibrate the transverse cracking and IRI performance models for newly constructed and rehabilitated rigid pavements. The calibration results show that the standard error of estimate and the bias are lower than with the traditional sampling methods. In addition, the validation statistics are similar to those of the locally calibrated model, especially for the IRI model, which indicates the robustness of the local model coefficients.
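As a concrete illustration of the bootstrap idea in this setting, the sketch below refits a single multiplicative calibration factor on resampled sets of pavement sections and reports its bootstrap standard error and percentile confidence interval. The one-parameter model, variable names and 95% level are assumptions for illustration; the actual transverse cracking and IRI models have their own coefficient sets.

```python
import numpy as np

def bootstrap_calibration(predicted, measured, n_boot=2000, seed=0):
    """Bootstrap a single multiplicative calibration factor and its 95% CI.

    `predicted`/`measured` are per-section performance values (hypothetical
    stand-ins for Pavement-ME predictions and field measurements). The closed-form
    least-squares factor is refit on each bootstrap resample of the sections.
    """
    rng = np.random.default_rng(seed)
    predicted = np.asarray(predicted, float)
    measured = np.asarray(measured, float)
    n = len(predicted)

    def fit(idx):
        p, m = predicted[idx], measured[idx]
        return (p @ m) / (p @ p)               # least-squares scale factor

    estimate = fit(np.arange(n))
    boot = np.array([fit(rng.integers(0, n, n)) for _ in range(n_boot)])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    return estimate, boot.std(ddof=1), (lo, hi)  # estimate, std. error, 95% CI
```

Jackknifing follows the same pattern but refits the factor n times, leaving one section out each time, instead of drawing resamples with replacement.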
10.
Combining semi-supervised learning and ensemble learning, a SemiBoost-CR classification model based on confidence-driven resampling is proposed. A confidence measure computed from both labelled and unlabelled neighbours is defined, and samples are resampled according to this confidence: not only a proportion of high-confidence unlabelled samples but also a proportion of low-confidence unlabelled samples are selected and added to the labelled training set under different strategies. The high-confidence unlabelled samples are introduced to improve the base classifier …
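The abstract gives the selection rule only at a high level: a proportion of high-confidence and a proportion of low-confidence unlabelled samples are picked and added to the labelled set under different strategies. The sketch below shows just that selection step over a precomputed confidence vector; the fractions and names are placeholders, and the confidence formula based on labelled and unlabelled neighbours is not reproduced here.

```python
import numpy as np

def split_by_confidence(confidence, high_frac=0.10, low_frac=0.05):
    """Return indices of the highest- and lowest-confidence unlabelled samples.

    `confidence` is a 1-D array of per-sample confidence scores; the two index
    sets can then be added to the labelled training set under different strategies.
    """
    order = np.argsort(confidence)                  # ascending confidence
    n = len(confidence)
    low = order[: max(1, int(low_frac * n))]        # low-confidence subset
    high = order[-max(1, int(high_frac * n)):]      # high-confidence subset
    return high, low
```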