Sort order: 6,099 results found (search time: 15 ms)
81.
L1-norm multiple kernel learning (MKL) produces sparse kernel-weight solutions that may discard useful information and degrade generalisation performance, while Lp-norm MKL produces non-sparse solutions that carry much redundant information and are sensitive to noise. To address both problems, a generalised sparse MKL method is proposed. The algorithm is an MKL method with an elastic-net-style regulariser mixing the L1 norm and the Lp norm (p > 1); it not only adjusts sparsity flexibly but also encourages a group effect among the kernel weights, and L1-norm and Lp-norm MKL can be regarded as special cases of it. Because the mixed constraint introduced is nonlinear, it is approximated by a second-order Taylor expansion, and the resulting optimisation problem is solved by semi-infinite programming. Experimental results show that the improved method achieves good classification performance while adjusting sparsity dynamically and also supports the group effect, verifying that it is effective and feasible.
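The mixed L1/Lp regulariser described above can be sketched numerically. A minimal illustration, assuming a simple convex combination of the two norms — the function name, the weight `alpha` and the exact blend are assumptions, not the paper's formulation:

```python
import numpy as np

def mixed_norm_penalty(d, p=2.0, alpha=0.5):
    """Blend of the L1 and Lp norms of the kernel weights d.

    alpha=1 recovers a pure L1 (sparse) penalty, alpha=0 a pure Lp
    (non-sparse) penalty; intermediate values trade sparsity against the
    group effect.  Illustrative form only, not the paper's regulariser.
    """
    d = np.abs(np.asarray(d, dtype=float))
    l1 = d.sum()                       # L1 norm: promotes sparsity
    lp = (d ** p).sum() ** (1.0 / p)   # Lp norm (p > 1): promotes grouping
    return alpha * l1 + (1.0 - alpha) * lp
```

Sweeping `alpha` from 0 to 1 moves the learned kernel weights continuously from non-sparse to sparse, which is the "flexible sparsity adjustment" the abstract refers to.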
82.
Timely and accurate fault diagnosis is important for improving the dependability of industrial processes. This study proposes fault diagnosis of nonlinear and large-scale processes by variable-weighted kernel Fisher discriminant analysis (KFDA) based on improved biogeography-based optimisation (IBBO), referred to as IBBO-KFDA, where IBBO determines the parameters of variable-weighted KFDA, and variable-weighted KFDA solves the multi-classification overlapping problem. The main contributions, which further improve the performance of KFDA for fault diagnosis, are four-fold. First, a nonlinear fault diagnosis approach with variable-weighted KFDA is developed to maximise separation between overlapping fault samples. Second, the kernel parameters and feature selection of variable-weighted KFDA are optimised simultaneously using IBBO. Third, a single fitness function that combines the erroneous-diagnosis rate with feature cost serves as the target function of the optimisation problem, and a novel mixed kernel function is introduced to improve the classification capability in the feature space and the diagnosis accuracy of IBBO-KFDA. Fourth, an IBBO approach is developed to obtain better solution quality and faster convergence. The proposed IBBO-KFDA method is first validated for feasibility and efficiency on the Tennessee Eastman process benchmark data sets, and then applied to diagnose faults in an automatic gauge control system. Simulation results demonstrate that IBBO-KFDA obtains better kernel parameters and feature vectors with lower computing cost, higher diagnosis accuracy and better real-time capacity.
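The "novel mixed kernel function" is not specified in the abstract; one common construction of a mixed kernel is a convex combination of a Gaussian (RBF) kernel and a polynomial kernel, sketched below under that assumption (all hyper-parameters are illustrative):

```python
import numpy as np

def mixed_kernel(x, y, sigma=1.0, degree=2, c=1.0, w=0.5):
    """Convex combination of a Gaussian (RBF) kernel, good at local
    structure, and a polynomial kernel, good at global structure.
    The mixing weight w and the other hyper-parameters are illustrative
    assumptions; the paper's exact mixed kernel may differ."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    rbf = np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))
    poly = (np.dot(x, y) + c) ** degree
    return w * rbf + (1.0 - w) * poly
```

A convex combination of two valid kernels is itself a valid (positive semi-definite) kernel, so the blend can be dropped into KFDA unchanged.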
83.
The centroid-based classifier is both effective and efficient for document classification. However, it suffers from over-fitting and linear-inseparability problems caused by its fundamental assumptions. To address these problems, we propose a kernel-based hypothesis-margin centroid classifier (KHCC). First, KHCC optimises the class centroids by minimising the hypothesis margin under the structural risk minimisation principle; second, KHCC uses the kernel method to relieve the linear-inseparability problem in the original feature space. For the radial basis function kernel, we further discuss a guideline for tuning the value of its parameter. Experimental results on four well-known data sets indicate that our KHCC algorithm outperforms state-of-the-art algorithms, especially on unbalanced data sets.
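The kernel trick behind a kernelised centroid classifier can be sketched as follows: the squared feature-space distance between a sample and a class centroid is computable from kernel evaluations alone. This is only the baseline distance such a classifier is built on; KHCC's hypothesis-margin optimisation of the centroids is not reproduced here, and the function names and `gamma` are assumptions:

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    # Gram matrix of the radial basis function kernel.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_centroid_dist2(x, class_X, gamma=0.5):
    """Squared distance between phi(x) and the unweighted class centroid
    mean_i phi(x_i) in the RBF feature space, via the kernel expansion
    k(x,x) - 2*mean_i k(x,x_i) + mean_{ij} k(x_i,x_j)."""
    x = np.atleast_2d(np.asarray(x, float))
    class_X = np.asarray(class_X, float)
    return (rbf(x, x, gamma)[0, 0]
            - 2.0 * rbf(x, class_X, gamma).mean()
            + rbf(class_X, class_X, gamma).mean())
```

Classification then assigns a document to the class whose (optimised) centroid is nearest in this feature-space sense.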
84.
In this paper, a novel non-parametric Bayesian compressive sensing algorithm is proposed to improve the reconstruction of sparse entries with a continuous structure by exploiting the location dependence of the entries. The approach combines a logistic model with a location-dependent Gaussian kernel. A variational Bayesian inference scheme is used to compute the posterior distributions and obtain an approximately analytical solution. Unlike conventional cluster-based methods, which exploit only the information of neighbouring pixels, the proposed approach takes the relationships among the pixels of the entire image into account, enabling full use of the underlying sparse signal structure and significantly reducing the number of observations required for sparse reconstruction. Both real-valued applications, including one-dimensional signals and two-dimensional images, and complex-valued applications, including single-snapshot direction-of-arrival (DOA) estimation of distributed sources and inverse synthetic aperture radar (ISAR) imaging with a limited number of pulses, demonstrate the superiority of the proposed algorithm.
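The idea of a location-dependent Gaussian kernel — that nearby entries of the sparse signal are correlated — can be sketched on a 1-D grid. The parameterisation below (`beta`, the function name) is an illustrative assumption, not the paper's exact model:

```python
import numpy as np

def location_kernel(idx, n, beta=2.0):
    """Gaussian correlation between entry locations on a 1-D grid of
    length n: entries close to index `idx` get weight near 1, distant
    ones decay smoothly.  This encodes the 'continuous structure' prior;
    beta controls the correlation length (an assumed parameter)."""
    pos = np.arange(n)
    return np.exp(-((pos - idx) ** 2) / (2.0 * beta ** 2))
```

In the paper such location weights enter through a logistic model over the support probabilities; here only the kernel itself is shown.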
85.
Kernel methods provide high performance in a variety of machine learning tasks. However, their success depends heavily on selecting the right kernel function and setting its parameters properly. Several sets of kernel functions based on orthogonal polynomials have been proposed recently. Besides their good error-rate performance, these kernel functions have only one parameter, chosen from a small set of integers, which greatly simplifies kernel selection. Two sets of orthogonal polynomial kernel functions, namely the triangularly modified Chebyshev kernels and the triangularly modified Legendre kernels, are proposed in this study. Furthermore, we compare the construction methods of several orthogonal polynomial kernels and highlight the similarities and differences among them. Experiments on 32 data sets illustrate and compare these kernel functions in classification and regression scenarios. In general, the orthogonal polynomial kernels differ in accuracy, and most of them can match commonly used kernels such as the polynomial kernel, the Gaussian kernel and the wavelet kernel. Compared with these universal kernels, each orthogonal polynomial kernel has a single, easily optimised parameter, and in support vector classification they store statistically significantly fewer support vectors. The newly presented kernels obtain better generalisation performance in both classification and regression tasks.
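As a baseline for the Chebyshev-type kernels discussed above, one common orthogonal-polynomial kernel construction on scalars in [-1, 1] is the truncated sum of products of Chebyshev polynomials, k(x, y) = Σ_{k=0}^{n} T_k(x) T_k(y). The "triangularly modified" kernels of the paper alter this construction; the plain sum below is only the baseline form:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def chebyshev_kernel(x, y, n=3):
    """Baseline Chebyshev polynomial kernel on scalars in [-1, 1]:
    sum_{k=0}^{n} T_k(x) T_k(y).  The single integer parameter n is the
    polynomial order -- the 'one parameter from a small set of integers'
    advantage mentioned in the abstract."""
    ks = np.arange(n + 1)
    # np.eye(n+1)[k] is the one-hot coefficient vector selecting T_k.
    Tx = np.array([C.chebval(x, np.eye(n + 1)[k]) for k in ks])
    Ty = np.array([C.chebval(y, np.eye(n + 1)[k]) for k in ks])
    return float(Tx @ Ty)
```

Being a finite sum of products of fixed basis functions, this is an explicit finite-dimensional feature map, so positive semi-definiteness holds by construction.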
86.
Most hyper-ellipsoidal clustering (HEC) algorithms use the Mahalanobis distance as the distance measure, and it has been proved that under this condition the cost function of partitional clustering is constant, so HEC cannot actually achieve ellipsoidal clustering. This paper shows that an HEC algorithm using an improved Gaussian kernel can be interpreted as seeking ellipsoidal clusters that are compact in both volume and density, and proposes a practical HEC algorithm, K-HEC, which effectively handles ellipsoidal clusters of different sizes and densities. To cluster data sets of more complex shapes, the K-HEC algorithm is strengthened by using ellipsoids defined in the kernel feature space, yielding the EK-HEC algorithm. Simulation experiments show that the proposed algorithms outperform K-means, fuzzy C-means, GMM-EM and Mahalanobis HEC based on minimum-volume ellipsoids (MVE) in both clustering results and performance, demonstrating their feasibility and effectiveness.
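The Mahalanobis distance that classical HEC relies on — and whose partition cost the abstract notes is constant — can be sketched as follows; the Gaussian-kernel modification proposed in the paper is not reproduced here:

```python
import numpy as np

def mahalanobis2(x, mean, cov):
    """Squared Mahalanobis distance (x - mu)^T Sigma^{-1} (x - mu), the
    ellipsoidal metric underlying classical HEC.  Level sets of this
    distance are ellipsoids shaped by the cluster covariance Sigma."""
    d = np.asarray(x, float) - np.asarray(mean, float)
    # Solve Sigma z = d instead of forming the explicit inverse.
    return float(d @ np.linalg.solve(np.asarray(cov, float), d))
```

With the identity covariance this reduces to the squared Euclidean distance; an anisotropic covariance stretches the unit ball into the ellipsoid that gives HEC its name.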
87.
Distant-supervision relation extraction algorithms automatically align relations in a knowledge base with unlabelled text so that relations can be extracted from the text. Most existing distant-supervision relation extraction algorithms are feature-based. However, when such algorithms convert instances into features, key information is often not prominent and the data set may be linearly inseparable, which degrades extraction performance. This paper proposes a pattern-based distant-supervision relation extraction algorithm that introduces pattern-based vectors and uses kernel-based machine learning to overcome these problems. Experimental results show that the proposed algorithm effectively improves the accuracy of distant-supervision relation extraction.
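A pattern-based instance representation of the kind fed into a kernel learner can be sketched as a binary vector of pattern hits; the pattern strings and function name below are purely illustrative, not the paper's pattern inventory:

```python
def pattern_vector(sentence, patterns):
    """Encode a sentence as a binary vector: component k is 1 iff the
    k-th textual pattern occurs in the sentence.  A kernel machine can
    then operate on these vectors (or on a kernel defined over the
    pattern matches directly)."""
    return [1 if p in sentence else 0 for p in patterns]
```

Compared with flat lexical features, pattern hits keep the relation-bearing context (e.g. "was born in") prominent, which is the point the abstract makes about key information.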
88.
The non-subsampled contourlet transform (NSCT) is multi-scale and multi-directional; it extracts image texture and structure information accurately and models the multi-resolution property of the human visual system well. Based on this, a general-purpose blind (no-reference) image quality assessment algorithm using the NSCT is proposed. First, the NSCT is applied to the image in the spatial domain; then, features that effectively reflect the degree of perceived distortion are extracted from each directional subband: the high-frequency amplitude, the average gradient and the information entropy. Finally, the features are fed into an efficient hierarchical multiple kernel learning machine to predict the quality score of the image. Cross-validation experiments on a mixed-distortion database and three single-distortion databases show that the algorithm performs well, predicts the quality of distorted images accurately and agrees well with subjective assessment.
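Two of the three subband features listed above, the average gradient and the information entropy, can be sketched with NumPy; the exact definitions used in the paper (e.g. the gradient normalisation and histogram binning) are assumptions:

```python
import numpy as np

def avg_gradient(band):
    """Mean gradient magnitude of a subband, a common sharpness/contrast
    feature; the 1/2 normalisation inside the root is one conventional
    definition of 'average gradient' and is assumed here."""
    gy, gx = np.gradient(band.astype(float))
    return float(np.sqrt((gx ** 2 + gy ** 2) / 2.0).mean())

def entropy(band, bins=256):
    """Shannon entropy (bits) of the subband's intensity histogram."""
    counts, _ = np.histogram(band, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())
```

Concatenating such features across the NSCT's directional subbands gives the feature vector the hierarchical multiple kernel learner is trained on.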
89.
This article describes in detail how to build an embedded Linux system on a target board based on the S3C6410 microprocessor. It first introduces the method of porting embedded Linux, then the setup of the cross-compilation environment, and finally discusses in detail the process of porting the BootLoader, the Linux kernel and the root file system to the S3C6410. The results show that the method is feasible and that the rebuilt operating system runs stably on the target board.
90.
The traditional single-window complex-valued discrete Gabor transform has a fixed time-frequency resolution: constrained by the time-width/bandwidth trade-off of the window function, i.e. the uncertainty principle, its time resolution and frequency resolution are in conflict. To improve the time-frequency resolution of the traditional discrete Gabor transform and speed up its computation, a real-valued discrete Gabor transform based on multiple Gaussian windows is proposed. Experimental results show that it effectively improves concentration in the joint time-frequency domain, thereby providing a fast way to compute the evolutionary spectrum of non-stationary signals.
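A Gaussian-windowed short-time Fourier analysis, the building block of the multi-window Gabor transform described above, can be sketched as follows; the paper's real-valued fast algorithm and its multi-window fusion rule are not reproduced, and the window length, sigma and hop are illustrative:

```python
import numpy as np

def gaussian_window(n, sigma):
    """Length-n Gaussian analysis window centred at (n-1)/2; sigma sets
    the time-width/bandwidth trade-off the abstract refers to."""
    t = np.arange(n) - (n - 1) / 2.0
    return np.exp(-t ** 2 / (2.0 * sigma ** 2))

def gabor_spectrogram(x, win, hop):
    """Magnitude spectrogram of a real signal with a sliding Gaussian
    window.  Computing this for several windows of different sigma and
    fusing the results is the multi-window idea in the abstract."""
    n = len(win)
    frames = [x[i:i + n] * win for i in range(0, len(x) - n + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))
```

A narrow window (small sigma) gives fine time resolution, a wide one fine frequency resolution; combining several spectrograms sidesteps committing to a single trade-off.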