991.
Table semantic summarization compresses a data table through data generalization: guided by concept hierarchies over the values of its multi-dimensional attributes, a large number of tuples with detailed semantics is replaced by a small number of generalized tuples with abstract semantics. Given the original table and the concept hierarchies of its attribute values, the goal is to produce a summary table that meets a prescribed compression ratio while preserving as much semantic information as possible. Existing algorithms search for the best combination of generalized tuples by converting the task into a Set-Covering problem; despite optimizations such as preprocessing and hierarchical processing, they still suffer from high conversion overhead, complex algorithmic frameworks, and poor scalability to high-dimensional attributes. This paper defines a metric space over the multi-dimensional attribute hierarchies, recasting the task as a clustering problem in a multi-dimensional hierarchical space, and introduces Dewey encoding to speed up the conversion. Two clustering algorithms, one based on fast-converging hierarchical agglomeration and one based on adjusting the resolution of the hierarchical space, are proposed to build the summary table efficiently. Experiments on real data sets show that the new algorithms outperform existing methods in both execution efficiency and summary quality.
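The Dewey-encoding idea in the abstract above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: each hierarchy node is encoded by its path from the root, and the distance between two attribute values is the number of generalization steps up to their lowest common ancestor, which turns the concept hierarchy into a metric space suitable for clustering.

```python
# Sketch: Dewey-encoded concept hierarchy (illustrative example).
# A node's Dewey code is its root-to-node path; the distance between two
# values is the total number of steps to climb to their lowest common
# ancestor, derived from the longest common prefix of the codes.

def dewey_distance(a, b):
    """Steps from each value up to the lowest common ancestor."""
    lcp = 0
    for x, y in zip(a, b):
        if x != y:
            break
        lcp += 1
    return (len(a) - lcp) + (len(b) - lcp)

# Dewey codes for leaves of a location hierarchy: root / country / city
codes = {
    "Beijing":  (1, 1, 1),
    "Shanghai": (1, 1, 2),
    "Paris":    (1, 2, 1),
}

print(dewey_distance(codes["Beijing"], codes["Shanghai"]))  # siblings -> 2
print(dewey_distance(codes["Beijing"], codes["Paris"]))     # cousins  -> 4
```

Because the distance is computed from code prefixes alone, no tree traversal is needed at clustering time, which is the efficiency gain the abstract attributes to Dewey encoding.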
992.
To address the problem that existing clustering algorithms shorten the overall lifetime of a wireless sensor network because of unbalanced energy load, this paper analyzes the basic idea, clustering mechanism, and intra-cluster communication of the classic clustering algorithm LEACH. The unbalanced energy load is mitigated by modifying the election threshold, and the improved algorithm is evaluated with the network simulator NS2. Simulation results show that the improved algorithm balances node energy consumption, produces more reasonable clusters, and effectively prolongs the network lifetime.
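The threshold the abstract refers to is the classic LEACH cluster-head election threshold from the LEACH literature; the paper's improvement modifies it (for example by weighting with residual energy, a detail not given in the abstract). A minimal sketch of the standard election round:

```python
import random

# Classic LEACH cluster-head election. Each round r, an eligible node
# elects itself cluster head with probability
#   T(n) = p / (1 - p * (r mod 1/p)),
# where p is the desired fraction of heads; nodes that served as head in
# the current epoch are ineligible (T(n) = 0).

def leach_threshold(p, r, was_head_recently):
    if was_head_recently:
        return 0.0
    return p / (1 - p * (r % (1 / p)))

def elect_cluster_heads(node_ids, p, r, recent_heads):
    heads = []
    for n in node_ids:
        t = leach_threshold(p, r, n in recent_heads)
        if random.random() < t:   # self-election with probability T(n)
            heads.append(n)
    return heads
```

Note how the denominator shrinks as the epoch progresses, raising T(n) so that nodes that have not yet served become increasingly likely to be elected; this rotation is what spreads the energy load.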
993.
付治  王红军  李天瑞  滕飞  张继 《软件学报》2020,31(4):981-990
Clustering is a research hotspot in machine learning, and weakly supervised learning is an important direction within semi-supervised learning with broad application scenarios. Building on the study of clustering and weakly supervised learning, a weakly supervised learning framework based on k labeled samples is proposed. The framework first expands the labeled set using clustering and clustering confidence. Next, the energy function of the restricted Boltzmann machine (RBM) is modified to obtain an RBM learning model based on k labeled samples. Finally, inference for the model is derived and the corresponding algorithms are designed. Comparative experiments on public data sets show that the proposed framework performs well.
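For reference, the standard RBM energy function that the abstract's model modifies can be sketched as below; the k-labeled-sample modification itself is not reproduced, since its form is not given in the abstract.

```python
# Standard (unmodified) RBM energy over visible units v and hidden units h:
#   E(v, h) = -sum_i a_i v_i - sum_j b_j h_j - sum_ij v_i W_ij h_j
# with visible biases a, hidden biases b, and weight matrix W.

def rbm_energy(v, h, W, a, b):
    e = -sum(ai * vi for ai, vi in zip(a, v))
    e -= sum(bj * hj for bj, hj in zip(b, h))
    e -= sum(v[i] * W[i][j] * h[j]
             for i in range(len(v)) for j in range(len(h)))
    return e

v, h = [1, 0], [1]
W = [[0.5], [0.2]]
a, b = [0.1, 0.3], [0.4]
print(rbm_energy(v, h, W, a, b))  # -(0.1) - (0.4) - (1*0.5*1) = -1.0
```

Low energy corresponds to high joint probability of (v, h); a weakly supervised variant would add terms coupling the energy to the k known labels.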
994.
郭茂祖  张彬  赵玲玲  张昱 《计算机应用》2020,40(11):3159-3165
Existing studies of activity semantic recognition extract only sequential and periodic features along the time dimension and neglect deeper spatial information. To address this, an activity semantic recognition method based on joint features and eXtreme Gradient Boosting (XGBoost) is proposed. First, periodic activity features are mined from temporal information, and latitude-longitude features from spatial information. Then, Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is applied to the latitude-longitude data to extract regional-heat features, and all features are combined into a vector that characterizes user activity semantics. Finally, the XGBoost ensemble learning algorithm builds the activity semantic recognition model. On two public FourSquare check-in data sets, the model with joint features improves recognition accuracy by 28 percentage points over a model with temporal features alone, and by 30 and 5 percentage points over the Context-Aware Hybrid (CAH) and Spatial-Temporal Activity Preference (STAP) methods, respectively. The results show that the proposed method is more accurate and effective for activity semantic recognition than the compared methods.
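The DBSCAN step above can be sketched with a minimal pure-Python implementation over (lat, lon) points; the eps/min_pts values and the toy coordinates are illustrative assumptions, not the paper's settings.

```python
# Minimal DBSCAN over 2-D points, as a stand-in for extracting region
# "heat" features from check-in coordinates. Returns one cluster label per
# point; -1 marks noise.

def dbscan(points, eps, min_pts):
    def neighbors(i):
        return [j for j, q in enumerate(points)
                if (points[i][0]-q[0])**2 + (points[i][1]-q[1])**2 <= eps**2]

    labels = [None] * len(points)
    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1            # noise (may be claimed later as border)
            continue
        labels[i] = cid
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cid       # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cid
            nb = neighbors(j)
            if len(nb) >= min_pts:    # core point: expand the cluster
                queue.extend(nb)
        cid += 1
    return labels

pts = [(0, 0), (0, 0.1), (0.1, 0), (5, 5), (5, 5.1), (9, 9)]
labels = dbscan(pts, eps=0.5, min_pts=2)
print(labels)  # two dense regions plus one noise point
```

Per-cluster check-in counts over such labels would give the regional-heat features that the abstract combines with the temporal features.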
995.
In this work, a framework that can automatically create cartoon images with low computation resources and small training datasets is proposed. The proposed system performs region segmentation and learns a region relationship tree from each training image. The segmented regions are clustered automatically with an enhanced clustering mechanism that requires no prior knowledge of the number of clusters. According to the topology represented by the region relationship tree and the clustering results, the regions are reassembled to create new images. A swarm intelligence optimization procedure coordinates the regions to optimized sizes and positions in the created image. Rigid deformation using moving least squares is applied to the regions to generate more variety in the created images. Compared with methods based on Generative Adversarial Networks, the proposed framework creates better images with limited computation resources and a very small number of training samples.  
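The swarm-intelligence coordination step can be sketched with a generic particle swarm optimizer; the actual layout cost, which would combine the region relationship tree and overlap constraints, is not given in the abstract, so a toy quadratic cost stands in for it here.

```python
import random

# Generic particle swarm optimization (PSO) sketch for coordinating region
# placements: each particle encodes a candidate (x, y, scale) and moves
# toward the best layouts found so far. The cost function is a toy
# placeholder, not the paper's layout objective.

def pso(cost, dim, n_particles=20, iters=100, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    gbest = min(pbest, key=cost)                # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=cost)
    return gbest

# Toy objective: place one region at target (0.5, 0.5) with scale 1.0.
layout_cost = lambda p: (p[0]-0.5)**2 + (p[1]-0.5)**2 + (p[2]-1.0)**2
best = pso(layout_cost, dim=3)
```

In the paper's setting each particle would encode the sizes and positions of all regions at once, with the cost penalizing layouts that violate the learned topology.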
996.
In this paper, we present a bottom-up approach for robust spotting of text in scenes. In the proposed technique, character candidates are first detected by our character detector, which leverages the strengths of an Extremal Region (ER) detector and an Aggregate Channel Feature (ACF) detector for high character-detection recall. The real characters are then identified by a novel convolutional neural network (CNN) filter for high character-detection precision. A hierarchical clustering algorithm combines multiple visual and geometrical features to group characters into word-proposal regions for word recognition. The proposed technique has been evaluated on several scene text spotting datasets, and experiments show superior spotting performance.  
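The grouping step can be illustrated with a much-simplified single-link clustering on horizontal gaps between character boxes; the paper's algorithm combines multiple visual and geometric features, and the gap threshold relative to character height used here is an assumption for illustration only.

```python
# Simplified word grouping: characters sorted left-to-right are chained
# into one word while the horizontal gap between consecutive boxes stays
# below a fraction of the mean character height.

def group_words(boxes, gap_ratio=0.6):
    """boxes: (x, y, w, h) per character; returns a list of words,
    each word being a list of character boxes."""
    boxes = sorted(boxes, key=lambda b: b[0])
    words, current = [], [boxes[0]]
    for box in boxes[1:]:
        prev = current[-1]
        gap = box[0] - (prev[0] + prev[2])       # horizontal gap to previous box
        mean_h = (prev[3] + box[3]) / 2
        if gap <= gap_ratio * mean_h:
            current.append(box)
        else:
            words.append(current)
            current = [box]
    words.append(current)
    return words

chars = [(0, 0, 10, 20), (12, 0, 10, 20), (24, 0, 10, 20), (60, 0, 10, 20)]
print(len(group_words(chars)))  # -> 2 word proposals
```

A full hierarchical clustering would replace the single gap feature with a combined distance over color, stroke, and geometry, merging clusters bottom-up until no pair is close enough.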
997.
When random undersampling is applied to imbalanced communication texts to improve classifier performance, the sampled subset can yield biased estimates. To address this, an undersampling algorithm based on negative-selection density clustering (NSDC-DS) is proposed. The self/nonself anomaly-detection mechanism of the negative selection algorithm is used to improve conventional clustering: the sample centers serve as detectors and the samples to be clustered as the self set, and the two are matched for anomalies. Negative-selection density clustering then evaluates sample similarity to improve traditional undersampling, and an NBSVM classifier identifies spam in the sampled communication texts. PCA estimates the information content of the samples, and an improved PCA-SGD algorithm tunes the model parameters, completing the semi-supervised recognition of communication spam text. To verify the improved algorithm, comparative experiments are conducted on several data sets, including the imbalanced communication texts, against negative-selection density clustering, the NSDC-DS algorithm, PCA-SGD, and traditional models. Experimental results show that the improved model not only recognizes communication spam text well but also converges quickly and stably.
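For context, the baseline that NSDC-DS improves upon is plain random undersampling of the majority class; a minimal sketch of that class-balancing step is below (the density-clustering-guided selection itself is not reproduced, since its details are not in the abstract).

```python
import random

# Baseline random undersampling: drop majority-class samples uniformly at
# random until the classes are balanced. NSDC-DS replaces the uniform
# selection with a density-clustering-guided one to reduce sampling bias.

def undersample(samples, labels, majority, seed=0):
    rng = random.Random(seed)
    minority_n = sum(1 for y in labels if y != majority)
    maj_idx = [i for i, y in enumerate(labels) if y == majority]
    keep = set(rng.sample(maj_idx, minority_n))      # retained majority indices
    return [(s, y) for i, (s, y) in enumerate(zip(samples, labels))
            if y != majority or i in keep]

data = list(range(10))
labs = [1] * 7 + [0] * 3          # 7 majority, 3 minority
balanced = undersample(data, labs, majority=1)
print(len(balanced))              # -> 6 (3 per class)
```

The bias the abstract mentions arises exactly here: a uniform draw can discard the majority samples that carry the most discriminative information, which density-aware selection tries to avoid.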
998.
A modified differential evolution (DE) algorithm is presented for clustering the pixels of an image in the gray-scale intensity space. The algorithm requires no prior information about the number of naturally occurring clusters in the image. It uses a kernel-induced similarity measure instead of the conventional sum-of-squares distance. The kernel function makes it possible to partition data that are linearly non-separable and non-hyperspherical in the original input space into homogeneous groups in a transformed high-dimensional feature space. A novel search-variable representation scheme selects the optimal number of clusters from several possible choices. Extensive performance comparison over a test suite of 10 gray-scale images, with objective comparison against manually segmented ground truth, indicates that the proposed algorithm has an edge over several state-of-the-art algorithms for automatic multi-class image segmentation.  
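The kernel-induced distance mentioned above follows from the standard identity d²(x, z) = K(x,x) + K(z,z) − 2 K(x,z) in the implicit feature space; with a Gaussian kernel, K(x,x) = 1, so d² = 2 − 2 K(x,z). A minimal sketch (the sigma value is an illustrative choice):

```python
import math

# Kernel-induced distance: compare points in the implicit feature space of
# a Gaussian kernel K without ever computing the mapping explicitly, via
#   d^2(x, z) = K(x,x) + K(z,z) - 2 K(x,z) = 2 - 2 K(x,z).

def gaussian_kernel(x, z, sigma=1.0):
    sq = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-sq / (2 * sigma ** 2))

def kernel_distance(x, z, sigma=1.0):
    return math.sqrt(2 - 2 * gaussian_kernel(x, z, sigma))

print(kernel_distance([0.0], [0.0]))  # identical points -> 0.0
```

Substituting this distance for the sum-of-squares term in the clustering objective is what lets the DE-based algorithm separate groups that are not linearly separable in intensity space.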
999.
In anomaly intrusion detection, modeling the normal behavior of the activities performed by a user is an important issue. To extract normal behavior from a user's activities, conventional data mining techniques are widely applied to a finite audit data set. However, these approaches model only the static behavior of a user in that data set. This drawback can be overcome by viewing a user's continuous activities as an audit data stream. This paper proposes an anomaly intrusion detection method that continuously models the normal behavior of a user over the audit data stream. A set of features represents the characteristics of an activity. For each feature, clusters of the feature values of the activities observed so far in the stream are identified by a statistical grid-based clustering algorithm for data streams. Each cluster represents the frequency range of the activities with respect to the feature. As a result, without physically maintaining any historical activity of the user, the user's new activities can be continuously reflected in the ongoing results. At the same time, various statistics of the activities related to the identified clusters are modeled to improve the performance of anomaly detection. A series of experiments illustrates the various characteristics of the proposed algorithm.  
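The per-feature grid statistics described above can be sketched as follows; the cell width and density threshold are illustrative assumptions, not the paper's parameters. Each incoming feature value only updates one cell count, so no historical activity needs to be retained.

```python
# Per-feature grid statistics over an audit data stream: each observed
# feature value is binned into a fixed-width cell whose count is updated
# online; ranges of dense cells approximate the clusters of normal behavior.

class FeatureGrid:
    def __init__(self, cell_width=1.0):
        self.cell_width = cell_width
        self.counts = {}          # cell index -> frequency
        self.total = 0

    def update(self, value):
        cell = int(value // self.cell_width)
        self.counts[cell] = self.counts.get(cell, 0) + 1
        self.total += 1

    def dense_cells(self, min_fraction=0.1):
        """Cells holding at least min_fraction of all observations."""
        thresh = min_fraction * self.total
        return sorted(c for c, n in self.counts.items() if n >= thresh)

grid = FeatureGrid(cell_width=10.0)
for v in [3, 5, 7, 8, 12, 95]:    # e.g. session durations of one user
    grid.update(v)
print(grid.dense_cells(0.3))      # -> [0] : the 0-10 range dominates
```

An activity whose feature value falls outside every dense cell range would then be flagged as a candidate anomaly.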
1000.
In this paper, a new algorithm named polar self-organizing map (PolSOM) is proposed. PolSOM is constructed on a 2-D polar map with two variables, radius and angle, which represent data weight and feature, respectively. Compared with traditional algorithms that project data onto a Cartesian map using the Euclidean distance as the only variable, PolSOM not only preserves the data topology and the inter-neuron distances but also visualizes the differences among clusters in terms of weight and feature. In PolSOM, the visualization map is divided into tori and circular sectors by radial and angular coordinates, and neurons are placed on the boundary intersections of circular sectors and tori as benchmarks to attract data with similar attributes. Each datum is projected onto the map with polar coordinates that are trained towards the winning neuron. As a result, similar data group together, and data characteristics are reflected by their positions on the map. Simulations and comparisons with Sammon's mapping, SOM, and ViSOM on four data sets demonstrate the effectiveness of the PolSOM algorithm for multidimensional data visualization.  
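The radius/weight and angle/feature split can be illustrated with a toy projection; this is only a static sketch of the idea (magnitude to radius, feature composition to angle), not the trained PolSOM update rule, and the angle formula here is an assumption for illustration.

```python
import math

# Toy polar projection in the spirit of PolSOM: radius encodes the datum's
# overall weight (vector magnitude), while the angle encodes its feature
# composition (here: magnitude-weighted centroid of feature indices).

def to_polar(datum):
    radius = math.sqrt(sum(x * x for x in datum))
    total = sum(abs(x) for x in datum) or 1.0
    angle = 2 * math.pi * sum(i * abs(x) for i, x in enumerate(datum)) / (
        total * max(len(datum) - 1, 1))
    return radius, angle

r, theta = to_polar([3.0, 4.0])
print(round(r, 2))  # -> 5.0
```

In the actual algorithm these polar coordinates are not computed in closed form but trained toward the winning neuron, so that the final radius and angle reflect learned weight and feature similarity.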