1.
Wireless communication devices are proliferating on the modern battlefield, and accurately acquiring information about individual emitters has become a research hotspot as well as a difficulty. For communication radios, a sorting and identification technique is proposed. Starting from the physical-layer characteristics of the radios, the technique applies K-means clustering to the fine-grained features of their radiated signals to achieve sorting; during sorting, it extracts a feature attribute value for each individual emitter, and unknown signals are then identified by a correlation computation against these feature attribute values. The technique requires neither prior knowledge nor a training phase. Experimental validation shows that it is feasible, efficient, and easy to implement in engineering practice.
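As a rough illustration of the pipeline this abstract describes, the sketch below clusters synthetic fine-grained signal features with K-means, takes each cluster centroid as that emitter's feature template, and identifies an unknown signal by correlation against the templates. The synthetic signatures, feature dimension, and emitter count are assumptions for the demo, not the authors' implementation.

```python
# Sort-then-identify sketch: K-means on fine-grained signal features,
# per-cluster templates, correlation-based identification.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_emitters, n_features = 3, 8

# Stand-in for physical-layer features of intercepted pulses: each emitter
# has a distinct signature, observed with measurement noise.
signatures = rng.uniform(-1, 1, size=(n_emitters, n_features))
features = np.vstack([
    sig + rng.normal(scale=0.05, size=(200, n_features)) for sig in signatures
])

# Step 1: unsupervised sorting -- cluster pulses by emitter.
km = KMeans(n_clusters=n_emitters, n_init=10, random_state=0).fit(features)

# Step 2: the "feature attribute value" of each individual = cluster centroid.
templates = km.cluster_centers_

def identify(x: np.ndarray) -> int:
    """Identify an unknown pulse by maximum correlation with the templates."""
    return int(np.argmax([np.corrcoef(x, t)[0, 1] for t in templates]))

print(identify(features[0]) == km.labels_[0])  # expected: True
```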
2.
Image color clustering is a basic technique in image processing and computer vision, often applied in image segmentation, color transfer, contrast enhancement, object detection, and skin color capture. Various clustering algorithms have been employed for image color clustering in recent years. However, most require a large amount of memory or a predetermined number of clusters, and some are sensitive to their parameter settings. To tackle these problems, we propose an image color clustering method named Student's t-based density peaks clustering with superpixel segmentation (tDPCSS), which obtains clustering results automatically, does not require a large amount of memory, and depends on neither the algorithm's parameters nor the number of clusters. In tDPCSS, superpixels are obtained by automatic and constrained simple non-iterative clustering, which automatically reduces the image data volume. A Student's t kernel function and a cluster center selection method eliminate the dependence of density peak clustering on its parameters and on the number of clusters, respectively. The experiments undertaken in this study confirm that the proposed approach outperforms k-means, fuzzy c-means, mean-shift clustering, and density peak clustering with superpixel segmentation in the accuracy of the cluster centers and the validity of the clustering results.
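The key change relative to standard density peaks clustering is computing local density with a Student's t (Cauchy-like) kernel, which removes the cutoff-distance parameter. A minimal sketch of that core step follows; the superpixel stage and the automatic center-selection rule are omitted, and the data and cluster count are illustrative assumptions.

```python
# Density peaks clustering with a Student's t kernel for local density.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def t_density_peaks(X: np.ndarray, n_clusters: int) -> np.ndarray:
    d = squareform(pdist(X))                   # pairwise distances
    rho = np.sum(1.0 / (1.0 + d**2), axis=1)   # Student's t kernel density
    order = np.argsort(-rho)                   # indices by decreasing density
    delta = np.full(len(X), np.inf)
    nearest = np.zeros(len(X), dtype=int)
    for rank, i in enumerate(order[1:], start=1):
        higher = order[:rank]                  # all points denser than i
        j = higher[np.argmin(d[i, higher])]
        delta[i], nearest[i] = d[i, j], j
    delta[order[0]] = d[order[0]].max()        # densest point: max distance
    centers = np.argsort(-(rho * delta))[:n_clusters]  # peaks of gamma = rho*delta
    labels = np.full(len(X), -1)
    labels[centers] = np.arange(n_clusters)
    for i in order:                            # assign by nearest denser neighbor
        if labels[i] < 0:
            labels[i] = labels[nearest[i]]
    return labels

X = np.vstack([np.random.randn(50, 3) + c for c in ([0, 0, 0], [5, 5, 5])])
print(t_density_peaks(X, n_clusters=2))
```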
3.
The process of electrodeposition can be described in terms of a reaction-diffusion partial differential equation (PDE) system that models the dynamics of the morphology profile and the chemical composition. Here we fit such a model to the different patterns present in a range of electrodeposited and electrochemically modified alloys using PDE-constrained optimization. Experiments with simulated data show how the parameter space of the model can be divided into zones corresponding to the different physical patterns by examining the structure of an appropriate cost function. We then use real data to demonstrate how numerical optimization of the cost function allows the model to fit the rich variety of patterns arising in experiments. The computational technique developed provides a potential tool for tuning experimental parameters to produce desired patterns.
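A toy version of this fitting procedure is sketched below: simulate a 1-D reaction-diffusion profile, define a least-squares cost against observed data, and minimize over the model parameters. The equation (Fisher-KPP), grid, noise level, and parameter bounds are stand-ins for the richer electrodeposition model of the paper.

```python
# PDE-constrained least-squares fit of a 1-D reaction-diffusion model.
import numpy as np
from scipy.optimize import minimize

def simulate(D, r, steps=400, n=64):
    """Explicit Euler for u_t = D*u_xx + r*u*(1-u) on a periodic unit interval."""
    u = np.zeros(n)
    u[n // 2] = 1.0                         # localized initial deposit
    dx, dt = 1.0 / n, 2e-4                  # dt keeps D <= 0.2 numerically stable
    for _ in range(steps):
        lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
        u = u + dt * (D * lap + r * u * (1 - u))
    return u

rng = np.random.default_rng(0)
observed = simulate(0.05, 8.0) + rng.normal(0, 0.01, 64)  # "experimental" data

def cost(theta):                            # misfit between model and data
    return np.sum((simulate(*theta) - observed) ** 2)

fit = minimize(cost, x0=[0.01, 4.0], method="Nelder-Mead",
               bounds=[(1e-3, 0.2), (1.0, 20.0)])
print(fit.x)  # expected: close to (0.05, 8.0)
```

Examining how this cost behaves over a grid of (D, r) values is the simulated-data experiment the abstract alludes to: flat basins and sharp valleys in the cost surface delimit the zones of distinct physical patterns.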
4.
A process object is an instance of a process, represented as a graph of vertices and edges, with different types for the objects themselves and for the associations between them. In large-scale data, many changes are reflected in these graphs, and finding appropriate real-time data for process objects has recently become a hot research topic. Data sampling is one way of detecting changes in process objects, and the sampling must adapt to the underlying distribution of the data stream. In this paper, we propose an adaptive data sampling mechanism that finds appropriate data for modeling. First of all, we use concept drift to partition the life cycle of a process object. Then, entity community detection is proposed to find changes. Finally, we propose stream-based real-time optimization of data sampling. The contributions of this paper are the concept-drift partitioning, the community detection, and the stream-based real-time computing. Experiments show the effectiveness and feasibility of the proposed adaptive data sampling mechanism for process objects.
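The sketch below illustrates the general idea of drift-aware stream sampling: keep a low base sampling rate and raise it temporarily when a simple concept-drift test fires. The drift test (an outlier check against a sliding window), the window size, and the rates are illustrative assumptions, not the paper's mechanism.

```python
# Drift-aware adaptive sampling over a data stream.
import random
from collections import deque

def adaptive_sample(stream, window=100, base_rate=0.05, burst_rate=0.5, z=3.0):
    ref = deque(maxlen=window)          # reference window of recent values
    rate = base_rate
    for x in stream:
        if len(ref) == window:
            mean = sum(ref) / window
            var = sum((v - mean) ** 2 for v in ref) / window
            std = max(var ** 0.5, 1e-9)
            if abs(x - mean) > z * std:                       # crude drift signal
                rate, ref = burst_rate, deque(maxlen=window)  # resample densely
            else:
                rate = max(base_rate, rate * 0.99)            # decay back down
        ref.append(x)
        if random.random() < rate:
            yield x

# Usage: a stream whose distribution shifts halfway through.
stream = [random.gauss(0, 1) for _ in range(500)] + \
         [random.gauss(5, 1) for _ in range(500)]
print(len(list(adaptive_sample(stream))))   # more samples drawn after the shift
```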
5.
Bilingual word embedding is usually achieved by mapping from the source-language space to the target-language space, learning the minimum-distance linear transformation that embeds source vectors into the target space. However, large parallel corpora are hard to obtain, which limits embedding accuracy. Targeting cross-lingual word embedding with unequal corpus sizes and scarce bilingual resources, this paper proposes a cross-lingual word embedding method based on a small dictionary and non-parallel corpora. First, the monolingual word vectors are normalized, and the optimal orthogonal linear transformation over the small-dictionary word pairs provides the initial value for gradient descent. Then the large source-language (English) corpus is clustered; with the help of the small dictionary, the source-language words corresponding to each cluster are found, and the mean vector of each cluster, together with the mean vectors of the corresponding source- and target-language words, is used to build new bilingual vector correspondences. These are added to the small dictionary, generalizing and expanding it. Finally, gradient descent over the expanded dictionary yields the optimal cross-lingual embedding mapping model. Experiments on English-Italian, English-German, and English-Finnish show that the method reduces the number of gradient-descent iterations and the training time, while achieving good accuracy in cross-lingual word embedding.
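The orthogonal initialization over the seed dictionary is the classical orthogonal Procrustes problem, which has a closed-form SVD solution. A minimal sketch follows, with synthetic vectors standing in for real embeddings; the clustering-based dictionary expansion and the subsequent gradient descent are omitted.

```python
# Orthogonal Procrustes mapping over a small seed dictionary.
import numpy as np

def normalize(M):
    return M / np.linalg.norm(M, axis=1, keepdims=True)

def procrustes(X, Y):
    """Closed-form orthogonal W minimizing ||X @ W - Y||_F."""
    U, _, Vt = np.linalg.svd(Y.T @ X)
    return (U @ Vt).T          # W orthogonal, maps source rows toward target rows

rng = np.random.default_rng(1)
d, n_pairs = 50, 60
src = normalize(rng.normal(size=(n_pairs, d)))      # source-language vectors
true_W, _ = np.linalg.qr(rng.normal(size=(d, d)))   # hidden rotation
tgt = src @ true_W                                  # aligned target vectors

W = procrustes(src, tgt)
print(np.allclose(src @ W, tgt, atol=1e-6))         # expected: True
```

In the method described above, this W would serve only as the gradient-descent starting point; the expanded dictionary built from cluster means then refines it.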
6.
7.
To address the low accuracy, low execution efficiency, and susceptibility to false positives of functional-module mining methods that combine spectral clustering with fuzzy C-means (FCM) clustering on protein-protein interaction (PPI) networks, a fuzzy-spectral-clustering-based method for mining functional modules in uncertain PPI networks (FSC-FM) is proposed. First, an uncertain PPI network model is constructed, in which the edge clustering coefficient assigns each protein interaction an existence-probability measure, overcoming the influence of false positives on the experimental results. Second, a manifold-distance strategy based on the edge clustering coefficient (FEC) improves the similarity computation in spectral clustering, resolving the sensitivity of spectral clustering to its scale parameter; spectral clustering is then used to preprocess the uncertain PPI data, reducing its dimensionality and improving clustering accuracy. Third, a density-based probabilistic center selection strategy (DPCS) is designed to resolve the sensitivity of fuzzy C-means to the initial cluster centers and the number of clusters, and FCM clustering is applied to the preprocessed PPI data, improving execution efficiency and sensitivity. Finally, an improved edge expected density (EED) measure filters the mined protein functional modules. Running the algorithms on the yeast DIP dataset shows that, compared with the detection of protein complexes based on an uncertain graph model (DCU), FSC-FM improves the F-measure by 27.92% and the execution efficiency by 27.92%; it also achieves a higher F-measure and execution efficiency than the complex detection method for dynamic PPI networks (CDUN), the evolutionary algorithm (EA), and the medical gene or protein prediction algorithm (MGPPA). The experimental results show that FSC-FM is suitable for mining functional modules in uncertain PPI networks.
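A compact sketch of the spectral-preprocessing-plus-FCM backbone is given below: embed the network into a low-dimensional spectral space, then soft-cluster it with fuzzy c-means. The edge-clustering-coefficient probabilities, the FEC similarity, the DPCS center initialization, and the EED filter of FSC-FM are all omitted; the graph is synthetic.

```python
# Spectral embedding followed by fuzzy c-means on a synthetic two-module graph.
import numpy as np

def spectral_embedding(W, k):
    """Eigenvectors of the k smallest eigenvalues of the normalized Laplacian."""
    d = np.maximum(W.sum(axis=1), 1e-12)
    D_is = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(W)) - D_is @ W @ D_is
    _, vecs = np.linalg.eigh(L)              # eigh returns ascending eigenvalues
    return vecs[:, :k]

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Standard FCM: alternate membership and center updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]       # weighted centers
        dist = np.linalg.norm(X[:, None] - V[None], axis=2) + 1e-12
        U = dist ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)              # membership update
    return U.argmax(axis=1), U

# Two dense blocks joined by sparse edges, mimicking two functional modules.
rng = np.random.default_rng(2)
A1 = (rng.random((40, 40)) < 0.6).astype(float)
A2 = (rng.random((40, 40)) < 0.6).astype(float)
B = (rng.random((40, 40)) < 0.05).astype(float)
W = np.block([[A1, B], [B.T, A2]])
W = np.triu(W, 1); W = W + W.T                         # undirected, no self-loops

labels, U = fuzzy_c_means(spectral_embedding(W, k=2), c=2)
print(labels[:40].mean(), labels[40:].mean())          # expected: clean 0/1 split
```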
8.
An organization must perform readiness-relevant activities to ensure the successful implementation of an enterprise resource planning (ERP) system. This paper develops a novel approach to managing these interrelated activities in preparation for implementing an ERP system. The approach enables an organization to evaluate its ERP implementation readiness by assessing, with fuzzy cognitive maps, the degree to which it can achieve the interrelated readiness-relevant activities. Based on the interrelationship degrees among the activities, the approach clusters the activities into manageable groups and prioritizes them. Scenario analysis is then conducted to help work out a readiness improvement plan.
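The fuzzy-cognitive-map evaluation step can be pictured as follows: readiness activities are nodes, signed weights encode mutual influence, and node activations are iterated through a squashing function until they stabilize. The weight matrix and activity names below are invented for illustration; they are not the paper's model.

```python
# Fuzzy cognitive map iterated to a steady state.
import numpy as np

def fcm_steady_state(W, a0, lam=1.0, iters=50):
    """Iterate a(t+1) = sigmoid(W @ a(t) + a(t)) toward a fixed point."""
    a = np.asarray(a0, dtype=float)
    for _ in range(iters):
        a = 1.0 / (1.0 + np.exp(-lam * (W @ a + a)))
    return a

# Hypothetical activities: management support, training, data migration, BPR.
M = np.array([
    [0.0, 0.6, 0.4, 0.5],   # management support drives the other three
    [0.0, 0.0, 0.3, 0.2],
    [0.0, 0.0, 0.0, 0.4],
    [0.0, 0.0, 0.0, 0.0],
])
W = M.T                      # W[i, j]: influence of concept j on concept i

readiness = fcm_steady_state(W, a0=[0.8, 0.3, 0.3, 0.2])
print(np.round(readiness, 3))  # stabilized readiness score per activity
```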
9.
In modern cloud data centers, reconfigurable devices (FPGAs) are used as an alternative to graphics processing units to accelerate data-intensive computations (e.g., machine learning, image and signal processing). Currently, FPGAs are configured to execute fixed workloads repeatedly over long periods of time. This conflicts with cloud computing's need to flexibly allocate different workloads and to offer physical devices to multiple users, and it raises the need for novel, efficient FPGA scheduling algorithms that can decide execution orders close to the optimum in a short time. In this context, we propose a novel scheduling heuristic in which groups of tasks that execute together are interposed by hardware reconfigurations. Our contribution is based on gathering tasks around a high-latency task that hides the latency of the shorter tasks running in parallel within the same group. We evaluated our solution on a benchmark of 37,500 random workloads, synthesized from realistic designs (i.e., topology, resource occupancy). On this testbench, our heuristic produces optimum-makespan solutions in 47.4% of the cases on average, and acceptable solutions for moderately constrained systems (i.e., the deadline falls within 10% of the optimum makespan) in 90.1% of the cases.
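An illustrative sketch of the grouping idea: each group is anchored on the longest remaining task, then filled with shorter tasks that fit the device's resource budget, so their latency is hidden behind the anchor's. The task data, the single resource dimension, and the fixed reconfiguration cost are simplifying assumptions, not the paper's benchmark model.

```python
# Greedy anchor-and-fill grouping of FPGA tasks between reconfigurations.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    latency: float   # execution time on the FPGA
    area: float      # fraction of device resources used

def group_schedule(tasks, capacity=1.0, reconfig_cost=5.0):
    pending = sorted(tasks, key=lambda t: t.latency, reverse=True)
    groups, makespan = [], 0.0
    while pending:
        anchor = pending.pop(0)           # longest task anchors the group
        group, used = [anchor], anchor.area
        for t in pending[:]:              # fill remaining area with shorter tasks
            if used + t.area <= capacity:
                group.append(t)
                used += t.area
                pending.remove(t)
        groups.append(group)
        makespan += anchor.latency + reconfig_cost   # shorter tasks are hidden
    return groups, makespan

tasks = [Task("fft", 40, 0.5), Task("conv", 35, 0.4), Task("aes", 10, 0.3),
         Task("sort", 8, 0.2), Task("crc", 3, 0.1)]
groups, makespan = group_schedule(tasks)
for g in groups:
    print([t.name for t in g])
print("makespan:", makespan)
```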
10.
The electrocardiogram (ECG) is the most commonly used tool for diagnosing cardiac diseases. To help cardiologists diagnose arrhythmias automatically, new methods for automated, computer-aided ECG analysis are being developed. In this paper, a Modified Artificial Bee Colony (MABC) algorithm for ECG heartbeat classification is introduced. It is applied to an ECG data set obtained from the MIT-BIH database, and the accuracy of MABC is compared with that of seventeen other classifiers.

In classification problems, some features are more discriminative than others. To find the most discriminative features, a detailed analysis of time-domain features was carried out. Using the right features, the MABC algorithm achieves a high classification success rate (99.30%). Other methods generally achieve high accuracy on the examined data set but have relatively low, or even poor, sensitivity for some beat types. Differences between data sets and unbalanced sample counts across classes affect the classification result; on a balanced data set, MABC provided the best result among all classifiers, at 97.96%.

Rather than only part of the records from the MIT-BIH database, all data from the selected records are used, so that the developed algorithm can later be deployed in a real-time system with additional software modules and adaptation to specific hardware.
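For readers unfamiliar with the underlying optimizer, the sketch below shows a bare-bones artificial bee colony (ABC) loop: employed bees perturb food sources, onlookers reinforce good sources, and scouts abandon stale ones. The authors' modifications and the ECG feature pipeline are not reproduced; the objective here is a stand-in (sphere function) rather than classification accuracy.

```python
# Minimal artificial bee colony minimizing a test objective.
import numpy as np

def abc_minimize(f, dim, n_food=20, limit=30, iters=200, bounds=(-5, 5)):
    rng = np.random.default_rng(3)
    lo, hi = bounds
    foods = rng.uniform(lo, hi, size=(n_food, dim))
    fit = np.array([f(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)

    def try_neighbor(i):
        k = rng.integers(n_food - 1)
        k += k >= i                                    # random partner != i
        j = rng.integers(dim)
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        cand = np.clip(cand, lo, hi)
        if (fc := f(cand)) < fit[i]:
            foods[i], fit[i], trials[i] = cand, fc, 0  # greedy acceptance
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                        # employed bees
            try_neighbor(i)
        probs = 1.0 / (1.0 + fit)
        probs /= probs.sum()
        for i in rng.choice(n_food, n_food, p=probs):  # onlooker bees
            try_neighbor(i)
        for i in np.where(trials > limit)[0]:          # scouts: restart stale sources
            foods[i] = rng.uniform(lo, hi, dim)
            fit[i], trials[i] = f(foods[i]), 0
    return foods[np.argmin(fit)], fit.min()

best_x, best_f = abc_minimize(lambda x: np.sum(x ** 2), dim=4)
print(best_f)  # expected: near 0
```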