Full-text availability
Paid full text | 9122 papers |
Free | 2361 papers |
Free (domestic) | 2172 papers |
Subject classification
Industrial technology | 13655 papers |
Publication year
2024 | 68 papers |
2023 | 251 papers |
2022 | 432 papers |
2021 | 494 papers |
2020 | 554 papers |
2019 | 487 papers |
2018 | 497 papers |
2017 | 539 papers |
2016 | 585 papers |
2015 | 644 papers |
2014 | 754 papers |
2013 | 726 papers |
2012 | 962 papers |
2011 | 1031 papers |
2010 | 844 papers |
2009 | 794 papers |
2008 | 846 papers |
2007 | 864 papers |
2006 | 565 papers |
2005 | 437 papers |
2004 | 330 papers |
2003 | 232 papers |
2002 | 180 papers |
2001 | 121 papers |
2000 | 90 papers |
1999 | 74 papers |
1998 | 43 papers |
1997 | 34 papers |
1996 | 32 papers |
1995 | 25 papers |
1994 | 17 papers |
1993 | 12 papers |
1992 | 6 papers |
1991 | 8 papers |
1990 | 10 papers |
1989 | 10 papers |
1988 | 5 papers |
1987 | 5 papers |
1986 | 2 papers |
1985 | 8 papers |
1984 | 9 papers |
1983 | 5 papers |
1982 | 2 papers |
1981 | 3 papers |
1980 | 3 papers |
1979 | 4 papers |
1978 | 2 papers |
1977 | 3 papers |
1976 | 2 papers |
1974 | 2 papers |
Sort order: 10000 results found, search time 156 ms
991.
Applying hierarchical grey relation clustering analysis to geographical information systems - A case study of the hospitals in Taipei City  Total citations: 1 (self-citations: 0, citations by others: 1)
Deng proposed grey clustering analysis (GCA) in 1987. Later, in 1993, Jin presented a new method called grey relational clustering (GRC), which combines grey relational analysis with clustering. However, the GRC method cannot use a tree diagram to make appropriate classification decisions without re-computation. This study therefore combines GRC with hierarchical clustering analysis. Given the excess of medical resources in the Taipei area, this study attempts to understand the degree of concentration of medical resources there. Specifically, it applies a geographical information system (GIS) to present the geographical distribution of hospitals in Taipei. Additionally, a new type of cluster analysis, known as hierarchical grey relational clustering analysis, is used to analyze the distribution of hospitals and understand how they compete with one another. The analytical results demonstrate that hierarchical grey relational clustering analysis is a suitable method for analyzing geographical position. Tree diagrams can help policymakers make appropriate classification decisions without re-computation. The study results can inform hospitals of their competitors and help them develop appropriate responses. They can also serve as a reference for government or hospital policymakers in positioning hospitals, thus achieving a better distribution of medical resources in Taipei.
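As a rough illustration of the idea, the sketch below uses grey relational grades as the similarity measure for agglomerative (single-linkage) merging. This is not the paper's implementation: the series values, the distinguishing coefficient ζ = 0.5, and the linkage choice are all illustrative assumptions.

```python
def grc_distances(series, zeta=0.5):
    """Pairwise distances 1 - grey relational grade, using global min/max deltas."""
    pairs = {}
    for i in range(len(series)):
        for j in range(i + 1, len(series)):
            pairs[(i, j)] = [abs(a - b) for a, b in zip(series[i], series[j])]
    all_d = [d for ds in pairs.values() for d in ds]
    d_min, d_max = min(all_d), max(all_d)
    dist = {}
    for key, ds in pairs.items():
        # grey relational coefficient per point, averaged into a grade
        coeffs = [(d_min + zeta * d_max) / (d + zeta * d_max) for d in ds]
        dist[key] = 1.0 - sum(coeffs) / len(coeffs)
    return dist

def hierarchical_grc(series, n_clusters=2):
    """Agglomerative merging driven by grey relational distances."""
    dist = grc_distances(series)
    clusters = [{i} for i in range(len(series))]
    while len(clusters) > n_clusters:
        # single linkage: merge the pair of clusters with the closest members
        a, b = min(((x, y) for x in range(len(clusters))
                    for y in range(x + 1, len(clusters))),
                   key=lambda p: min(dist[tuple(sorted((i, j)))]
                                     for i in clusters[p[0]]
                                     for j in clusters[p[1]]))
        clusters[a] |= clusters[b]
        del clusters[b]
    return clusters
```

Stopping the merge loop at different cluster counts reproduces the levels of the tree diagram without re-computing the grades.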
992.
User profiles are widely used in the age of big data. However, generating and releasing user profiles may cause serious privacy leakage, since large amounts of personal data are collected and analyzed. In this paper, we propose a differentially private user profile construction method, DP-UserPro, which is composed of DP-CLIQUE and private top-k tag selection. DP-CLIQUE is a differentially private high-dimensional data clustering algorithm based on CLIQUE. The multidimensional tag space is divided into cells, and Laplace noise is added to the count of each cell. Based on breadth-first search, the largest connected dense cells are grouped into a cluster. A private top-k tag selection approach is then proposed, based on a score function for each tag, to select the k most important tags that represent the characteristics of the cluster. Finally, the privacy and utility of DP-UserPro are theoretically analyzed and experimentally evaluated. Comparison experiments are carried out against the Tag Suppression algorithm on two real datasets, measuring the False Negative Rate (FNR) and precision. The results show that DP-UserPro outperforms Tag Suppression on FNR by 62.5% in the best case and 14.25% in the worst case, and achieves about 21.1% higher precision on average.
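A minimal sketch of the noisy-cell step described above, assuming cell counts are already available as a dictionary, a count sensitivity of 1, and illustrative epsilon/threshold values; the paper's actual grid partitioning and score-based tag selection are not reproduced.

```python
import math
import random
from collections import deque

def dp_dense_cells(counts, epsilon, threshold, seed=0):
    """Add Laplace(1/epsilon) noise to each cell count, keep cells whose
    noisy count passes the threshold, and group 4-adjacent dense cells
    into clusters with breadth-first search."""
    rng = random.Random(seed)

    def lap():
        # inverse-CDF sample from a Laplace(0, 1/epsilon) distribution
        u = rng.random() - 0.5
        return -(1.0 / epsilon) * math.copysign(math.log(1 - 2 * abs(u)), u)

    dense = {cell for cell, c in counts.items() if c + lap() >= threshold}
    clusters, seen = [], set()
    for start in dense:
        if start in seen:
            continue
        cluster, queue = set(), deque([start])
        seen.add(start)
        while queue:
            x, y = queue.popleft()
            cluster.add((x, y))
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb in dense and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        clusters.append(cluster)
    return clusters
```

With counts well above or below the threshold, the noise rarely flips a cell, so dense regions survive while sparse cells are suppressed.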
993.
Dynamic data mining has gained increasing attention in the last decade. It addresses changing data structures, which can be observed in many real-life applications, e.g. the buying behavior of customers. As opposed to classical, i.e. static, data mining, where the challenge is to discover patterns inherent in given data sets, in dynamic data mining the challenge is to understand, and in some cases even predict, how such patterns will change over time. Since changes in general lead to uncertainty, appropriate approaches for uncertainty modeling are needed to capture, model, and predict the respective phenomena in dynamic environments. Consequently, the combination of dynamic data mining and soft computing is a very promising research area. The proposed algorithm consists of a dynamic clustering cycle in which the data set is refreshed from time to time. Within this cycle, criteria check whether the newly arrived data have structurally changed in comparison to the data already analyzed. If so, appropriate actions are triggered, in particular an update of the initial settings of the cluster algorithm. As we show, rough clustering offers strong tools to detect such changing data structures. To evaluate the proposed dynamic rough clustering algorithm, it has been applied to synthetic as well as real-world data sets, where it provides new insights into the underlying dynamic phenomena.
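The trigger check inside such a clustering cycle might look like the following sketch. This is a naive distance-to-center test, not the paper's rough-clustering criteria; the radius and the 50% threshold are illustrative assumptions.

```python
def structure_changed(centers, new_points, radius):
    """Flag a structural change when most newly arrived points fall
    outside the radius of every existing cluster center, signalling
    that the cluster algorithm's initial settings should be updated."""
    def nearest(p):
        return min(sum((a - b) ** 2 for a, b in zip(p, c)) ** 0.5
                   for c in centers)
    outside = sum(nearest(p) > radius for p in new_points)
    return outside / len(new_points) > 0.5
```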
994.
CUI Hua, YUAN Chao, WEI Zefa, LI Pannong, SONG Xinxin, JI Yu, LIU Yunfei 《Journal of Xidian University (Natural Science Edition)》2017,44(6):79-84
Accurate recognition of the traffic condition can proactively alert drivers who are about to enter a congested road, so that congestion does not worsen. It is also the basis for scientific decisions in active traffic management, helping to alleviate congestion, improve traffic efficiency, save energy and reduce emissions. In this paper, traffic surveillance videos are sampled every three minutes to build a static image database, the road area is marked as the region of interest (ROI), and the ROI images are normalized in angle and scale. Three image features of the ROI, i.e., average gradient, corner count and long-edge count, are then extracted. Finally, the fuzzy C-means (FCM) clustering method is used to classify the traffic condition into two classes: free-flowing traffic and congestion. Experimental results show that the proposed algorithm can effectively identify the traffic condition in an image with an accuracy of 98%. Moreover, compared with video-based approaches, this method greatly reduces the implementation cost.
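The classification step can be sketched with a plain fuzzy C-means implementation. The ROI feature extraction is not reproduced here; the 2-D toy points below stand in for the three image features, and the fuzzifier m = 2 and iteration count are conventional defaults, not values from the paper.

```python
def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def fcm(points, k=2, m=2.0, iters=100):
    """Minimal fuzzy C-means: returns cluster centers and the
    membership matrix u (one row per point, one column per cluster)."""
    dim = len(points[0])
    # simple deterministic initialisation: spread the seeds over the data
    centers = [list(points[i * len(points) // k]) for i in range(k)]
    u = []
    for _ in range(iters):
        # membership update: u_ic = 1 / sum_j (d_ic / d_ij)^(2/(m-1))
        u = []
        for p in points:
            d = [max(euclid(p, c), 1e-12) for c in centers]
            u.append([1.0 / sum((d[c] / d[j]) ** (2 / (m - 1))
                                for j in range(k))
                      for c in range(k)])
        # center update: mean of the points weighted by u^m
        for c in range(k):
            w = [u[i][c] ** m for i in range(len(points))]
            tot = sum(w)
            centers[c] = [sum(w[i] * points[i][t]
                              for i in range(len(points))) / tot
                          for t in range(dim)]
    return centers, u
```

Assigning each image to the cluster with its highest membership yields the flowing/congested decision.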
995.
In vitro digestibility of phenolic compounds from edible fruits: could it be explained by chemometrics?
Francisco J. Olivas‐Aguirre Marcela Gaytán‐Martínez Sandra O. Mendoza‐Díaz Gustavo A. González‐Aguilar Joaquín Rodrigo‐García Nina del Rocío Martínez‐Ruiz Abraham Wall‐Medrano 《International Journal of Food Science & Technology》2017,52(9):2040-2048
The health benefits of phenolic compounds depend on the ingested amount, molecular diversity and gastrointestinal digestibility. The phenolic profile of eight fruits (blackberry, blueberry, strawberry, raspberry, mulberry, pomegranate, green and red globe grapes) was chemometrically associated with their in vitro digestibility (oral, gastric, intestinal). Extractable phenols, flavonoids and anthocyanins strongly correlated with each other (r ≥ 0.84), proanthocyanidins with anthocyanins (r = 0.62) and hydrolysable phenols with both extractable phenols (r = 0.45) and proanthocyanidins (r = −0.54). Two principal components explained 93% of the variance [61% (free phenols), 32% (bound phenols)], and four clusters were confirmed by hierarchical analysis, based on phenolic richness (CLT 1‐4: low to high) and molecular diversity. In vitro digestibility of extractable phenols and flavonoids followed the order blackberry (CLT‐4) > raspberry (CLT‐2) > red grape (CLT‐1), related to their phenolic richness (r ≥ 0.96; P < 0.001), whereas anthocyanin digestibility was pH-dependent. Chemometrics is useful for predicting the in vitro digestibility of phenolic compounds in the assayed fruits.
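The r values reported above are ordinary Pearson correlation coefficients, which can be computed as in this small sketch (the input lists are illustrative, not the study's measurements):

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5
```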
996.
Automated process discovery techniques aim at extracting process models from information system logs. Existing techniques in this space are effective when applied to relatively small or regular logs, but generate spaghetti-like and sometimes inaccurate models when confronted with logs of high variability. In previous work, trace clustering has been applied in an attempt to reduce the size and complexity of automatically discovered process models. The idea is to split the log into clusters and to discover one model per cluster. This leads to a collection of process models, each representing a variant of the business process, as opposed to an all-encompassing model. Still, models produced in this way may exhibit unacceptably high complexity and low fitness. In this setting, this paper presents a two-way divide-and-conquer process discovery technique, wherein the discovered process models are split on the one hand by variants and on the other hand hierarchically using subprocess extraction. Splitting is performed in a controlled manner in order to achieve user-defined complexity or fitness thresholds. Experiments on real-life logs show that the technique produces collections of models substantially smaller than those extracted by existing trace clustering techniques, while allowing the user to control the fitness of the resulting models.
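The first splitting step, one model per group of traces, can be illustrated by grouping a log into exact variants. This is a deliberate simplification: real trace clustering uses similarity measures rather than exact sequence matches.

```python
def split_by_variant(log):
    """Group traces (activity sequences) that follow the same variant,
    so that a separate process model can be discovered per group."""
    variants = {}
    for trace in log:
        variants.setdefault(tuple(trace), []).append(trace)
    return variants
```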
997.
Density-based clustering algorithms (DBCLAs) rely on the notion of density to identify clusters of arbitrary shapes and sizes with varying densities. Existing surveys on DBCLAs cover only a selected set of algorithms and fail to provide extensive information about the variety of DBCLAs proposed to date, including a taxonomy of the algorithms. In this paper we present a comprehensive survey of DBCLAs over the last two decades, along with their classification. We group the DBCLAs into four categories, namely density definition, parameter sensitivity, execution mode and nature of data, and further divide them into various classes under each of these categories. In addition, we compare the DBCLAs through their common features and variations in citation and conceptual dependencies. We identify various application areas of DBCLAs in domains such as astronomy, earth sciences, molecular biology, geography and multimedia. Our survey also identifies probable future directions of DBCLAs where the involvement of density-based methods may lead to favorable results.
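As a reference point for the surveyed family, here is a textbook DBSCAN sketch: density is defined as having at least min_pts neighbours within eps (the point itself included). The eps/min_pts values in the test data are illustrative.

```python
from collections import deque

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one label per point; -1 marks noise."""
    n = len(points)

    def neighbours(i):
        return [j for j in range(n)
                if sum((a - b) ** 2
                       for a, b in zip(points[i], points[j])) <= eps ** 2]

    labels = [None] * n
    cid = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        nbs = neighbours(i)
        if len(nbs) < min_pts:
            labels[i] = -1              # provisionally noise; may become border
            continue
        labels[i] = cid                 # i is a core point: grow a cluster
        queue = deque(nbs)
        while queue:
            j = queue.popleft()
            if labels[j] == -1:
                labels[j] = cid         # noise reclassified as border point
            if labels[j] is not None:
                continue
            labels[j] = cid
            nbs_j = neighbours(j)
            if len(nbs_j) >= min_pts:   # j is also core: keep expanding
                queue.extend(nbs_j)
        cid += 1
    return labels
```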
998.
《Journal of Process Control》2014,24(2):487-497
The Kalman filter algorithm gives an analytical expression for the point estimate of the states, namely the mean of their posterior distribution. Conventional Bayesian state estimators have been developed under the assumption that the mean of the posterior of the states is the ‘best estimate’. While this may hold true in cases where the posterior can be adequately approximated as a Gaussian distribution, in general it does not hold when the posterior is non-Gaussian. The posterior distribution, however, contains far more information about the states, regardless of its Gaussian or non-Gaussian nature. In this study, the information contained in the posterior distribution is explored and extracted to arrive at meaningful estimates of the states. This work demonstrates the need to combine Bayesian state estimation with extracting information from the full posterior distribution.
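The scalar measurement update below shows that the Kalman point estimate is exactly the posterior mean, and a discretized bimodal posterior illustrates why that mean can be misleading. All numeric values are illustrative.

```python
import math

def kalman_update(m, P, z, R):
    """Scalar Kalman measurement update for prior N(m, P) and
    measurement z with noise variance R; returns the Gaussian
    posterior's mean (the point estimate) and variance."""
    K = P / (P + R)             # Kalman gain
    m_post = m + K * (z - m)    # posterior mean
    P_post = (1 - K) * P        # posterior variance
    return m_post, P_post

def posterior_mean(grid, weights):
    """Mean of a posterior represented on a discrete grid."""
    total = sum(weights)
    return sum(x * w for x, w in zip(grid, weights)) / total

# A bimodal posterior with modes near -2 and +2: its mean lies near 0,
# a region of almost zero probability, so the mean is a poor summary.
grid = [i * 0.01 for i in range(-300, 301)]
weights = [math.exp(-((x - 2) ** 2) / 0.1) + math.exp(-((x + 2) ** 2) / 0.1)
           for x in grid]
bimodal_mean = posterior_mean(grid, weights)
```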
999.
《Measurement》2014
Owing to the scattered nature of Denial-of-Service attacks, it is tremendously challenging to detect such malicious behavior using traditional intrusion detection systems in Wireless Sensor Networks (WSNs). In the current paper, a hybrid clustering method is introduced, namely a density-based fuzzy imperialist competitive clustering algorithm (D-FICCA). Here, the imperialist competitive algorithm (ICA) is combined with a density-based algorithm and fuzzy logic for optimal clustering in WSNs. The density-based clustering algorithm helps the imperialist competitive algorithm form arbitrary cluster shapes and handle noise. The fuzzy logic controller (FLC) is incorporated into the imperialistic competition by adjusting the fuzzy rules to avoid possible errors of the worst-imperialist action selection strategy. The proposed method aims to enhance the accuracy of malicious-behavior detection. D-FICCA is evaluated on a publicly available dataset consisting of real measurements collected from sensors deployed at the Intel Berkeley Research Lab. Its performance is compared against existing empirical methods such as K-MICA, K-means, and DBSCAN. The results demonstrate that the proposed framework achieves higher detection accuracy (87%) and clustering quality (0.99) than existing approaches.
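The two evaluation metrics used here can be computed from confusion counts as in this sketch; the labels below are illustrative, and the D-FICCA algorithm itself is not reproduced.

```python
def detection_metrics(y_true, y_pred):
    """False-negative rate and precision for a binary attack detector
    (1 = attack, 0 = normal)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fnr = fn / (fn + tp) if fn + tp else 0.0          # missed attacks
    precision = tp / (tp + fp) if tp + fp else 0.0    # alarms that were real
    return fnr, precision
```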
1000.
Automatic network clustering is an important technique for mining the meaningful communities (or clusters) of a network. Communities in a network are clusters of nodes within which the connection density is high and between which the connection density is low. The most popular scheme for automatic network clustering aims at maximizing a criterion function known as modularity when partitioning the nodes into clusters. However, modularity suffers from the resolution limit problem, which remains an open challenge. In this paper, automatic network clustering is formulated as a constrained optimization problem: maximizing a criterion function subject to a density constraint. With this scheme, the resulting algorithm is free from the resolution limit problem. Furthermore, it is found that the density constraint can improve the detection accuracy of modularity optimization. The efficiency of the proposed scheme is verified by comparative experiments on large-scale benchmark networks.
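The modularity criterion being maximized can be sketched as follows; the paper's density constraint is not reproduced, and the two-triangle test graph is an illustrative example.

```python
def modularity(edges, community):
    """Newman modularity Q of a partition of an undirected graph:
    Q = sum over communities of (intra-edge fraction - expected fraction)."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    q = 0.0
    for u, v in edges:                       # observed intra-community edges
        if community[u] == community[v]:
            q += 1.0 / m
    for c in set(community.values()):        # expected under random wiring
        d = sum(deg[n] for n in community if community[n] == c)
        q -= (d / (2.0 * m)) ** 2
    return q
```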