1.
The Aditi deductive database system
Deductive databases generalize relational databases by providing support for recursive views and non-atomic data. Aditi is a deductive system based on the client-server model; it is inherently multi-user and capable of exploiting parallelism on shared-memory multiprocessors. The back-end uses relational technology for efficiency in the management of disk-based data and uses optimization algorithms especially developed for the bottom-up evaluation of logical queries involving recursion. The front-end interacts with the user in a logical language that has more expressive power than relational query languages. We present the structure of Aditi, discuss its components in some detail, and present performance figures.
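Conceptually, the bottom-up evaluation the abstract mentions computes a recursive query by repeatedly joining newly derived facts against base relations until a fixpoint is reached. Below is a minimal Python sketch of semi-naive evaluation for a transitive-closure (ancestor) query; the relations and data are illustrative and not Aditi's actual interface.

```python
# Semi-naive bottom-up evaluation of the recursive query:
#   ancestor(X, Y) :- parent(X, Y).
#   ancestor(X, Z) :- ancestor(X, Y), parent(Y, Z).
# Illustrative sketch only -- not Aditi's actual API.

parent = {("alice", "bob"), ("bob", "carol"), ("carol", "dave")}

ancestor = set(parent)   # base case
delta = set(parent)      # facts newly derived in the last iteration

while delta:
    # Semi-naive step: join only the *new* facts against parent.
    derived = {(x, z) for (x, y) in delta for (y2, z) in parent if y == y2}
    delta = derived - ancestor
    ancestor |= delta

print(sorted(ancestor))  # includes ("alice", "dave") after two join rounds
```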
2.
A key challenge in pattern recognition is how to scale the computational efficiency of clustering algorithms on large data sets. The extension of non‐Euclidean relational fuzzy c‐means (NERF) clustering to very large (VL = unloadable) relational data is called the extended NERF (eNERF) clustering algorithm, which comprises four phases: (i) finding distinguished features that monitor progressive sampling; (ii) progressively sampling from an N × N relational matrix R_N to obtain an n × n sample matrix R_n; (iii) clustering R_n with literal NERF; and (iv) extending the clusters in R_n to the remainder of the relational data. Previously published examples on several fairly small data sets suggest that eNERF is feasible for truly large data sets. However, it seems that phases (i) and (ii), i.e., finding R_n, are not very practical because the sample size n often turns out to be roughly 50% of N, and this over‐sampling defeats the whole purpose of eNERF. In this paper, we examine the performance of the sampling scheme of eNERF with respect to different parameters. We propose a modified sampling scheme for use with eNERF that combines simple random sampling with (parts of) the sampling procedures used by eNERF and a related algorithm, sVAT (scalable visual assessment of clustering tendency). We demonstrate that our modified sampling scheme can eliminate the over‐sampling of the original progressive sampling scheme, thus enabling the processing of truly VL data. Numerical experiments on a distance matrix of a set of 3,000,000 vectors drawn from a mixture of 5 bivariate normal distributions demonstrate the feasibility and effectiveness of the proposed sampling method. We also find that actually running eNERF on a data set of this size is very costly in terms of computation time. Thus, our results demonstrate that further modification of eNERF, especially the extension stage, will be needed before it is truly practical for VL data. © 2008 Wiley Periodicals, Inc.
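To make the sample-then-extend structure concrete, here is a minimal Python sketch of phases (ii) to (iv): simple random sampling of an n × n submatrix R_n, a stand-in clustering step (a greedy medoid heuristic replaces literal NERF, which is considerably more involved), and extension of the sample's cluster labels to all N objects. All sizes and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, c = 10_000, 200, 5

# Toy feature data; R_N (the full N x N dissimilarity matrix) is never built.
X = rng.normal(size=(N, 2)) + rng.integers(0, 10, size=(N, 1))

idx = rng.choice(N, size=n, replace=False)                   # simple random sample
R_n = np.linalg.norm(X[idx, None] - X[None, idx], axis=-1)   # n x n sample matrix

# Stand-in for phase (iii): greedy farthest-point medoids on R_n.
medoids = [int(np.argmin(R_n.sum(axis=1)))]    # most central sampled object
while len(medoids) < c:
    d = R_n[:, medoids].min(axis=1)            # distance to nearest medoid
    medoids.append(int(np.argmax(d)))

# Phase (iv): extend cluster labels to all N objects by nearest medoid.
centers = X[idx][medoids]
labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=-1), axis=1)
print(np.bincount(labels, minlength=c))
```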
3.
Intrusion detection faces a number of challenges; an intrusion detection system must reliably detect malicious activities in a network and must perform efficiently to cope with the large amount of network traffic. In this paper, we address these two issues of accuracy and efficiency using Conditional Random Fields and the Layered Approach. We demonstrate that high attack detection accuracy can be achieved by using Conditional Random Fields and high efficiency by implementing the Layered Approach. Experimental results on the benchmark KDD '99 intrusion data set show that our proposed system based on Layered Conditional Random Fields outperforms other well-known methods such as decision trees and naive Bayes. The improvement in attack detection accuracy is very high, particularly for the U2R attacks (34.8 percent improvement) and the R2L attacks (34.5 percent improvement). Statistical tests also demonstrate higher confidence in detection accuracy for our method. Finally, we show that our system is robust and is able to handle noisy data without compromising performance.
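A sketch of the Layered Approach described above: one binary detector per attack group, applied in sequence so that each layer only processes traffic that earlier layers passed as normal. The paper trains a conditional random field per layer; this self-contained sketch substitutes a Gaussian naive Bayes classifier, and the data, feature count, and layer order are all illustrative.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))                                  # toy features
y = rng.choice(["normal", "probe", "dos", "r2l", "u2r"], size=1000)

layers = []
remaining = np.ones(len(X), dtype=bool)
for attack in ["probe", "dos", "r2l", "u2r"]:
    # Each layer is a binary detector for one attack group,
    # trained only on records the earlier layers did not flag.
    clf = GaussianNB().fit(X[remaining], y[remaining] == attack)
    layers.append((attack, clf))
    flagged = np.zeros(len(X), dtype=bool)
    flagged[remaining] = clf.predict(X[remaining])
    remaining &= ~flagged            # flagged records skip later layers

def classify(x):
    """Pass a record through the layers; the first layer to flag it wins."""
    for attack, clf in layers:
        if clf.predict(x.reshape(1, -1))[0]:
            return attack
    return "normal"

print(classify(X[0]))
```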
4.
Patterns Based Classifiers
Data mining is one of the most important areas of the 21st century because its applications are wide-ranging; these include medicine, finance, commerce, and engineering, to name a few. Pattern mining is amongst the most important and challenging techniques employed in data mining. Patterns are collections of items that satisfy certain properties. Emerging Patterns are those whose frequencies change significantly from one dataset to another. They represent strong contrast knowledge and have been shown to be very successful for constructing accurate and robust classifiers. In this paper, we examine various kinds of patterns. We also investigate efficient pattern mining techniques and discuss how to exploit patterns to construct effective classifiers.
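As a concrete illustration of emerging patterns, the sketch below computes the growth rate of small itemsets between two toy transaction datasets and reports those whose support at least doubles; the datasets and threshold are illustrative.

```python
from itertools import combinations

# Emerging-pattern mining sketch: an itemset is "emerging" when its
# support grows sharply from dataset d1 to dataset d2.
d1 = [{"a"}, {"a", "c"}, {"b"}, {"a"}]                     # e.g. class 1
d2 = [{"a", "b"}, {"b", "c"}, {"a", "b", "c"}, {"b"}]      # e.g. class 2

def support(itemset, dataset):
    """Fraction of transactions containing the itemset."""
    return sum(itemset <= t for t in dataset) / len(dataset)

items = set().union(*d1, *d2)
for size in (1, 2):
    for itemset in map(frozenset, combinations(sorted(items), size)):
        s1, s2 = support(itemset, d1), support(itemset, d2)
        growth = s2 / s1 if s1 else float("inf")
        if s2 and growth >= 2:   # support at least doubles in d2
            print(set(itemset), f"growth {growth:.1f}")
```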
5.
6.
Debugging is crucial for producing reliable software. One of the effective bug localization techniques is spectral‐based fault localization. It tries to locate a buggy statement by applying an evaluation metric to program spectra and ranking program components on the basis of the score it computes. Here, we propose a restricted class of “hyperbolic” metrics, with a small number of numeric parameters. This class of functions is based on past theoretical and empirical results. We show that optimization methods such as genetic programming and simulated annealing can reliably discover effective metrics over a wide range of data sets of program spectra. We evaluate the performance for both real programs and model programs with single bugs, multiple bugs, “deterministic” bugs, and nondeterministic bugs and find that the proposed class of metrics performs as well as or better than the previous best‐performing metrics over a broad range of data.
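For readers unfamiliar with spectral-based fault localization, the sketch below shows the basic scoring-and-ranking step: each statement's spectrum is the tuple (ef, ep, nf, np) of how often it was executed or not executed by failing and passing tests. The well-known Ochiai metric is used as a stand-in; the paper instead optimizes a parameterized "hyperbolic" family of such metrics. The spectra are illustrative.

```python
import math

# spectra[stmt] = (ef, ep, nf, np):
#   ef/ep = failing/passing tests that executed the statement,
#   nf/np = failing/passing tests that did not.
spectra = {
    "s1": (4, 10, 0, 6),   # executed by every failing test -> suspicious
    "s2": (1, 12, 3, 4),
    "s3": (2, 2, 2, 14),
}

def ochiai(ef, ep, nf, np):
    """Ochiai suspiciousness: ef / sqrt((ef + nf) * (ef + ep))."""
    denom = math.sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

ranking = sorted(spectra, key=lambda s: ochiai(*spectra[s]), reverse=True)
for stmt in ranking:
    print(stmt, round(ochiai(*spectra[stmt]), 3))
```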
7.
The objective of this paper is to develop crash estimation models at the traffic analysis zone (TAZ) level as a function of land use characteristics. Crash data and land use data for the City of Charlotte, Mecklenburg County, North Carolina were used to illustrate the development of TAZ level crash estimation models. Negative binomial count models (with log-link) were developed as the data were observed to be over-dispersed. Demographic/socio-economic characteristics such as population, the number of household units, and employment, traffic indicators such as trip productions and attractions, and on-network characteristics such as center-lane miles by speed limit were observed to be correlated to land use characteristics and hence were not considered in the development of TAZ level crash estimation models. Urban residential commercial, rural district, and mixed use district land use variables were observed to be correlated to other land use variables and were also not considered in the development of the models. Results obtained indicate that land use characteristics such as mixed use development, urban residential, single-family residential, multi-family residential, business, and office district are strongly associated with and play a statistically significant role in estimating TAZ level crashes. The coefficient for single-family residential area was observed to be negative, indicating a decrease in the number of crashes with an increase in single-family residential area. Models were also developed to estimate these crashes by severity (injury and property damage only crashes). The outcomes can be used in safety conscious planning, land use decisions, long range transportation plans, and to proactively apply safety treatments in high risk TAZs.
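A minimal sketch of fitting such a negative binomial count model with a log link, using statsmodels on simulated data; the covariates and coefficients are illustrative and do not reflect the Charlotte dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_taz = 300
X = np.column_stack([
    rng.uniform(0, 5, n_taz),    # mixed-use development area (illustrative)
    rng.uniform(0, 5, n_taz),    # multi-family residential area
    rng.uniform(0, 5, n_taz),    # single-family residential area
])

# Simulate over-dispersed crash counts with a log-linear mean;
# a negative single-family coefficient mirrors the abstract's finding.
mu = np.exp(0.5 + 0.3 * X[:, 0] + 0.2 * X[:, 1] - 0.25 * X[:, 2])
y = rng.negative_binomial(n=2, p=2 / (2 + mu))

# Negative binomial GLM; the default link for this family is log.
model = sm.GLM(y, sm.add_constant(X),
               family=sm.families.NegativeBinomial(alpha=0.5))
print(model.fit().summary())
```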
8.
9.
Debugging is crucial for producing reliable software. One of the effective bug localization techniques is spectral‐based fault localization (SBFL). It helps to locate a buggy statement by applying an evaluation metric to program spectra and ranking program components on the basis of the score it computes. SBFL is an example of a dynamic analysis – an analysis of a computer program that is performed by executing it with a sufficient number of test cases. Static analysis, on the other hand, is performed in a non‐runtime environment. We introduce a weighting technique that combines these two kinds of program analysis. Static analysis is performed to categorize program statements into different classes and give them weights based on the likelihood of their being buggy. Statements are finally ranked on the basis of the weights computed by statement categorization (static analysis) and the scores computed by SBFL metrics (dynamic analysis). We evaluate the performance of our technique on the Siemens test suite and on Flex (with bugs seeded by expert developers), Sed (with a mixture of real and seeded bugs), and Space (with real bugs). In our evaluation, the proposed weighting technique improves the performance of a wide variety of fault localization metrics by up to 20% on single-bug datasets and up to 42% on multi‐bug datasets. Copyright © 2017 John Wiley & Sons, Ltd.
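A minimal sketch of the combined ranking idea: each statement's dynamic SBFL score is scaled by a static weight assigned to its statement class, and statements are ranked by the product. The classes, weights, and scores below are illustrative, not the paper's actual values.

```python
# Static weights: how bug-prone each statement class is assumed to be
# (illustrative values, not the paper's categorization).
static_weight = {"assignment": 1.0, "branch_condition": 1.3,
                 "return": 0.8, "declaration": 0.5}

statements = [
    # (statement id, static class, SBFL score from dynamic analysis)
    ("s1", "branch_condition", 0.54),
    ("s2", "assignment", 0.61),
    ("s3", "declaration", 0.58),
]

def combined_score(stmt):
    """Scale the dynamic SBFL score by the statement's static weight."""
    _, cls, sbfl = stmt
    return static_weight[cls] * sbfl

for sid, cls, sbfl in sorted(statements, key=combined_score, reverse=True):
    print(sid, cls, round(static_weight[cls] * sbfl, 3))
```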
10.
This paper proposes a novel approach to safeguarding location privacy for GNN (group nearest neighbor) queries. Given the locations of a group of dispersed users, the GNN query...
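For context, a GNN query returns the data point minimizing an aggregate distance to all group members. A brute-force sketch (ignoring the privacy machinery the paper is actually about, and using a linear scan instead of a spatial index; all locations are illustrative):

```python
import math

group = [(2.0, 3.0), (5.0, 1.0), (4.0, 6.0)]          # dispersed user locations
candidates = {"cafe": (3.0, 3.0), "park": (6.0, 5.0), "mall": (1.0, 1.0)}

def total_distance(point):
    """Aggregate (summed) Euclidean distance from a point to the group."""
    return sum(math.dist(point, user) for user in group)

# The GNN result is the candidate with the smallest aggregate distance.
gnn = min(candidates, key=lambda name: total_distance(candidates[name]))
print(gnn, round(total_distance(candidates[gnn]), 2))
```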