Similar Documents
20 similar documents found (search time: 31 ms)
1.
Though the k-nearest neighbor (k-NN) pattern classifier is an effective learning algorithm, it can result in large model sizes. To compensate, a number of variant algorithms have been developed that condense the model size of the k-NN classifier at the expense of accuracy. To increase the accuracy of these condensed models, we present a direct boosting algorithm for the k-NN classifier that creates an ensemble of models with locally modified distance weighting. An empirical study conducted on 10 standard databases from the UCI repository shows that this new Boosted k-NN algorithm achieves higher generalization accuracy on the majority of the datasets and never performs worse than standard k-NN.
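For reference, the sketch below shows a plain distance-weighted k-NN classifier of the kind such an ensemble builds on. The per-dimension weight vector dim_weights is a hypothetical stand-in for the locally modified distance weighting mentioned in the abstract, not the paper's actual weighting scheme.

import numpy as np

def knn_predict(X_train, y_train, x_query, k=5, dim_weights=None):
    """Distance-weighted k-NN vote for a single query point.

    dim_weights is a hypothetical per-dimension weight vector standing in for
    the locally modified distance weighting the abstract refers to; with
    all-ones weights this reduces to plain Euclidean k-NN.
    """
    if dim_weights is None:
        dim_weights = np.ones(X_train.shape[1])
    # Weighted Euclidean distance from the query to every training point.
    diffs = (X_train - x_query) * dim_weights
    dists = np.sqrt((diffs ** 2).sum(axis=1))
    # The labels of the k closest points decide the class by majority vote.
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]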

2.
Recent development of wireless communication technologies and the popularity of smart phones are making location-based services (LBS) popular. However, sending queries to LBS servers with users' exact locations may threaten the privacy of users. Therefore, there has been much research on generating a cloaked query region for user privacy protection, and consequently an efficient query processing algorithm for such query regions is required. In this paper, we propose k-nearest neighbor (k-NN) query processing algorithms for a query region in road networks. To efficiently retrieve k-NN points of interest (POIs), we make use of the Island index. We also propose a method that generates an adaptive Island index to improve query processing performance and storage usage. Finally, we show by our performance analysis that our k-NN query processing algorithms outperform the existing k-Range Nearest Neighbor (kRNN) algorithm in terms of network expansion cost and query processing time.
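To make the "network expansion cost" being compared against concrete, here is a minimal sketch of a baseline network-expansion k-NN search: a plain Dijkstra expansion from the query node that stops once k POIs are reached. It is not the Island-index method itself, and the adjacency-list graph and POI-set representation are assumptions.

import heapq

def network_knn(graph, pois, source, k):
    """Baseline network-expansion k-NN on a road network.

    graph: {node: [(neighbor, edge_length), ...]}, pois: set of nodes that
    hold points of interest, source: query node. Returns up to k (poi, dist)
    pairs ordered by network distance from the source.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    found = []
    while heap and len(found) < k:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        if node in pois:
            found.append((node, d))
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return found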

3.
Various techniques have been developed for different query types in content-based image retrieval systems such as sampling queries, constrained sampling queries, multiple constrained sampling queries, k-NN queries, constrained k-NN queries, and multiple localized k-NN queries. In this paper, we propose a generalized query model suitable for expressing queries of different types, and investigate efficient processing techniques for this new framework. We exploit sequential access and data sharing by developing new storage and query processing techniques to leverage inter-query concurrency. Our experimental results, based on the Corel dataset, indicate that the proposed optimization can significantly reduce average response time in a multiuser environment, and achieve better retrieval precision and recall compared to two recent techniques.

4.
This paper addresses the problem of reinforcing the ability of the k-NN classification of handwritten characters via distortion-tolerant template matching techniques with a limited quantity of data. We compare three kinds of matching techniques: the conventional simple correlation, the tangent distance, and the global affine transformation (GAT) correlation. Although the k-NN classification method is straightforward and powerful, it is computationally expensive. Therefore, to reduce the computational cost of matching in k-NN classification, we propose accelerating the GAT correlation method by reformulating its computational model and adopting efficient lookup tables. Recognition experiments performed on the IPTP CDROM1B handwritten numeral database show that the simple correlation, the tangent distance, and the accelerated GAT correlation achieved recognition rates of 97.07%, 97.50%, and 98.70%, respectively. The computation time ratios of the tangent distance and the accelerated GAT correlation to the simple correlation are 26.3 and 36.5 to 1.0, respectively.

5.
The k-nearest-neighbor (k-NN) query is one of the most popular spatial query types for location-based services (LBS). In this paper, we focus on k-NN queries in time-dependent road networks, where the travel time between two locations may vary significantly at different times of the day. In practice, it is costly for an LBS provider to collect real-time traffic data from vehicles or roadside sensors to compute the best route, in terms of travel time, from a user to a spatial object of interest. Thus, we design SMashQ, a server-side spatial mashup framework that enables a database server to efficiently evaluate k-NN queries using the route information and travel time accessed from an external Web mapping service, e.g., Microsoft Bing Maps. Due to the expensive cost and limitations of retrieving such external information, we propose three shared execution optimizations for SMashQ, namely object grouping, direction sharing, and user grouping, to reduce the number of external Web mapping requests and provide highly accurate query answers. We evaluate SMashQ using Microsoft Bing Maps, a real road network, real data sets, and a synthetic data set. Experimental results show that SMashQ is efficient and capable of producing highly accurate query answers.

6.
Little work has been reported in the literature to support k-nearest neighbor (k-NN) searches/queries in hybrid data spaces (HDS). An HDS is composed of a combination of continuous and non-ordered discrete dimensions. This combination presents new challenges in data organization and search ordering. In this paper, we present an algorithm for k-NN searches using a multidimensional index structure in hybrid data spaces. We examine the concept of search stages and use the properties of an HDS to derive a new search heuristic that greatly reduces the number of disk accesses in the initial stage of searching. Further, we present a performance model for our algorithm that estimates the cost of performing such searches. Our experimental results demonstrate the effectiveness of our algorithm and the accuracy of our performance estimation model.

7.
The statistical properties of the k-NN estimators are investigated in a design-based framework, avoiding any assumption about the population under study. The issue of coupling remotely sensed digital imagery with data arising from forest inventories conducted using probabilistic sampling schemes is considered. General results are obtained for the k-NN estimator at the pixel level. When averages (or totals) of forest attributes for the whole study area or sub-areas are of interest, the use of the empirical difference estimator is proposed. The estimator is shown to be approximately unbiased with a variance admitting unbiased or conservative estimators. The performance of the empirical difference estimator is evaluated by an extensive simulation study performed on several populations whose dimensions and covariate values are taken from a real case study. Samples are selected from the populations by means of simple random sampling without replacement. Comparisons with the generalized regression estimator and Horvitz-Thompson estimators are also performed. An application to a local forest inventory on a test area of central Italy is considered.
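For concreteness, a difference estimator of the kind the abstract refers to can be sketched as below; the notation (population U of N units, sample s of n units drawn by simple random sampling without replacement, k-NN prediction ŷ) is standard design-based notation assumed here and is not necessarily the paper's exact formulation.

% Sketch of a difference estimator of the population total of attribute y under
% SRSWOR, with \hat{y}_j the k-NN prediction for unit j (notation assumed):
\hat{T}_y \;=\; \sum_{j \in U} \hat{y}_j \;+\; \frac{N}{n} \sum_{i \in s} \left( y_i - \hat{y}_i \right)

The first term sums the model (k-NN) predictions over the whole population, and the second corrects it using the observed prediction errors on the sample, expanded by the inverse sampling fraction.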

8.
A spatial k-NN query returns k nearest points in a point dataset to a given query point. To measure the distance between two points, most of the literature focuses on the Euclidean distance or the network distance. For many applications, such as wildlife movement, it is necessary to consider the surface distance, which is computed from the shortest path along a terrain surface. In this paper, we investigate the problem of efficient surface k-NN (sk-NN) query processing. This is an important yet highly challenging problem because the underlying environment data can be very large and the computational cost of finding the shortest path on a surface can be very high. To minimize the amount of surface data to be used and the cost of surface distance computation, a multi-resolution surface distance model is proposed in this paper to take advantage of monotonic distance changes when the distances are computed at different resolution levels. Based on this innovative model, sk-NN queries can be processed efficiently by accessing and processing surface data at a just-enough resolution level within a just-enough search region. Our extensive performance evaluations using real world datasets confirm the efficiency of our proposed model.

9.
Searching in a dataset for elements that are similar to a given query element is a core problem in applications that manage complex data, and has been aided by metric access methods (MAMs). A growing number of applications require indices that must be built faster and repeatedly, also providing faster response for similarity queries. The increase in the main memory capacity and its lowering costs also motivate using memory-based MAMs. In this paper, we propose the Onion-tree, a new and robust dynamic memory-based MAM that slices the metric space into disjoint subspaces to provide quick indexing of complex data. It introduces three major characteristics: (i) a partitioning method that controls the number of disjoint subspaces generated at each node; (ii) a replacement technique that can change the leaf node pivots in insertion operations; and (iii) range and k-NN extended query algorithms to support the new partitioning method, including a new visit order of the subspaces in k-NN queries. Performance tests with both real-world and synthetic datasets showed that the Onion-tree is very compact. Comparisons of the Onion-tree with the MM-tree and a memory-based version of the Slim-tree showed that the Onion-tree was always faster to build the index. The experiments also showed that the Onion-tree significantly improved range and k-NN query processing performance and was the most efficient MAM, followed by the MM-tree, which in turn outperformed the Slim-tree in almost all the tests.

10.
11.
k-nearest neighbor (k-NN) classification is a well-known decision rule that is widely used in pattern classification. However, the traditional implementation of this method is computationally expensive. In this paper we develop two effective techniques, namely template condensing and preprocessing, to significantly speed up k-NN classification while maintaining the level of accuracy. Our template condensing technique aims at "sparsifying" dense homogeneous clusters of prototypes of any single class. This is implemented by iteratively eliminating patterns which exhibit high attractive capacities. Our preprocessing technique filters out a large portion of prototypes which are unlikely to match the unknown pattern. This again accelerates the classification procedure considerably, especially in cases where the dimensionality of the feature space is high. One of our case studies shows that incorporating these two techniques into the k-NN rule achieves a seven-fold speed-up without sacrificing accuracy.

12.
Intrusion detection is a necessary step to identify unusual access or attacks to secure internal networks. In general, intrusion detection can be approached by machine learning techniques. In the literature, advanced techniques based on hybrid learning or ensemble methods have been considered, and related work has shown that they are superior to models using a single machine learning technique. This paper proposes a hybrid learning model based on triangle area based nearest neighbors (TANN) in order to detect attacks more effectively. In TANN, k-means clustering is first used to obtain cluster centers corresponding to the attack classes. Then, for each data point in the given dataset, the triangle area formed by the point and each pair of cluster centers is calculated, and these areas form a new feature signature of the data point. Finally, the k-NN classifier is used to classify similar attacks based on the new features represented by triangle areas. Using KDD-Cup '99 as the simulation dataset, the experimental results show that TANN can effectively detect intrusion attacks and provides higher accuracy and detection rates, and a lower false alarm rate, than three baseline models based on support vector machines, k-NN, and a hybrid centroid-based classification model combining k-means and k-NN.
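A minimal sketch of the TANN pipeline as described in the abstract follows: k-means centers (one per class), triangle-area features for every pair of centers, then k-NN on the transformed features. Function names and the use of scikit-learn are illustrative assumptions, not the authors' implementation.

import numpy as np
from itertools import combinations
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def triangle_area(p, c1, c2):
    """Area of the triangle (p, c1, c2) in any dimension, via the Gram determinant."""
    u, v = c1 - p, c2 - p
    g = np.dot(u, u) * np.dot(v, v) - np.dot(u, v) ** 2
    return 0.5 * np.sqrt(max(g, 0.0))

def tann_features(X, centers):
    """Map each row of X to the vector of triangle areas formed with every pair of centers."""
    pairs = list(combinations(range(len(centers)), 2))
    feats = np.empty((len(X), len(pairs)))
    for i, x in enumerate(X):
        feats[i] = [triangle_area(x, centers[a], centers[b]) for a, b in pairs]
    return feats

def tann_fit_predict(X_train, y_train, X_test, k=3):
    """One k-means center per class, then k-NN on the triangle-area features."""
    n_classes = len(np.unique(y_train))
    centers = KMeans(n_clusters=n_classes, n_init=10).fit(X_train).cluster_centers_
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(tann_features(X_train, centers), y_train)
    return clf.predict(tann_features(X_test, centers))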

13.
The k-Nearest Neighbor (k-NN) technique has become extremely popular for a variety of forest inventory mapping and estimation applications. Much of this popularity may be attributed to the non-parametric, multivariate features of the technique, its intuitiveness, and its ease of use. When used with satellite imagery and forest inventory plot data, the technique has been shown to produce useful estimates of many forest attributes including forest/non-forest, volume, and basal area. However, variance estimators for quantifying the uncertainty of means or sums of k-NN pixel-level predictions for areas of interest (AOI) consisting of multiple pixels have not been reported. The primary objectives of the study were to derive variance estimators for AOI estimates obtained from k-NN predictions and to compare precision estimates resulting from different approaches to k-NN prediction and different interpretations of those predictions. The approaches were illustrated by estimating proportion forest area, tree volume per unit area, tree basal area per unit area, and tree density per unit area for 10-km AOIs. Estimates obtained using k-NN approaches and traditional inventory approaches were compared and found to be similar. Further, variance estimates based on different interpretations of k-NN predictions were similar. The results facilitate small area estimation and simultaneous and consistent mapping and estimation of multiple forest attributes.
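The mapping-and-aggregation step the abstract builds on can be sketched as below: predict each AOI pixel as the mean attribute of its k nearest inventory plots in feature space, then average over the AOI. This is only the point-estimation step under assumed array inputs; the variance estimators derived in the paper are not shown.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_aoi_mean(plot_features, plot_attribute, aoi_pixel_features, k=5):
    """Mean per-pixel k-NN prediction of a forest attribute over an AOI.

    plot_features: spectral/auxiliary features at inventory plot locations,
    plot_attribute: the measured attribute (e.g., volume) at those plots,
    aoi_pixel_features: features of all pixels inside the area of interest.
    """
    nn = NearestNeighbors(n_neighbors=k).fit(plot_features)
    _, idx = nn.kneighbors(aoi_pixel_features)
    pixel_pred = plot_attribute[idx].mean(axis=1)  # unweighted mean of the k reference plots
    return pixel_pred.mean()                        # AOI-level estimate = mean over pixels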

14.
k-nearest neighbor (k-NN) queries are well-known and widely used in a plethora of applications. However, in the original definition of k-NN queries there is no concern regarding diversity of the answer set with respect to the user’s interests. For instance, travelers may be looking for touristic sites that are close to where they are, but that would also lead them to see different parts of the city. Likewise, if one is looking for restaurants close by, it may be more interesting to learn about restaurants of different categories or ethnicities which are nonetheless relatively close. The interesting novel aspect of this type of query is that there are two competing criteria to be optimized: closeness and diversity. We propose two approaches that leverage the notion of linear skyline queries in order to find the k diverse nearest neighbors within a radius r from a given query point, or (k, r)-DNNs for short. Our proposed approaches return a relatively small set containing all optimal solutions for any linear combination of the weights a user could give to the two competing criteria, and we consider three different notions of diversity: spatial, categorical and angular. Our experiments, varying a number of parameters and exploring synthetic and real datasets, in both Euclidean space and road networks, respectively, show that our approaches are several orders of magnitude faster than a straightforward approach.

15.
High-dimensional problems arising from robot motion planning, biology, data mining, and geographic information systems often require the computation of k nearest neighbor (knn) graphs. The knn graph of a data set is obtained by connecting each point to its k closest points. As the research in the above-mentioned fields progressively addresses problems of unprecedented complexity, the demand for computing knn graphs based on arbitrary distance metrics and large high-dimensional data sets increases, exceeding resources available to a single machine. In this work we efficiently distribute the computation of knn graphs for clusters of processors with message passing. Extensions to our distributed framework include the computation of graphs based on other proximity queries, such as approximate knn or range queries. Our experiments show nearly linear speedup with over 100 processors and indicate that similar speedup can be obtained with several hundred processors.
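The basic partitioning idea can be illustrated with a small shared-memory stand-in: split the rows across workers and let each worker compute the neighbors of its own partition against the full dataset. The original work uses message passing across a processor cluster; the multiprocessing-based sketch below is an assumption made only to show the decomposition.

import numpy as np
from multiprocessing import Pool

def _block_knn(args):
    """Each worker finds the k nearest neighbors for its block of query rows."""
    block, data, k = args
    d = np.linalg.norm(block[:, None, :] - data[None, :, :], axis=2)
    # Column 0 is normally the point itself (distance 0), so keep columns 1..k.
    return np.argsort(d, axis=1)[:, 1:k + 1]

def knn_graph(data, k=10, workers=4):
    """Brute-force k-NN graph computed block-by-block in parallel."""
    blocks = np.array_split(data, workers)
    with Pool(workers) as pool:
        parts = pool.map(_block_knn, [(b, data, k) for b in blocks])
    return np.vstack(parts)  # row i lists the indices of the k neighbors of point i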

16.
The nearest neighbor classification method assigns an unclassified point to the class of the nearest case in a set of previously classified points. This rule is independent of the underlying joint distribution of the sample points and their classifications. An extension of this approach is the k-NN method, in which the classification of the unclassified point is made by a voting criterion among the k nearest points. The method we present here extends the k-NN idea: it searches in each class for the k points nearest to the unclassified point, and classifies it into the class which minimizes the mean distance between the unclassified point and the k nearest points within that class. As all classes can take part in the final selection process, we have called the new approach k Nearest Neighbor Equality (k-NNE). Experimental results show the suitability of the k-NNE algorithm, and its effectiveness suggests that it could be added to the current list of distance-based classifiers.
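The rule described above translates directly into a few lines of code. The sketch below follows the abstract's description (k nearest points per class, smallest mean distance wins); the NumPy-based formulation is an illustrative assumption.

import numpy as np

def knne_predict(X_train, y_train, x_query, k=3):
    """k Nearest Neighbor Equality: per class, find the k training points nearest
    to the query and assign the class whose k nearest points have the smallest
    mean distance to the query."""
    best_class, best_mean = None, np.inf
    for cls in np.unique(y_train):
        pts = X_train[y_train == cls]
        d = np.linalg.norm(pts - x_query, axis=1)
        kk = min(k, len(d))               # a class may hold fewer than k points
        mean_d = np.sort(d)[:kk].mean()
        if mean_d < best_mean:
            best_class, best_mean = cls, mean_d
    return best_class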

17.
Finding the nearest k objects to a query object is a fundamental operation for many data mining algorithms. With the recent interest in privacy, it is not surprising that there is strong interest in k-NN queries to enable clustering, classification and outlier-detection tasks. However, previous approaches to privacy-preserving k-NN have been costly and can only realistically be applied to small data sets. In this paper, we provide efficient solutions for k-NN queries for vertically partitioned data. We provide the first solution for the L∞ (or Chessboard) metric as well as detailed privacy-preserving computation of all other Minkowski metrics. We enable privacy-preserving L∞ by providing a practical approach to Yao's millionaires problem with more than two parties, based on a pragmatic and implementable solution to Yao's millionaires problem with shares. We also provide privacy-preserving algorithms for combining local metrics into a global metric that handles the large dimensionality and diversity of attributes common in vertically partitioned data. To manage very large data sets, we provide a privacy-preserving SASH (a very successful data structure for associative queries in high dimensions). Besides providing a theoretical analysis, we illustrate the efficiency of our approach with an empirical evaluation.
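For readers unfamiliar with the metrics involved, the plaintext (non-private) versions of the distances the protocols compute are shown below; the cryptographic protocol itself is not sketched here.

import numpy as np

def minkowski(u, v, p):
    """Minkowski L_p distance; p = 1 is Manhattan, p = 2 is Euclidean."""
    return np.sum(np.abs(u - v) ** p) ** (1.0 / p)

def chebyshev(u, v):
    """L-infinity ("Chessboard") distance: the limit of L_p as p grows."""
    return np.max(np.abs(u - v))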

18.
Text categorization refers to the task of assigning pre-defined classes to text documents based on their content. The k-NN algorithm is one of the top-performing classifiers on text data. However, there is little research on the use of different voting methods over text data. Also, when a huge amount of training data is available online, response speed slows down, since a test document has to compute its distance to every training example. On the other hand, min–max-modular k-NN (M3-k-NN) has been applied to large-scale text categorization. M3-k-NN achieves good performance and has a faster response speed in a parallel computing environment. In this paper, we investigate five different voting methods for k-NN and M3-k-NN. The experimental results and analysis show that the Gaussian voting method achieves the best performance among all voting methods for both k-NN and M3-k-NN. In addition, M3-k-NN uses a smaller k-value to achieve better performance than k-NN, and is thus faster than k-NN in a parallel computing environment. The work of K. Wu and B. L. Lu was supported in part by the National Natural Science Foundation of China under grants NSFC 60375022 and NSFC 60473040, and by the Microsoft Laboratory for Intelligent Computing and Intelligent Systems of Shanghai Jiao Tong University.
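A Gaussian voting scheme for k-NN can be sketched as follows: each of the k nearest neighbors casts a vote weighted by a Gaussian kernel of its distance to the query. The exact kernel and bandwidth used in the paper are assumptions here.

import numpy as np

def gaussian_vote_knn(X_train, y_train, x_query, k=15, sigma=1.0):
    """k-NN with Gaussian-weighted voting: neighbor i contributes a vote of
    weight exp(-d_i^2 / (2 * sigma^2)) to its class."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(d)[:k]
    weights = np.exp(-d[nearest] ** 2 / (2 * sigma ** 2))
    votes = {}
    for lbl, w in zip(y_train[nearest], weights):
        votes[lbl] = votes.get(lbl, 0.0) + w
    return max(votes, key=votes.get)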

19.
Instance-based learning (IBL), also called memory-based reasoning (MBR), is a commonly used non-parametric learning algorithm, and k-nearest neighbor (k-NN) learning is its most popular realization. Due to its usability and adaptability, k-NN has been successfully applied to a wide range of applications. However, in practice, one has to set important model parameters only empirically: the number of neighbors (k) and the weights given to those neighbors. In this paper, we propose structured ways to set these parameters based on locally linear reconstruction (LLR). We then employ sequential minimal optimization (SMO) to solve the quadratic programming step involved in LLR for classification, which reduces the computational complexity. Experimental results from 11 classification and eight regression tasks were promising enough to merit further investigation: not only did LLR outperform conventional weight allocation methods without much additional computational cost, but LLR was also found to be robust to the choice of k.
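The core idea of locally linear reconstruction, reconstructing the query as a weighted combination of its neighbors, can be sketched with the closed-form solution familiar from LLE; the paper instead solves a constrained quadratic program with SMO, so the following is only a simplified stand-in under that assumption.

import numpy as np

def llr_weights(x_query, neighbors, reg=1e-3):
    """Locally linear reconstruction weights for a query point.

    Solves min_w ||x - sum_i w_i n_i||^2 subject to sum_i w_i = 1 in closed
    form, where the rows of `neighbors` are the query's k nearest neighbors.
    """
    Z = neighbors - x_query                            # shift so the query is the origin
    G = Z @ Z.T                                        # local Gram matrix
    G += reg * (np.trace(G) + 1e-12) * np.eye(len(G))  # regularize for numerical stability
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                                 # enforce the sum-to-one constraint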

20.
Nearest neighbor (NN) classification assumes locally constant class conditional probabilities, and suffers from bias in high dimensions with a small sample set. In this paper, we propose a novel cam weighted distance to ameliorate the curse of dimensionality. Unlike existing neighborhood-based methods, which only analyze a small space emanating from the query sample, the proposed nearest neighbor classification using the cam weighted distance (CamNN) optimizes the distance measure based on an analysis of inter-prototype relationships. Our motivation comes from the observation that the prototypes are not isolated; prototypes with different surroundings should have different effects in the classification. The proposed cam weighted distance is orientation and scale adaptive to take advantage of the relevant information in inter-prototype relationships, so that better classification performance can be achieved. Experiments show that CamNN significantly outperforms one nearest neighbor classification (1-NN) and k-nearest neighbor classification (k-NN) in most benchmarks, while its computational complexity is comparable with that of 1-NN classification.
