Similar Documents
20 similar documents found (search time: 15 ms)
1.
Peer-to-peer systems have been widely used for sharing and exchanging data and resources among numerous computer nodes. Various data objects identifiable by high dimensional feature vectors, such as text, images, and genome sequences, are starting to leverage P2P technology. Most existing work has focused on queries over data objects with one or a few attributes and is therefore not applicable to high dimensional data objects. In this study, we investigate the K-nearest-neighbors (KNN) query on high dimensional data objects in P2P systems. We propose an efficient query algorithm and solutions that address the technical challenges raised by high dimensionality, such as search space resolution and incremental search space refinement. An extensive simulation using both synthetic and real data sets demonstrates that our proposal efficiently supports KNN queries on high dimensional data in P2P systems.
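The KNN query at the heart of this work can be illustrated with a minimal, single-node sketch; the distributed search-space resolution and refinement described in the abstract are not reproduced, and the names `knn_query` and `dataset` are illustrative assumptions:

```python
# Hypothetical sketch of a brute-force KNN query, as one peer might run it
# over its local share of high dimensional feature vectors.
import heapq
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_query(query, dataset, k):
    """Return the k points of `dataset` closest to `query`."""
    return heapq.nsmallest(k, dataset, key=lambda p: euclidean(query, p))

dataset = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0), (0.5, 0.2)]
nearest = knn_query((0.0, 0.0), dataset, 2)
```

A real P2P deployment would, per the abstract, prune and refine the search space across peers instead of scanning one local list.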

3.
Research and Progress of High Dimensional Data Mining Algorithms
The rapid development of bioinformatics and e-commerce applications has accumulated large volumes of high dimensional data, making the mining of such data increasingly important. General data mining methods run into the curse of dimensionality when processing high dimensional data, and traditional similarity measures become meaningless in high dimensional space. This paper surveys the state of the art in high dimensional data mining algorithms from three perspectives, frequent itemset mining, clustering, and classification, and examines how these algorithms address the problems inherent in mining high dimensional data.
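The claim that traditional similarity measures lose meaning in high dimensions can be checked with a small experiment; `relative_contrast` is an illustrative diagnostic, not an algorithm from the surveyed work:

```python
# Hypothetical demo: the relative contrast (max - min) / min of distances
# from the origin to random points shrinks as dimensionality grows, which
# is one face of the curse of dimensionality mentioned above.
import math
import random

def relative_contrast(dim, n=200, seed=0):
    """Relative spread of distances to n random points in [0, 1]^dim."""
    rng = random.Random(seed)
    dists = [math.sqrt(sum(rng.random() ** 2 for _ in range(dim)))
             for _ in range(n)]
    return (max(dists) - min(dists)) / min(dists)
```

With a fixed seed, the contrast in 2 dimensions is far larger than in 1000 dimensions, so nearest and farthest neighbors become nearly indistinguishable.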

4.
Anomaly detection on high dimensional networks carrying large traffic volumes requires dimensionality reduction to relieve the pressure on transmission and storage. This paper introduces the process of network traffic anomaly detection in high-speed network environments and the modes of dimensionality reduction, and reviews commonly used traffic features and the latest progress in dimensionality reduction techniques. For the two reduction approaches, traffic feature selection and traffic feature extraction, existing algorithms are categorized and their principles, advantages, and drawbacks described. In addition, the data sets and evaluation metrics commonly used for dimensionality reduction are given, the challenges facing research on dimensionality reduction in network traffic anomaly detection are analyzed, and future research directions are discussed.
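As a minimal illustration of the filter-style feature selection this survey covers, a variance ranking over traffic feature columns; the function names and toy flow records are assumptions, not from the paper:

```python
# Hypothetical sketch: keep the k feature columns with the largest
# variance, a simple filter-style feature-selection baseline.
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def select_top_k_by_variance(rows, k):
    """Return the (sorted) indices of the k highest-variance columns."""
    n_features = len(rows[0])
    cols = [[row[j] for row in rows] for j in range(n_features)]
    ranked = sorted(range(n_features),
                    key=lambda j: variance(cols[j]), reverse=True)
    return sorted(ranked[:k])

flows = [[1.0, 0.0, 10.0],   # toy flow records: constant, small, large spread
         [1.0, 0.5, 20.0],
         [1.0, 1.0, 30.0]]
kept = select_top_k_by_variance(flows, 2)
```

The constant column 0 carries no information and is dropped; feature extraction methods (the survey's second family) would instead build new combined features.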

5.
As is well known, a well-designed graph tends to yield good performance in graph-based semi-supervised learning. Although most graph-based semi-supervised dimensionality reduction approaches perform very well on clean data sets, they usually cannot construct a faithful graph, which plays an important role in achieving good performance, when applied to high dimensional, sparse, or noisy data, generally leading to a dramatic performance degradation. To deal with these issues, this paper proposes a feasible strategy called relative semi-supervised dimensionality reduction (RSSDR), which applies perceptual relativity to semi-supervised dimensionality reduction. In RSSDR, a relative transformation is first performed over the training samples to build the relative space; the relative transformation improves the distinguishing ability among data points and diminishes the impact of noise on semi-supervised dimensionality reduction. Second, the edge weights of the neighborhood graph are determined by minimizing the local reconstruction error in the relative space, so that both the global and local geometric structure of the data are preserved. Extensive experiments on face, UCI, gene expression, artificial, and noisy data sets validate the feasibility and effectiveness of the proposed algorithm, with promising results in both classification accuracy and robustness.
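One plausible reading of the relative transformation is to re-represent each sample by its distances to all training samples; this is a minimal sketch under that assumption, not the paper's exact formulation:

```python
# Hypothetical sketch of a relative transformation: each point becomes the
# vector of its distances to every training point, so coordinates are
# expressed relative to the data rather than to absolute axes.
import math

def relative_transform(data):
    """Map each point to its distance vector over the whole data set."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [[dist(p, q) for q in data] for p in data]

rel = relative_transform([(0.0, 0.0), (3.0, 4.0)])
```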

6.
Effective distance functions in high dimensional data space are very important in solutions for many data mining problems. Recent research has shown that if the Pearson variation of the distance distribution converges to zero with increasing dimensionality, the distance function becomes unstable (or meaningless) in high dimensional space, even with the commonly used $L_p$ metric in Euclidean space. This result has spawned many studies along the same lines. However, the necessary condition for instability of a distance function, which is required for function design, remains unknown. In this paper, we prove that several important conditions are in fact equivalent to instability. Based on these theoretical results, we employ effective and valid indices for testing the stability of a distance function. In addition, this theoretical analysis suggests that unstable phenomena are rooted in the variation of the distance distribution. To demonstrate the theoretical results, we design a meaningful distance function, called the Shrinkage-Divergence Proximity (SDP), based on a given distance function. It is shown empirically that SDP significantly outperforms other measures in terms of stability in high dimensional data space, and is thus more suitable for distance-based clustering applications.
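The Pearson variation of the distance distribution (standard deviation over mean) can be estimated directly; the sampling setup below is an illustrative assumption, chosen only to show the convergence toward zero that the abstract ties to instability:

```python
# Hypothetical demo: estimate the Pearson variation (std / mean) of
# distances from the origin to random points in [0, 1]^dim; it shrinks
# toward 0 as dimensionality grows, the instability indicator above.
import math
import random

def pearson_variation(dim, n=200, seed=1):
    """Coefficient of variation of the sampled distance distribution."""
    rng = random.Random(seed)
    dists = [math.sqrt(sum(rng.random() ** 2 for _ in range(dim)))
             for _ in range(n)]
    mean = sum(dists) / n
    std = math.sqrt(sum((d - mean) ** 2 for d in dists) / n)
    return std / mean
```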

7.
Large observations and simulations in scientific research give rise to high-dimensional data sets that present many challenges and opportunities in data analysis and visualization. Researchers in application domains such as engineering, computational biology, climate study, imaging and motion capture are faced with the problem of how to discover compact representations of high-dimensional data while preserving their intrinsic structure. In many applications, the original data is projected onto low-dimensional space via dimensionality reduction techniques prior to modeling. One problem with this approach is that the projection step in the process can fail to preserve structure in the data that is only apparent in high dimensions. Conversely, such techniques may create structural illusions in the projection, implying structure not present in the original high-dimensional data. Our solution is to utilize topological techniques to recover important structures in high-dimensional data that contains non-trivial topology. Specifically, we are interested in high-dimensional branching structures. We construct local circle-valued coordinate functions to represent such features. Subsequently, we perform dimensionality reduction on the data while ensuring such structures are visually preserved. Additionally, we study the effects of global circular structures on visualizations. Our results reveal never-before-seen structures on real-world data sets from a variety of applications.

8.
Feature selection is an important preprocessing step in data mining, machine learning, and related fields, and has received wide attention in recent years. The high dimensionality of text data often hurts the efficiency of data mining tasks such as classification, so feature selection is commonly used as an integral part of text classification to reduce dimensionality. With the rapid development of classification techniques and the increasingly fine-grained division of categories, multi-class text classification poses new challenges for feature selection methods. Against the background of multi-class text classification, this paper describes the main challenges currently facing feature selection methods and presents the main categories of multi-class feature selection methods. Following the development of the field from the simple to the complex, it summarizes and reviews the application of current multi-class feature selection algorithms, and concludes with possible future research directions.

9.
Similarity search in high dimensional space is a nontrivial problem due to the so-called curse of dimensionality. Recent techniques such as Piecewise Aggregate Approximation (PAA), Segmented Means (SMEAN) and Mean-Standard deviation (MS) prove to be very effective in reducing data dimensionality by partitioning dimensions into subsets and extracting aggregate values from each dimension subset. These partition-based techniques have many advantages including very efficient multi-phased approximation while being simple to implement. They, however, are not adaptive to the different characteristics of data in diverse applications. We propose SubSpace Projection (SSP) as a unified framework for these partition-based techniques. SSP projects data onto subspaces and computes a fixed number of salient features with respect to a reference vector. A study of the relationships between query selectivity and the corresponding space partitioning schemes uncovers indicators that can be used to predict the performance of the partitioning configuration. Accordingly, we design a greedy algorithm to efficiently determine a good partitioning of the data dimensions. The results of our extensive experiments indicate that the proposed method consistently outperforms state-of-the-art techniques.
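Of the partition-based techniques SSP unifies, Piecewise Aggregate Approximation (PAA) is the simplest to sketch; the segment count and input values here are illustrative:

```python
# Sketch of Piecewise Aggregate Approximation (PAA): split the dimensions
# into equal-length segments and keep each segment's mean, reducing a
# high dimensional vector to one value per segment.
def paa(series, n_segments):
    """Return the per-segment means of `series` (length must divide evenly)."""
    seg_len = len(series) // n_segments
    return [sum(series[i * seg_len:(i + 1) * seg_len]) / seg_len
            for i in range(n_segments)]

reduced = paa([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], 2)
```

SMEAN and MS differ in which aggregates each subset contributes; SSP's contribution is choosing the partitioning adaptively rather than fixing it.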

10.
A Survey of Manifold Learning Algorithms
Manifold learning algorithms are a new class of dimensionality reduction tools whose goal is to discover the low dimensional manifold structure embedded in a high dimensional data space and to produce an effective low dimensional representation. Manifold learning has become a hot research topic in pattern recognition, machine learning, and data mining. This paper introduces the basic ideas of manifold learning and some recent research results together with algorithmic analysis, and raises and analyzes problems requiring further study.

11.
The temporal coherence principle is an attractive biologically inspired learning rule for extracting slowly varying features from quickly varying input data. In this paper we develop a new Nonlinear Neighborhood Preserving (NNP) technique that utilizes the temporal coherence principle to find an optimal low dimensional representation of the original high dimensional data. NNP is based on a nonlinear expansion of the original input data, such as polynomials of a given degree. It can be solved as an eigenvalue problem without gradient descent and is guaranteed to find the global optimum. NNP can be viewed as a nonlinear dimensionality reduction framework that handles both time series and data sets without an obvious temporal structure. According to these different situations, we introduce three NNP algorithms, named NNP-1, NNP-2, and NNP-3. The objective function of NNP-1 is equivalent to that of Slow Feature Analysis (SFA), and it works well for time series such as image sequences. NNP-2 artificially constructs time series consisting of neighboring points for data sets without a clear temporal structure, such as image data. NNP-3 is proposed for classification tasks; it minimizes the distances of neighboring points in the embedding space while ensuring that the remaining points are as far apart as possible. Furthermore, the kernel extension of NNP is also discussed. The proposed algorithms work very well on several image sequences and image data sets compared to other methods. We also perform classification on the MNIST handwritten digit database using the supervised NNP algorithms. The experimental results demonstrate that NNP is an effective technique for nonlinear dimensionality reduction tasks.

12.
In supervised dimensionality reduction, tensor representations of images have recently been employed to enhance classification of high dimensional data with small training sets. Previous approaches for handling tensor data have been formulated with tight restrictions on projection directions that, along with convergence issues and the assumption of Gaussian-distributed class data, limit their face-recognition performance. To overcome these problems, we propose a method of rank-one projections with adaptive margins (RPAM) that gives a provably convergent solution for tensor data over a more general class of projections, while accounting for margins between samples of different classes. In contrast to previous margin-based works which determine margin sample pairs within the original high dimensional feature space, RPAM aims instead to maximize the margins defined in the expected lower dimensional feature sub-space by progressive margin refinement after each rank-one projection. In addition to handling tensor data, vector-based variants of RPAM are presented for linear mappings and for nonlinear mappings using kernel tricks. Comprehensive experimental results demonstrate that RPAM brings significant improvement in face recognition over previous subspace learning techniques.

13.
Currently, high dimensional data processing confronts two main difficulties: inefficient similarity measures and high computational complexity in both time and memory. Common methods for dealing with these difficulties are based on dimensionality reduction and feature selection. In this paper, we present a different way to solve high dimensional data problems by combining the ideas of Random Forests and Anchor Graph semi-supervised learning. We randomly select a subset of features and use the Anchor Graph method to construct a graph. This process is repeated many times to obtain multiple graphs and can be run in parallel to ensure runtime efficiency. The multiple graphs then vote to determine the labels of the unlabeled data. We argue that the randomness can be viewed as a kind of regularization. We evaluate the proposed method on eight real-world data sets, comparing it with two traditional graph-based methods and one state-of-the-art Anchor Graph-based semi-supervised learning method to show its effectiveness. We also apply the proposed method to face recognition.
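The ensemble idea, random feature subsets voting on a label, can be sketched with plain 1-NN standing in for the Anchor Graph label propagation step (an assumption made purely to keep the sketch small):

```python
# Hypothetical sketch of random-subspace voting: classify a point several
# times, each time using only a random subset of features, then take the
# majority vote. The paper propagates labels through Anchor Graphs;
# 1-NN is a stand-in for that step here.
import random
from collections import Counter

def nearest_label(x, labelled, features):
    """1-NN label for x using only the given feature indices."""
    def d2(a, b):
        return sum((a[j] - b[j]) ** 2 for j in features)
    return min(labelled, key=lambda item: d2(x, item[0]))[1]

def random_subspace_vote(x, labelled, n_features, n_graphs=5, subset=2, seed=0):
    """Majority vote over n_graphs classifiers on random feature subsets."""
    rng = random.Random(seed)
    votes = [nearest_label(x, labelled, rng.sample(range(n_features), subset))
             for _ in range(n_graphs)]
    return Counter(votes).most_common(1)[0][0]

labelled = [((0.0, 0.0, 0.0), "a"), ((9.0, 9.0, 9.0), "b")]
label = random_subspace_vote((1.0, 0.0, 1.0), labelled, n_features=3)
```

Each vote sees a different 2-of-3 feature view, which is the regularizing randomness the abstract argues for.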

14.
To address the poor performance of current dimensionality reduction algorithms under certain sample distributions in high dimensional space, an adaptively weighted t-distributed stochastic neighbor embedding (t-SNE) algorithm is proposed. The algorithm normalizes the Euclidean distances between pairs of samples in the high dimensional space and analyzes them in groups by distance: near, moderately near, and far. When computing the similarity probabilities between sample points in the high dimensional space, each group is weighted adaptively, replacing absolute Euclidean distance with weighted relative distance so as to measure more faithfully how similar each group of samples is in the high dimensional space. Dimensionality reduction experiments on high dimensional brain-network state observation matrices show that adaptively weighted t-SNE produces better clustering visualizations than other dimensionality reduction algorithms; compared with traditional t-SNE, the clustering index DBI decreased by 28.39% on average and DI increased by 161.84% on average, while problems of dispersion, crossing, and scattered points were effectively eliminated.
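A minimal sketch of the banded, adaptive weighting step; the band thresholds and weights are illustrative assumptions, not the paper's tuned values:

```python
# Hypothetical sketch: normalize distances to [0, 1], then scale each by a
# band-dependent weight (near / moderately near / far), producing the
# weighted relative distances used in place of absolute Euclidean ones.
def adaptive_weights(dists, near=0.33, far=0.66, w=(0.8, 1.0, 1.2)):
    """Return band-weighted, normalized distances (bands are assumptions)."""
    lo, hi = min(dists), max(dists)
    normed = [(d - lo) / (hi - lo) if hi > lo else 0.0 for d in dists]
    def weight(d):
        return w[0] if d < near else w[1] if d < far else w[2]
    return [weight(d) * d for d in normed]

weighted = adaptive_weights([0.0, 5.0, 10.0])
```

In full t-SNE these weighted distances would then feed the Gaussian similarity probabilities in the high dimensional space.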

15.
Research on a Tumor Classification and Detection Algorithm Based on Principal Component Analysis
Tumor diagnosis based on gene expression profiles promises to become a fast and effective clinical diagnostic method, but because gene expression data are very high dimensional, have small sample sizes, and are noisy, extracting the informative genes related to tumors is challenging. After analyzing the methods currently used for tumor classification and detection, this paper proposes a hybrid feature extraction method combining gene feature scoring and principal component analysis. Experiments show that the method effectively extracts discriminative feature information and greatly reduces the dimensionality of the gene expression data while maintaining high tumor recognition accuracy, substantially improving classifier performance. Two tumor-related gene expression data sets were used to validate the hybrid feature extraction method; classification experiments with support vector machines show that the proposed hybrid method not only achieves high cross-validation recognition accuracy but also produces results that can be visualized. On the colon cancer tissue sample set, the cross-validation recognition accuracy reaches 95.16%; on the acute leukemia tissue sample set, it reaches 100%.
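The gene feature scoring step can be sketched with a signal-to-noise style score, an assumed stand-in since the paper's exact score is not given in the abstract:

```python
# Hypothetical sketch of per-gene feature scoring: genes whose expression
# separates the two tumor classes well (large mean gap, small spread)
# score high and are kept before the PCA step.
def gene_score(class_a, class_b):
    """Signal-to-noise style score |mu_a - mu_b| / (sd_a + sd_b)."""
    def mean(xs):
        return sum(xs) / len(xs)
    def sd(xs):
        m = mean(xs)
        return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return abs(mean(class_a) - mean(class_b)) / (sd(class_a) + sd(class_b))

# Expression of one gene across samples of two classes (toy values).
score = gene_score([1.0, 2.0, 3.0], [7.0, 8.0, 9.0])
```

In the hybrid pipeline, the top-scoring genes would then be projected with PCA before training the SVM classifier.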

16.
In this paper, we propose a novel method named Mixed Kernel CCA (MKCCA) to achieve easy yet accurate dimensionality reduction. MKCCA consists of two major steps. First, the high dimensional data space is mapped into a reproducing kernel Hilbert space (RKHS) with a mixture of kernels, i.e., a linear combination of a local kernel and a global kernel. Meanwhile, a uniform design for experiments with mixtures is introduced for model selection. Second, in the new RKHS, Kernel CCA is further improved by performing Principal Component Analysis (PCA) followed by CCA for effective dimensionality reduction. We prove that MKCCA can actually be decomposed into two separate components, i.e., PCA and CCA, which can be used to better remove noise and tackle the trivial-learning issue found in CCA and traditional Kernel CCA. The proposed MKCCA can then be applied to multiple types of learning with the reduced data, such as multi-view learning, supervised learning, semi-supervised learning, and transfer learning. Extensive experimental results show its superiority over existing methods across these types of learning.
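A minimal sketch of the kernel mixture, a convex combination of a global (linear) kernel and a local (RBF) kernel; the weight `alpha` and width `gamma` are illustrative parameters, not the paper's selected values:

```python
# Hypothetical sketch of a mixed kernel:
#   k(x, y) = alpha * <x, y> + (1 - alpha) * exp(-gamma * ||x - y||^2)
# combining a global (linear) kernel with a local (RBF) kernel.
import math

def mixed_kernel(x, y, alpha=0.5, gamma=1.0):
    """Convex combination of a linear and an RBF kernel."""
    linear = sum(a * b for a, b in zip(x, y))
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return alpha * linear + (1 - alpha) * math.exp(-gamma * sq)

k_same = mixed_kernel((1.0, 0.0), (1.0, 0.0))
```

MKCCA's uniform design for mixtures would search over `alpha` (and kernel parameters) rather than fixing them as here.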

17.
Processing high dimensional network connection data directly runs into the curse of dimensionality, so dimensionality reduction is needed. Nonnegative matrix factorization not only reduces the dimensionality of high dimensional data but also keeps all components of the factorized matrices nonnegative, which matches the semantics of network connection data. Applying it to intrusion detection, high dimensional data are projected onto a low dimensional visual space, network connection records are shown as scatter points, and the class of a record is judged from the position of its point, making intrusion detection visual. Experiments verify the effectiveness of this intrusion detection method.

18.
Graph-based methods for linear dimensionality reduction have recently attracted much attention and research effort. The main goal of these methods is to preserve the properties of a graph representing the affinity between data points in local neighborhoods of the high-dimensional space. It has been observed that, in general, supervised graph methods outperform their unsupervised peers in various classification tasks. Supervised graphs are typically constructed by allowing two nodes to be adjacent only if they are of the same class. However, such graphs are oblivious to the proximity of data from different classes. In this paper, we propose a novel methodology which builds on 'repulsion graphs', i.e., graphs that model undesirable proximity between points. The main idea is to repel points from different classes that are close by in the input high-dimensional space. The proposed methodology is generic and can be applied to any graph-based method for linear dimensionality reduction. We provide ample experimental evidence in the context of face recognition, which shows that the proposed methodology (i) offers significant performance improvement to various graph-based methods and (ii) outperforms existing solutions relying on repulsion forces.
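The two-graph construction can be sketched as follows, assuming a simple distance threshold for "close by" (the paper's actual neighborhood rule may differ):

```python
# Hypothetical sketch: an attraction graph joins same-class pairs, and a
# repulsion graph joins different-class pairs that fall within `radius`
# of each other in the input space.
def build_graphs(points, labels, radius=2.0):
    """Return (attraction_edges, repulsion_edges) as lists of index pairs."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    attract, repel = [], []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if labels[i] == labels[j]:
                attract.append((i, j))
            elif d2(points[i], points[j]) <= radius ** 2:
                repel.append((i, j))
    return attract, repel

attract, repel = build_graphs(
    [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0), (9.0, 9.0)],
    ["a", "a", "b", "b"])
```

A downstream projection would then pull attraction edges together and push repulsion edges apart.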

19.
Unsupervised image clustering partitions a whole image collection into subsets based on the image data alone, without any prior information. Because the intrinsic dimensionality of images is very high, image processing runs into the curse of dimensionality. Targeting the characteristics of unsupervised image clustering, a diffuse-interface unsupervised clustering algorithm for images is proposed: images are encoded as points in a high dimensional observation space and mapped by a projection into a low dimensional feature space, where a diffuse-interface unsupervised clustering model is built with a dimensionality reduction operator incorporated, and the energy function of the diffuse-interface model is optimized by an iterative algorithm. Based on the optimal diffuse interface, the image collection is clustered into subsets. Experimental results show that the diffuse-interface unsupervised clustering algorithm outperforms the traditional K-means, DBSCAN, and Spectral Clustering algorithms, achieves better unsupervised image clustering, and attains higher accuracy under the same conditions.

20.
Document clustering using locality preserving indexing
We propose a novel document clustering method which aims to cluster the documents into different semantic classes. The document space is generally of high dimensionality and clustering in such a high dimensional space is often infeasible due to the curse of dimensionality. By using locality preserving indexing (LPI), the documents can be projected into a lower-dimensional semantic space in which the documents related to the same semantics are close to each other. Different from previous document clustering methods based on latent semantic indexing (LSI) or nonnegative matrix factorization (NMF), our method tries to discover both the geometric and discriminating structures of the document space. Theoretical analysis of our method shows that LPI is an unsupervised approximation of the supervised linear discriminant analysis (LDA) method, which gives the intuitive motivation of our method. Extensive experimental evaluations are performed on the Reuters-21578 and TDT2 data sets.
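The semantic closeness that LPI aims to preserve is typically measured with cosine similarity over term vectors; this sketch shows only that measure, not the LPI eigenproblem itself:

```python
# Sketch of cosine similarity between term-frequency vectors; documents on
# the same topic score near 1. The LPI projection (an eigenproblem over
# the document affinity graph) is not reproduced here.
import math

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Two documents with proportional term counts: same direction, similarity ~1.
sim = cosine([2.0, 1.0, 0.0], [4.0, 2.0, 0.0])
```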


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)    京ICP备09084417号-23

京公网安备 11010802026262号