Similar Literature
Found 20 similar documents (search time: 15 ms)
1.
To address the lack of robustness in traditional support vector regression and the unsatisfactory sparsity of robust support vector regression, a new support vector regression method (robust twin support vector regression) is proposed. For ease of solution, its loss function is composed of two differentiable convex functions, and the CCCP technique is used to solve it. The method achieves good sparsity while effectively suppressing the influence of gross errors. Tests on both synthetic and real-world data verify the effectiveness of the new method.

2.
Support vector regression (SVR) is a powerful tool in modeling and prediction tasks with widespread application in many areas. The most representative algorithms to train SVR models are Shevade et al.'s Modification 2 and Lin's WSS1 and WSS2 methods in the LIBSVM library. Both are variants of standard SMO in which the updating pairs selected are those that most violate the Karush-Kuhn-Tucker optimality conditions, to which LIBSVM adds a heuristic to improve the decrease in the objective function. In this paper, and after presenting a simple derivation of the updating procedure based on a greedy maximization of the gain in the objective function, we show how cycle-breaking techniques that accelerate the convergence of support vector machines (SVM) in classification can also be applied under this framework, resulting in significantly improved training times for SVR.

3.
To solve the optimization problem of the generalized support vector machine (GSVM), the original problem with inequality constraints is transformed into an unconstrained optimization problem. Since the objective function of this unconstrained problem is non-smooth, a family of polynomial smoothing functions is introduced for approximation; in the experiments, different approximating functions can be chosen according to the required accuracy. The resulting problem is then solved with the BFGS algorithm. Experimental results show that, compared with existing GSVM solvers, the algorithm obtains higher test accuracy more quickly and is better suited to training on large-scale datasets. The proposed GSVM solver is therefore effective.
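The key device in this line of work is a smooth surrogate for the non-differentiable plus function max(0, x), which makes the unconstrained SVM objective amenable to gradient-based solvers. A minimal NumPy sketch under stated assumptions: the quadratic spline `smooth_plus` below is one illustrative member of a polynomial smoothing family (not necessarily the paper's), and plain gradient descent stands in for the BFGS solver.

```python
import numpy as np

def smooth_plus(x, k=100.0):
    """C^1 quadratic smoothing of max(0, x); approaches it as k grows."""
    x = np.asarray(x, dtype=float)
    out = np.where(x >= 1.0 / (2 * k), x, 0.0)
    band = np.abs(x) < 1.0 / (2 * k)
    return np.where(band, (k / 2.0) * (x + 1.0 / (2 * k)) ** 2, out)

def train_smooth_svm(X, y, C=1.0, k=100.0, lr=0.001, iters=5000):
    """Minimize 0.5||w||^2 + C * sum p(1 - y f(x))^2 by gradient descent
    (a stand-in for BFGS on the smoothed unconstrained objective)."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(iters):
        m = 1.0 - y * (X @ w + b)                       # margins
        # derivative p'(m) of the quadratic smoothing
        dp = np.where(m >= 1 / (2 * k), 1.0,
                      np.where(np.abs(m) < 1 / (2 * k),
                               k * (m + 1 / (2 * k)), 0.0))
        coef = 2.0 * C * smooth_plus(m, k) * dp          # d/dm of C * p(m)^2
        gw = w - X.T @ (coef * y)                        # chain rule: dm/dw = -y x
        gb = -np.sum(coef * y)
        w -= lr * gw
        b -= lr * gb
    return w, b
```

For production use, `scipy.optimize.minimize(method="BFGS")` on the same smoothed objective would match the paper's solver more closely.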

4.
This paper introduces a cylindricity evaluation algorithm based on support vector machine learning with a specific kernel function, referred to as SVR, as a viable alternative to the traditional least squares method (LSQ) and non-linear programming algorithm (NLP). Using the theory of support vector machine regression, the proposed algorithm provides more robust evaluation in terms of CPU time and accuracy than NLP, and this is supported by computational experiments. Interestingly, it has been shown that the SVR significantly outperforms LSQ in terms of accuracy, while it can evaluate the cylindricity in a more robust fashion than NLP when the variance of the data points increases. The robust nature of the proposed algorithm is expected because it converts the original nonlinear problem with nonlinear constraints into another nonlinear problem with linear constraints. In addition, the proposed algorithm is programmed using the Java Runtime Environment to provide users with a Web-based open-source environment. In a real-world setting, this would provide manufacturers with an algorithm that can be trusted to give the correct answer, rather than having a good part rejected because of inaccurate computational results.

5.
Given a dataset, where each point is labeled with one of M labels, we propose a technique for multi-category proximal support vector classification via generalized eigenvalues (MGEPSVMs). Unlike Support Vector Machines that classify points by assigning them to one of M disjoint half-spaces, here points are classified by assigning them to the closest of M non-parallel planes that are close to their respective classes. When the data contains samples belonging to several classes, classes often overlap, and classifiers that solve for several non-parallel planes may often be able to better resolve test samples. In multicategory classification tasks, a training point may have similarities with prototypes of more than one class. This information can be used in a fuzzy setting. We propose a fuzzy multi-category classifier that utilizes information about the membership of training samples to improve the generalization ability of the classifier. The desired classifier is obtained by using one-from-rest (OFR) separation for each class, i.e. 1 : (M-1) classification. Experimental results demonstrate the efficacy of the proposed classifier over MGEPSVMs.
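In the two-class case, each such proximal plane w·x + b = 0 minimizes the Rayleigh quotient ‖Aw + eb‖² / ‖Bw + eb‖², which reduces to a smallest generalized eigenvalue problem. A NumPy-only sketch under stated assumptions: the Tikhonov term δI and the crossed-lines toy data are illustrative, and the paper's fuzzy membership weights are omitted.

```python
import numpy as np

def gepsvm_plane(A, B, delta=1e-4):
    """Plane closest to the rows of A and farthest from the rows of B:
    eigenvector of inv(H) @ G for the smallest eigenvalue, where G and H
    are the (regularized) augmented scatter matrices of A and B."""
    EA = np.hstack([A, np.ones((len(A), 1))])
    EB = np.hstack([B, np.ones((len(B), 1))])
    G = EA.T @ EA + delta * np.eye(EA.shape[1])
    H = EB.T @ EB + delta * np.eye(EB.shape[1])
    vals, vecs = np.linalg.eig(np.linalg.solve(H, G))
    z = np.real(vecs[:, np.argmin(np.real(vals))])
    return z[:-1], z[-1]                      # w, b

def classify(x, planes):
    """Assign x to the class whose plane is nearest."""
    d = [abs(x @ w + b) / np.linalg.norm(w) for w, b in planes]
    return int(np.argmin(d))
```

On "crossed lines" data, where each class hugs its own line through the origin, plane-based classifiers of this kind succeed where a single separating hyperplane cannot.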

6.
A parallel randomized support vector machine (PRSVM) and a parallel randomized support vector regression (PRSVR) algorithm based on a randomized sampling technique are proposed in this paper. The proposed PRSVM and PRSVR have four major advantages over previous methods. (1) We prove that the proposed algorithms achieve an average convergence rate that is, to the best of our knowledge, the fastest bounded convergence rate among all SVM decomposition training algorithms. The fast average convergence bound is achieved by a unique priority-based sampling mechanism. (2) Unlike previous work (Provably fast training algorithm for support vector machines, 2001), the proposed algorithms work for general linearly non-separable SVM and general non-linear SVR problems. This improvement is achieved by modeling new LP-type problems based on Karush–Kuhn–Tucker optimality conditions. (3) The proposed algorithms are the first parallel version of randomized sampling algorithms for SVM and SVR. Both the analytical convergence bound and the numerical results in a real application show that the proposed algorithm has good scalability. (4) We present demonstrations of the algorithms based on both synthetic data and data obtained from a real-world application. Performance comparisons with SVMlight show that the proposed algorithms may be efficiently implemented.

7.
A novel robust support vector machine based on a smooth Ramp loss function is proposed, which effectively suppresses the influence of outliers on generalization performance; CCCP is used to convert its non-convex objective into a continuous, twice-differentiable convex optimization problem. On this basis, a Newton-type algorithm for training the robust SVM is given and its convergence properties are analyzed. Experimental results show that the proposed robust SVM is insensitive to outliers and achieves better generalization than the traditional SVMlight and Newton-Primal algorithms on a variety of datasets.

8.
This paper presents a four-step training method for increasing the efficiency of support vector machines (SVM). First, an SVM is initially trained on all the training samples, thereby producing a number of support vectors. Second, the support vectors which make the hypersurface highly convoluted are excluded from the training set. Third, the SVM is re-trained only on the remaining samples in the training set. Finally, the complexity of the trained SVM is further reduced by approximating the separation hypersurface with a subset of the support vectors. Compared to the SVM initially trained on all samples, the efficiency of the finally trained SVM is highly improved, without degradation in accuracy.

9.
Multi-output support vector regression with a multi-piecewise loss function (cited 1 time)
For regression on data with multi-dimensional inputs and multi-dimensional outputs, a multi-output support vector regression algorithm can be used. This paper presents multi-output support vector regression with a multi-piecewise loss function, which applies different penalty forms to error values falling in different intervals; using a variable-weight iterative algorithm, iterative formulas for the weight coefficients and bias of the regression function are given. Simulation experiments show that the algorithm outperforms the use of multiple single-output support vector regressors in both accuracy and computational workload.

10.
We propose the reduced twin support vector regressor (RTSVR) that uses the notion of rectangular kernels to obtain significant improvements in execution time over the twin support vector regressor (TSVR), thus facilitating its application to larger sized datasets.

11.
Texture classification using the support vector machines (cited 12 times)
Shutao, James T., Hailong, Yaonan. Pattern Recognition, 2003, 36(12): 2883-2893
In recent years, support vector machines (SVMs) have demonstrated excellent performance in a variety of pattern recognition problems. In this paper, we apply SVMs for texture classification, using translation-invariant features generated from the discrete wavelet frame transform. To alleviate the problem of selecting the right kernel parameter in the SVM, we use a fusion scheme based on multiple SVMs, each with a different setting of the kernel parameter. Compared to the traditional Bayes classifier and the learning vector quantization algorithm, SVMs, and, in particular, the fused output from multiple SVMs, produce more accurate classification results on the Brodatz texture album.
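The fusion idea can be sketched without committing to a particular SVM solver. In the NumPy sketch below, kernel regularized least-squares classifiers act as lightweight stand-ins for the paper's SVMs, each member uses a different RBF width gamma, and fusion averages the members' decision values; the gamma grid and regularizer lam are illustrative assumptions.

```python
import numpy as np

def rbf(X, Z, gamma):
    """RBF kernel matrix between the rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_member(X, y, gamma, lam=1e-3):
    """One ensemble member: kernel regularized least squares
    (a stand-in for an SVM with this kernel parameter)."""
    K = rbf(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return alpha, gamma

def fused_decision(Xq, X, members):
    """Average the decision values of all members, sidestepping
    the choice of a single kernel parameter."""
    Xq = np.atleast_2d(Xq)
    scores = [rbf(Xq, X, g) @ a for a, g in members]
    return np.mean(scores, axis=0)
```

Averaging decision values (rather than hard votes) keeps the fused output smooth and lets confident members dominate near their regions of competence.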

12.
In cancer classification based on gene expression data, it would be desirable to defer a decision for observations that are difficult to classify. For instance, an observation for which the conditional probability of being cancer is around 1/2 would preferably require more advanced tests rather than an immediate decision. This motivates the use of a classifier with a reject option that reports a warning in cases of observations that are difficult to classify. In this paper, we consider a problem of gene selection with a reject option. Typically, gene expression data comprise expression levels of several thousand candidate genes. In such cases, an effective gene selection procedure is necessary to provide a better understanding of the underlying biological system that generates data and to improve prediction performance. We propose a machine learning approach in which we apply the l1 penalty to the SVM with a reject option. This method is referred to as the l1 SVM with a reject option. We develop a novel optimization algorithm for this SVM, which is sufficiently fast and stable to analyze gene expression data. The proposed algorithm computes the entire solution path with respect to the regularization parameter. Results of numerical studies show that, in comparison with the standard l1 SVM, the proposed method efficiently reduces prediction errors without hampering gene selectivity.
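The reject rule itself is simple to state: defer whenever the estimated conditional probability falls in an ambiguous band around 1/2. A minimal sketch (the band width 0.15 is an illustrative assumption, and `p_cancer` is assumed to come from any probability-calibrated classifier; the l1-penalized SVM and its solution-path algorithm are not shown):

```python
def classify_with_reject(p_cancer, band=0.15):
    """Return 1 (cancer), 0 (normal), or 'reject' when the estimated
    conditional probability lies in the ambiguous band around 1/2."""
    if abs(p_cancer - 0.5) < band:
        return "reject"
    return int(p_cancer > 0.5)
```

Widening `band` trades coverage for reliability: more observations are deferred to advanced tests, but the decisions actually issued are more trustworthy.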

13.
To address the sensitivity of least squares support vector regression (LS-SVR) to outliers, a non-convex Ramp loss function is proposed that bounds from above the loss incurred by outliers. Since this loss makes the corresponding optimization problem non-convex, the concave-convex procedure (CCCP) is used to convert it into a convex optimization problem. A Newton algorithm is given for its solution and the computational complexity of the algorithm is analyzed. Test results on benchmark datasets show that, compared with LS-SVR, the algorithm is markedly more robust to outliers, achieves better generalization, and also has a clear advantage in running time.
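For a capped (Ramp-type) squared loss, each CCCP step amounts to refitting while zero-weighting the points whose current residual exceeds the cap. A simplified NumPy sketch of that idea: plain linear least squares stands in for LS-SVR, and the reweighting loop stands in for the paper's Newton solver; the cap value is an illustrative assumption.

```python
import numpy as np

def robust_fit(x, y, cap=5.0, iters=10):
    """Line fit under a capped squared loss via CCCP-style reweighting:
    points whose residual exceeds the cap contribute nothing to the refit."""
    A = np.column_stack([x, np.ones_like(x)])
    keep = np.ones(len(x), dtype=bool)
    for _ in range(iters):
        coef, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
        resid = y - A @ coef
        new_keep = np.abs(resid) <= cap      # cap bounds each point's loss
        if np.array_equal(new_keep, keep):
            break                            # CCCP iterations have converged
        keep = new_keep
    return coef                              # (slope, intercept)
```

On data with a single gross outlier, this recovers the underlying line while an ordinary least squares fit is pulled far off it.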

14.
Most existing online algorithms for support vector machines (SVM) can only grow the set of support vectors. This paper proposes an online error-tolerance-based support vector machine (ET-SVM) which not only grows but also prunes support vectors. Like least squares support vector machines (LS-SVM), ET-SVM converts the original quadratic program (QP) of the standard SVM into a group of easily solved linear equations. Unlike LS-SVM, ET-SVM keeps the support vectors sparse and realizes a compact structure. Thus, ET-SVM can significantly reduce computational time while ensuring satisfactory learning accuracy. Simulation results verify the effectiveness of the newly proposed algorithm.
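The LS-SVM reformulation referred to here replaces the QP by a single linear system. A NumPy sketch for binary classification (the RBF kernel, its parameters, and the toy data are illustrative assumptions; ET-SVM's online growing/pruning mechanism is not shown):

```python
import numpy as np

def lssvm_train(X, y, gamma=1.0, C=10.0):
    """Solve the LS-SVM linear system
    [[0, y^T], [y, Omega + I/C]] [b; alpha] = [0; 1],
    with Omega_ij = y_i y_j K(x_i, x_j)."""
    d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    Omega = np.outer(y, y) * K
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / C
    rhs = np.concatenate([[0.0], np.ones(n)])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                    # bias b, coefficients alpha

def lssvm_predict(Xq, X, y, b, alpha, gamma=1.0):
    d2 = ((Xq[:, None] - X[None, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    return np.sign(K @ (alpha * y) + b)
```

Note the trade-off the abstract points at: because every training point ends up with a nonzero alpha, plain LS-SVM loses the sparsity of standard SVM, which is exactly what pruning schemes like ET-SVM aim to restore.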

15.
SVM (support vector machine) techniques have recently arrived to complement the wide range of classification methods for complex systems. These classification systems offer performance similar to other classifiers (such as neural networks or classic statistical classifiers), and they are becoming a valuable tool in industry for the resolution of real problems. One of the fundamental elements of this type of classifier is the metric used for determining the distance between samples of the population to be classified. Although the Euclidean distance is the most natural metric for solving problems, it presents certain disadvantages when trying to develop classification systems that can adapt as the characteristics of the sample space change. Our study proposes a means of avoiding this problem using multivariate normalization of the inputs (during both the training and classification processes). Using experimental results produced from a significant number of populations, the study confirms the improvement achieved in the classification processes. Lastly, the study demonstrates that multivariate normalization applied to a real SVM is equivalent to the use of an SVM that uses the Mahalanobis distance measure on non-normalized data.
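The claimed equivalence is easy to check numerically: the Euclidean distance between multivariate-normalized (whitened) inputs equals the Mahalanobis distance between the raw inputs under the sample covariance. A small NumPy verification (the random data and mixing matrix are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated 3-D data: white noise pushed through an arbitrary mixing matrix.
X = rng.normal(size=(200, 3)) @ np.array([[2., 1., 0.],
                                          [0., 1., .5],
                                          [0., 0., 3.]])

S = np.cov(X, rowvar=False)                  # sample covariance
L = np.linalg.cholesky(np.linalg.inv(S))     # whitening factor: L @ L.T = S^-1
Z = X @ L                                    # multivariate-normalized inputs

def mahalanobis(u, v, S_inv):
    d = u - v
    return np.sqrt(d @ S_inv @ d)

d_white = np.linalg.norm(Z[0] - Z[1])        # Euclidean after normalization
d_mahal = mahalanobis(X[0], X[1], np.linalg.inv(S))
```

Algebraically, ||Zu - Zv||² = (u - v)ᵀ L Lᵀ (u - v) = (u - v)ᵀ S⁻¹ (u - v), so a kernel built on the whitened inputs is a Mahalanobis-metric kernel on the raw ones.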

16.
In this paper, we design a fuzzy rule-based support vector regression system. The proposed system utilizes the advantages of the fuzzy model and support vector regression to extract support vectors and generate fuzzy if-then rules from the training data set. Based on the first-order linear Takagi-Sugeno (TS) model, the structure of the rules is identified by support vector regression, and the consequent parameters of the rules are then tuned by the global least squares method. Our model is applied to a real-world regression task. The simulation results give promising performance in terms of a set of fuzzy rules that can be easily interpreted by humans.

18.
In this paper we formulate a least squares version of the recently proposed twin support vector machine (TSVM) for binary classification. This formulation leads to an extremely simple and fast algorithm for generating binary classifiers based on two non-parallel hyperplanes. Here we solve two modified primal problems of TSVM, instead of the two dual problems usually solved. We show that the solution of the two modified primal problems reduces to solving just two systems of linear equations, as opposed to solving two quadratic programming problems along with two systems of linear equations in TSVM. Classification using a nonlinear kernel also leads to systems of linear equations. Our experiments on publicly available datasets indicate that the proposed least squares TSVM has classification accuracy comparable to that of TSVM but with considerably less computational time. Since the linear least squares TSVM can easily handle large datasets, we further investigate its efficiency for text categorization applications. Computational results demonstrate the effectiveness of the proposed method over the linear proximal SVM on all the text corpora considered.
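The "two systems of linear equations" structure can be sketched directly. Below, each plane solves min ½||Eu||² + (c/2)||Fu ± e||² with E = [A 1] and F = [B 1], whose normal equations are a single linear solve per plane; a point is then assigned to the class of its nearer plane. A hedged NumPy sketch (the regularization constants and toy data are illustrative, and the kernelized variant is omitted):

```python
import numpy as np

def lstsvm_train(A, B, c1=1.0, c2=1.0):
    """Two non-parallel planes from two linear systems (no QP).
    Plane 1 hugs class A and pushes class B to distance 1; plane 2 vice versa."""
    E = np.hstack([A, np.ones((len(A), 1))])
    F = np.hstack([B, np.ones((len(B), 1))])
    # Normal equations of min 0.5||E u||^2 + (c1/2)||F u + e||^2:
    u1 = -c1 * np.linalg.solve(E.T @ E + c1 * F.T @ F, F.T @ np.ones(len(B)))
    # Normal equations of min 0.5||F u||^2 + (c2/2)||E u - e||^2:
    u2 = c2 * np.linalg.solve(F.T @ F + c2 * E.T @ E, E.T @ np.ones(len(A)))
    return (u1[:-1], u1[-1]), (u2[:-1], u2[-1])   # (w1, b1), (w2, b2)

def lstsvm_predict(X, plane1, plane2):
    """Label +1 if the first plane is nearer, else -1."""
    d1 = np.abs(X @ plane1[0] + plane1[1]) / np.linalg.norm(plane1[0])
    d2 = np.abs(X @ plane2[0] + plane2[1]) / np.linalg.norm(plane2[0])
    return np.where(d1 <= d2, 1, -1)
```

Each training call is just two dense solves, which is why this formulation scales to datasets where solving two QPs would be costly.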

19.
We present a two-step method to speed up object detection systems in computer vision that use support vector machines as classifiers. In the first step we build a hierarchy of classifiers. On the bottom level, a simple and fast linear classifier analyzes the whole image and rejects large parts of the background. On the top level, a slower but more accurate classifier performs the final detection. We propose a new method for automatically building and training a hierarchy of classifiers. In the second step we apply feature reduction to the top-level classifier by choosing relevant image features according to a measure derived from statistical learning theory. Experiments with a face detection system show that combining feature reduction with hierarchical classification leads to a speed-up by a factor of 335 with similar classification performance.
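The two-level idea can be sketched independently of any particular detector: a cheap linear score rejects most candidate windows, and only the survivors reach the expensive classifier. A toy NumPy sketch under stated assumptions: both stages, the threshold, and the prototype-distance "accurate" stage below are illustrative stand-ins, not the paper's trained classifiers.

```python
import numpy as np

def cascade_detect(windows, w_fast, t_fast, prototype, radius):
    """Stage 1: a linear score over all windows rejects most background.
    Stage 2: an expensive check (here, distance to a face prototype)
    runs only on the survivors; we count how often it is invoked."""
    scores = windows @ w_fast                  # one cheap pass over everything
    survivors = np.flatnonzero(scores > t_fast)
    hits, slow_evals = [], 0
    for i in survivors:
        slow_evals += 1                        # each survivor costs one slow eval
        if np.linalg.norm(windows[i] - prototype) < radius:
            hits.append(i)
    return hits, slow_evals
```

The speed-up comes entirely from `slow_evals` being a small fraction of the window count, which is the same effect the paper measures at much larger scale.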

20.
A robust convex optimization approach is proposed for support vector regression (SVR) with noisy input data. The data points are assumed to be uncertain, but bounded within given hyper-spheres of radius η. The proposed robust SVR model is equivalent to a second-order cone programming (SOCP) problem. An SOCP formulation under a Gaussian noise model is also discussed. Computational results are presented on both real-world and synthetic data sets. The robust SOCP approach is compared with several other regression algorithms, such as SVR, least squares SVR, and artificial neural networks, by injecting Gaussian noise into each of the data points. The proposed approach outperforms the other regression algorithms on some data sets. Moreover, the generalization behavior of the SOCP method is better than that of traditional SVR as the uncertainty level η increases, up to a threshold value.
