Similar Documents
20 similar documents found (search time: 23 ms)
1.
A novel fuzzy compensation multi-class support vector machine   (Cited by: 6; self-citations: 0; citations by others: 6)
This paper presents a novel fuzzy compensation multi-class support vector machine (FCM-SVM) to address the outlier and noise sensitivity of the traditional support vector machine (SVM) in multi-class data classification. The basic idea is to give the penalty term a dual effect by treating every data point as belonging to both the positive and the negative class, but with different memberships. We fuzzify the penalty term, compensate the classification weights, reconstruct the optimization problem and its constraints, rebuild the Lagrangian formulation, and present the theoretical derivation. In this way the new fuzzy compensation multi-class support vector machine is expected to generalize better while preserving its insensitivity to outliers. Experimental results on benchmark and real data sets show that the proposed method reduces the effect of noisy data and yields a higher classification rate than the traditional multi-class SVM.
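The core idea of fuzzifying the penalty term can be illustrated independently of the paper's full dual formulation. The sketch below (not the authors' FCM-SVM; the membership values and the constant C are illustrative assumptions) shows a soft-margin primal objective in which each training point carries a membership weight that scales its slack penalty, so a suspected outlier contributes less to the loss:

```python
# Sketch of a fuzzified penalty term: each point i has a membership m_i
# that scales its hinge-loss contribution. Names are illustrative, not
# from the paper.

def hinge(z):
    return max(0.0, 1.0 - z)

def fuzzy_primal_objective(w, b, X, y, memberships, C=1.0):
    """0.5*||w||^2 + C * sum_i m_i * hinge(y_i * (w.x_i + b))."""
    margin_term = 0.5 * sum(wj * wj for wj in w)
    penalty = C * sum(
        m * hinge(yi * (sum(wj * xj for wj, xj in zip(w, x)) + b))
        for x, yi, m in zip(X, y, memberships))
    return margin_term + penalty

X = [[2.0, 0.0], [-2.0, 0.0], [0.2, 0.0]]   # last point: suspected outlier
y = [1, -1, -1]
# Low membership on the outlier means its violation is penalized less.
obj_low  = fuzzy_primal_objective([1.0, 0.0], 0.0, X, y, [1.0, 1.0, 0.1])
obj_high = fuzzy_primal_objective([1.0, 0.0], 0.0, X, y, [1.0, 1.0, 1.0])
assert obj_low < obj_high
```

With full memberships the outlier's violation dominates the objective; down-weighting it lets the optimizer keep the wide-margin hyperplane.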

2.
Support vector machines (SVMs), initially proposed for two-class classification problems, have been very successful in pattern recognition. For multi-class classification, standard hyperplane-based SVMs are built by constructing and combining several maximal-margin hyperplanes, with each class of data confined to a region bounded by those hyperplanes. Instead of hyperplanes, hyperspheres that tightly enclose the data of each class can be used. Since a class-specific hypersphere is constructed for each class separately, spherical-structured SVMs handle the multi-class classification problem naturally. In addition, the center and radius of each class-specific hypersphere characterize the distribution of examples from that class and may be useful for dealing with class-imbalance problems. In this paper, we incorporate the concept of maximal margin into spherical-structured SVMs. Moreover, the proposed approach introduces a new parameter for controlling the number of support vectors. Experimental results show that the proposed method performs well on both artificial and benchmark datasets.
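The spherical decision rule can be sketched with a deliberately crude stand-in for the trained spheres: here each class sphere is just the class centroid plus its tightest covering radius (the paper's maximal-margin optimization is not reproduced), and a point is assigned to the sphere whose surface it is deepest inside or closest to:

```python
# Class-specific hyperspheres, crude version: centroid + covering radius.
# Classification is by signed distance to each sphere's surface.

def centroid(points):
    n, d = len(points), len(points[0])
    return [sum(p[i] for p in points) / n for i in range(d)]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def fit_spheres(data_by_class):
    spheres = {}
    for label, pts in data_by_class.items():
        c = centroid(pts)
        r = max(dist(c, p) for p in pts)   # tightest covering radius
        spheres[label] = (c, r)
    return spheres

def classify(spheres, x):
    # signed distance to the sphere surface; negative means inside
    return min(spheres, key=lambda L: dist(spheres[L][0], x) - spheres[L][1])

spheres = fit_spheres({
    "a": [[0, 0], [1, 0], [0, 1]],
    "b": [[5, 5], [6, 5], [5, 6]],
})
assert classify(spheres, [0.2, 0.2]) == "a"
assert classify(spheres, [5.5, 5.5]) == "b"
```

Because each sphere is fit from one class alone, adding a new class only requires fitting one more sphere, which is the multi-class convenience the abstract points to.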

3.
This paper extends previous work on the smooth support vector machine (SSVM) from binary to k-class classification based on a single-machine approach, calling the result the multi-class smooth SVM (MSSVM). The study implements MSSVM for the ternary classification problem and labels it TSSVM. For k > 3, it proposes a one-vs.-one-vs.-rest (OOR) scheme that decomposes the problem into k(k−1)/2 ternary classification subproblems, based on the assumption of ternary voting games; the k-class problem can then be solved via a series of TSSVMs. Numerical experiments compare the classification accuracy of the TSSVM/OOR, one-vs.-one, and one-vs.-rest schemes on nine UCI datasets. Results show that TSSVM/OOR outperforms one-vs.-one and one-vs.-rest on all datasets. Further error analyses show that the prediction confidence of OOR is significantly higher than that of the one-vs.-one scheme. Owing to the nature of the OOR design, it can detect a hidden (unknown) class directly. A "leave-one-class-out" experiment on the pendigits dataset demonstrates this detection ability: OOR achieves a significantly better hidden-class detection rate than one-vs.-one and one-vs.-rest.
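The combinatorics of the OOR decomposition, and how an all-"rest" outcome can flag a hidden class, can be sketched without any actual TSSVM training (the voting rule below is an assumed simple majority, not necessarily the paper's exact aggregation):

```python
# OOR decomposition: each unordered class pair (i, j) defines one ternary
# subproblem with labels i, j, and "rest", so k classes need k*(k-1)//2
# ternary machines. Voting ignores "rest"; an all-"rest" outcome signals
# a hidden (unknown) class.

from itertools import combinations

def oor_subproblems(classes):
    return [(i, j, "rest") for i, j in combinations(classes, 2)]

def oor_vote(subproblem_predictions):
    counts = {}
    for p in subproblem_predictions:
        if p != "rest":
            counts[p] = counts.get(p, 0) + 1
    return max(counts, key=counts.get) if counts else "hidden"

classes = ["A", "B", "C", "D"]
subs = oor_subproblems(classes)
assert len(subs) == len(classes) * (len(classes) - 1) // 2   # 6 subproblems
assert oor_vote(["A", "A", "rest", "B", "rest", "A"]) == "A"
assert oor_vote(["rest"] * 6) == "hidden"   # hidden-class detection
```

The last assertion is the structural reason OOR can detect an unknown class directly: plain one-vs.-one has no "rest" option, so every test point is forced into some known class.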

4.
In this paper, we propose a support vector machine with automatic confidence (SVMAC) for pattern classification. The contributions of this work are twofold. First, we develop an algorithm for calculating the label confidence value of each training sample, so that the label confidence values of all training samples can be considered when training support vector machines. Second, we propose a method for incorporating the label confidence value of each training sample into learning, and derive the corresponding quadratic programming problems. To demonstrate the effectiveness of the proposed SVMAC, a series of experiments is performed on three benchmark pattern classification problems and a challenging gender classification problem. Experimental results show that the generalization performance of SVMAC is superior to that of traditional SVMs.

5.
The support vector machine (SVM) has high generalisation ability for binary classification problems, but its extension to multi-class problems is still an ongoing research issue. Among the existing multi-class SVM methods, the one-against-one method is one of the most suitable for practical use. This paper presents a new multi-class SVM method that reduces the number of hyperplanes of the one-against-one method and thus returns fewer support vectors. The proposed algorithm works as follows: while producing the boundary of a class, no further hyperplanes are constructed if the discriminating hyperplanes of neighbouring classes already separate the remaining classes. A large number of experiments show that the training time of the proposed method is the lowest among existing multi-class SVM methods. The experimental results also show that its testing time is lower than that of the one-against-one method because of the reduction in hyperplanes and support vectors. By reducing the number of hyperplanes, the proposed method resolves unclassifiable regions and alleviates over-fitting far better than the one-against-one method. We also present a directed acyclic graph SVM (DAGSVM) based testing methodology that improves the testing time of the DAGSVM method.
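The DAG-based testing idea the abstract builds on can be sketched with toy decision stumps standing in for trained pairwise SVMs (the stumps and thresholds below are invented for illustration): starting from the full class list, each pairwise decision eliminates one class, so k classes need only k−1 evaluations per test point instead of the full k(k−1)/2 votes:

```python
# DAGSVM-style evaluation over one-against-one classifiers: each pairwise
# decision removes the losing class until one class remains.

def dag_predict(pairwise, classes, x):
    remaining = list(classes)
    while len(remaining) > 1:
        i, j = remaining[0], remaining[-1]
        winner = pairwise[(i, j)](x)          # returns i or j
        remaining.remove(j if winner == i else i)
    return remaining[0]

# Toy pairwise rules on a scalar input, with class regions x<1, 1<=x<2, x>=2.
pairwise = {
    ("a", "c"): lambda x: "a" if x < 1.5 else "c",
    ("a", "b"): lambda x: "a" if x < 1.0 else "b",
    ("b", "c"): lambda x: "b" if x < 2.0 else "c",
}
assert dag_predict(pairwise, ["a", "b", "c"], 0.5) == "a"
assert dag_predict(pairwise, ["a", "b", "c"], 1.5) == "b"
assert dag_predict(pairwise, ["a", "b", "c"], 2.5) == "c"
```

For k = 3 this evaluates only 2 of the 3 pairwise classifiers per test point, which is the source of the testing-time saving the abstract reports.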

6.
Predicting corporate credit ratings using statistical and artificial intelligence (AI) techniques has received considerable research attention in the literature. In recent years, multi-class support vector machines (MSVMs) have become a very appealing machine-learning approach due to their good performance. Researchers have proposed a variety of techniques for adapting support vector machines (SVMs), originally devised for binary classification, to multi-class classification. However, most of these focus on classifying samples into nominal categories, so the unique characteristic of credit ratings, their ordinality, has seldom been considered. This study proposes a new type of MSVM classifier (named OMSVM) that extends binary SVMs by applying an ordinal pairwise partitioning (OPP) strategy. The model can efficiently and effectively handle multiple ordinal classes. To validate OMSVM, we applied it to a real-world bond-rating case and compared the results with those of conventional MSVM approaches and other AI techniques, including MDA, MLOGIT, CBR, and ANNs. The results show that the proposed model improves classification performance over typical multi-class classification techniques while using fewer computational resources.
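The OPP strategy itself is the paper's contribution; the sketch below only illustrates the general spirit of ordered binary decomposition (a Frank-Hall-style threshold scheme, which is an assumption, not the authors' exact partitioning): one binary model per threshold "rating exceeds level t", with the ordinal class read off the consecutive outputs:

```python
# Ordered binary decomposition for ordinal classes: threshold_models[t](x)
# answers "does x exceed rating level t?". Ratings and thresholds are
# invented for illustration.

def predict_ordinal(threshold_models, x, classes):
    rank = 0
    for model in threshold_models:
        if model(x):
            rank += 1
        else:
            break            # ordinality: once a threshold fails, stop
    return classes[rank]

classes = ["C", "B", "A", "AAA"]                 # ordered ratings
models = [lambda x: x > 10, lambda x: x > 20, lambda x: x > 30]
assert predict_ordinal(models, 5, classes) == "C"
assert predict_ordinal(models, 25, classes) == "A"
assert predict_ordinal(models, 99, classes) == "AAA"
```

Exploiting the ordering this way needs only k−1 binary machines for k rating classes, compared with k(k−1)/2 for nominal one-against-one schemes, which is consistent with the computational saving the abstract claims.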

7.
Surface electromyogram (s-EMG) signal recognition using neural networks identifies the relations between s-EMG patterns, but it is not sufficiently satisfying for the user because s-EMG signals change with muscle wasting, shifts in electrode position, and so on. The support vector machine (SVM) is one of the most powerful tools for solving classification problems, but it lacks an online learning technique. In this article, we propose an online learning method using an SVM with a pairwise coupling technique for s-EMG recognition, and compare its performance with the original SVM and a neural network. Simulation results show that the proposed method outperforms the original SVM. This work was presented in part at the 13th International Symposium on Artificial Life and Robotics, Oita, Japan, January 31–February 2, 2008.
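Pairwise coupling combines the outputs of binary classifiers for each class pair into a single multi-class score. The sketch below uses a simple voting-style combination (the iterative Hastie-Tibshirani coupling usually meant in the literature is more involved, and the probabilities here are invented):

```python
# Simple pairwise coupling: pairwise_probs[(i, j)] estimates P(class i | i or j).
# Each pair distributes one unit of evidence between its two classes; scores
# are then normalized into class probabilities.

def couple(pairwise_probs, classes):
    score = {c: 0.0 for c in classes}
    for (i, j), p in pairwise_probs.items():
        score[i] += p
        score[j] += 1.0 - p
    total = sum(score.values())
    return {c: s / total for c, s in score.items()}

probs = couple({("a", "b"): 0.9, ("a", "c"): 0.8, ("b", "c"): 0.4},
               ["a", "b", "c"])
assert max(probs, key=probs.get) == "a"
assert abs(sum(probs.values()) - 1.0) < 1e-9
```

Having per-class probabilities rather than hard pairwise votes is what makes it natural to update or re-weight individual pairwise machines online, as the abstract's setting requires.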

8.
Support vector machines (SVMs) are theoretically well-justified machine learning techniques that have been successfully applied to many real-world domains. Optimization methodologies play a central role in finding solutions of SVMs. This paper reviews representative and state-of-the-art techniques for optimizing the training of SVMs, especially SVMs for classification. The objective is to give readers an overview of the basic elements of, and recent advances in, training SVMs, and to enable them to develop and implement new optimization strategies for SVM-related research.

9.
This paper proposes a locality correlation preserving support vector machine (LCPSVM) that combines margin maximization between classes with local correlation preservation of class data. It is an SVM-like algorithm that explicitly accounts for the locality correlation within each class in both the margin and the penalty term of the optimization function. Canonical correlation analysis (CCA) is used to reveal hidden correlations between two datasets, and a variant of the correlation analysis model that implements locality preservation has been proposed by integrating local information into the objective function of CCA. Inspired by this idea, we propose a locality-correlation-preserving within-class scatter matrix to replace the within-class scatter matrix in the minimum class variance support vector machine (MCVSVM). This substitution preserves the locality correlation of the data and inherits the properties of the SVM and similarly modified support vector machines. LCPSVM is discussed under linearly separable, small-sample-size, and nonlinearly separable conditions, and experimental results on benchmark datasets demonstrate its effectiveness.

10.
In this paper we discuss sparse least squares support vector machines (sparse LS SVMs) trained in the empirical feature space, which is spanned by the mapped training data. First, we show that the kernel associated with the empirical feature space gives the same value as the kernel associated with the feature space if one of the kernel's arguments is mapped into the empirical feature space by the mapping function associated with the feature space. Using this fact, we show that training and testing of kernel-based methods can be done in the empirical feature space, and that training LS SVMs in the empirical feature space reduces to solving a set of linear equations. We then derive sparse LS SVMs by restricting the solution to the linearly independent training data in the empirical feature space, selected by Cholesky factorization. Support vectors correspond to the selected training data and do not change even if the value of the margin parameter is changed; thus, for linear kernels, the number of support vectors is at most the number of input variables. Computer experiments show that we can reduce the number of support vectors without deteriorating the generalization ability.
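The fact that LS-SVM training reduces to a linear system can be shown directly. The sketch below implements the standard LS-SVM formulation with a linear kernel (not the paper's empirical-feature-space or sparse variant; the toy data and gamma value are assumptions): with kernel matrix K and regularization parameter gamma, the bias b and dual coefficients alpha solve one bordered linear system:

```python
# Standard LS-SVM training: solve
#   [ 0    1^T          ] [b    ]   [0]
#   [ 1    K + I/gamma  ] [alpha] = [y]
# instead of a quadratic program.

import numpy as np

def lssvm_train(X, y, gamma=1.0):
    K = X @ X.T                          # linear kernel for simplicity
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]               # bias b, dual coefficients alpha

def lssvm_predict(X_train, b, alpha, x):
    return np.sign(alpha @ (X_train @ x) + b)

X = np.array([[1.0, 1.0], [2.0, 1.5], [-1.0, -1.0], [-2.0, -0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
b, alpha = lssvm_train(X, y, gamma=10.0)
assert all(lssvm_predict(X, b, alpha, x) == yi for x, yi in zip(X, y))
```

Note that in plain LS-SVM every alpha_i is generally nonzero, so every training point acts as a support vector; the paper's contribution is precisely a principled way to sparsify this solution.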

Shigeo Abe received the B.S. degree in Electronics Engineering, the M.S. degree in Electrical Engineering, and the Dr. Eng. degree, all from Kyoto University, Kyoto, Japan, in 1970, 1972, and 1984, respectively. After 25 years in industry, he was appointed full professor of Electrical Engineering at Kobe University in April 1997. He is now a professor in the Graduate School of Science and Technology, Kobe University. His research interests include pattern classification and function approximation using neural networks, fuzzy systems, and support vector machines. He is the author of Neural Networks and Fuzzy Systems (Kluwer, 1996), Pattern Classification (Springer, 2001), and Support Vector Machines for Pattern Classification (Springer, 2005). Dr. Abe was awarded an outstanding paper prize from the Institute of Electrical Engineers of Japan in 1984 and 1995. He is a member of the IEEE, INNS, and several Japanese societies.

11.
The SVM has been receiving increasing interest in areas ranging from its original application, pattern recognition, to others such as regression estimation, owing to its remarkable generalization performance. Unfortunately, the SVM is currently considerably slow in the test phase because of the number of support vectors, which has been a serious limitation for some applications. To overcome this problem, we propose an adaptive algorithm named feature vector selection (FVS) that selects feature vectors from the support vector solutions, based on the vector correlation principle and a greedy algorithm. The adaptive algorithm improves the sparsity of the solution and reduces testing time. Since the number of feature vectors can be selected adaptively as required, the generalization-complexity trade-off can be controlled directly. Computer simulations on regression estimation and pattern recognition show that FVS is a promising algorithm for simplifying the support vector machine solution.

12.
The roller bearing is one of the most widely used rotary elements in rotating machinery. The nature of a roller bearing's vibration reveals its condition, and the features that capture this nature must be extracted by indirect means. Statistical parameters such as kurtosis, standard deviation, and maximum value form a set of features that are widely used in fault diagnostics. Finding good features that discriminate between the different fault conditions of the bearing is often a problem: feature selection is an important phase in pattern recognition and requires detailed domain knowledge. This paper addresses the feature selection process using a decision tree, and uses a kernel-based neighborhood-score multi-class support vector machine (MSVM) for classification. The vibration signal from a piezoelectric transducer is captured for the following conditions: a good bearing, a bearing with an inner race fault, a bearing with an outer race fault, and a bearing with both inner and outer race faults. The statistical features are extracted and classified successfully using the MSVM, and the results are compared with those of a binary support vector machine (SVM).

13.
The existing multi-label support vector machine (Rank-SVM) has an extremely high computational complexity and lacks an intrinsic zero point for determining the relevant labels. In this paper, we propose a novel support vector machine for multi-label classification that both simplifies Rank-SVM and adds a zero label, resulting in a quadratic programming problem in which each class has an independent equality constraint. When the Frank-Wolfe method is used to solve this quadratic programming problem iteratively, the linear programming problem at each step decomposes into a series of sub-problems, which dramatically reduces the computational cost. On the well-known Yeast data set, our training procedure runs about 12 times faster than Rank-SVM in a C++ environment. Experiments on five benchmark data sets show that our method is a powerful candidate for multi-label classification, compared with five state-of-the-art multi-label classification techniques.
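The Frank-Wolfe method named in the abstract can be sketched on a toy problem (not the paper's multi-label QP; the objective and dimensions are invented): minimize f(x) = 0.5·||x||² + c·x over the probability simplex. Its defining feature, and the reason each step decomposes cheaply, is that the subproblem per iteration is linear, and over a simplex a linear problem is solved exactly by a single vertex:

```python
# Frank-Wolfe on the probability simplex for f(x) = 0.5*||x||^2 + c.x:
# the linear subproblem min_s <grad, s> over the simplex is solved by the
# vertex with the smallest gradient entry; the iterate stays feasible by
# taking convex combinations.

def frank_wolfe_simplex(c, n, iters=200):
    x = [1.0 / n] * n
    for t in range(iters):
        grad = [x[i] + c[i] for i in range(n)]       # gradient of f
        s = min(range(n), key=grad.__getitem__)      # best simplex vertex e_s
        step = 2.0 / (t + 2.0)                       # standard FW step size
        x = [(1.0 - step) * xi for xi in x]
        x[s] += step
    return x

x = frank_wolfe_simplex([0.0, 1.0, 1.0], 3)
assert abs(sum(x) - 1.0) < 1e-9     # iterates stay on the simplex
assert x[0] > 0.99                  # converges toward the optimal vertex e_0
```

No projection step is ever needed, which is what keeps the per-iteration cost low when the feasible set is a product of such constraints, one per class, as in the abstract's formulation.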

14.
15.
By modifying the least squares support vector machine, an extremely simple and fast classifier, the direct support vector machine, is obtained. Compared with the least squares support vector machine, this classifier only needs to directly invert a matrix of smaller size, which greatly reduces the computational cost without lowering the classification accuracy. It is proved theoretically that this matrix is invertible, which guarantees the uniqueness of the separating surface. For the linear case, the Sherman-Morrison-Woodbury formula is used to reduce the dimension of the matrix to be inverted, further decreasing the computational complexity and making the method applicable to larger sample sets. Numerical experiments show that the new classifier is feasible and has the above advantages.
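The dimension-reduction trick the abstract relies on is the Sherman-Morrison-Woodbury identity, which can be checked numerically (the matrix sizes and random data below are illustrative, not from the paper): inverting an n x n matrix of the form A + UVᵀ costs only a k x k inverse when A is cheap to invert and U, V are n x k with k much smaller than n:

```python
# Numerical check of Sherman-Morrison-Woodbury:
#   (A + U V^T)^{-1} = A^{-1} - A^{-1} U (I + V^T A^{-1} U)^{-1} V^T A^{-1}
# Only the k x k matrix (I + V^T A^{-1} U) needs a fresh inverse.

import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3
A = np.eye(n) * 2.0                       # cheap-to-invert part
U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))

A_inv = np.eye(n) / 2.0
small = np.linalg.inv(np.eye(k) + V.T @ A_inv @ U)   # k x k inverse only
smw = A_inv - A_inv @ U @ small @ V.T @ A_inv

direct = np.linalg.inv(A + U @ V.T)       # expensive n x n inverse
assert np.allclose(smw, direct)
```

In the linear SVM setting the low-rank factor comes from the data matrix, so for many samples and few input features the n x n inverse collapses to a d x d one, which is what makes the method scale to larger sample sets.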

16.
We propose the use of a bioluminescent whole-cell biosensor combined with a pattern classification algorithm to automatically detect and identify β-lactam antibiotic substances. Escherichia coli cells carrying a plasmid that harbors the luxCDABE genes under a β-lactam-sensitive promoter element are used as sensors. We present experimental measurements of the light production of the bioluminescent bacteria exposed to 11 antibiotic substances. The measured light-production patterns are classified using a support vector machine classifier. The accuracy and reliability of the classification suggest that this method could be used in the future to probe for new antibiotic substances.

17.
To accelerate the final convergence of the coordinate descent (CD) method when applied to linear support vector machines (SVMs), the Rosenbrock algorithm (R) is applied to the linear SVM. In the inner loop, R updates one component of w by solving a single-variable subproblem while keeping the other components fixed; in the outer loop, the Gram-Schmidt process is used to construct new search directions. Experimental results show that, compared with CD, R accelerates the final convergence and reaches higher test accuracy faster in classification.

18.
Recently, researchers have been focusing on the support vector machine (SVM) because of its useful applications in a number of areas, such as pattern recognition, multimedia, image processing, and bioinformatics. One of the main research issues is how to improve the efficiency of the original SVM model without degrading its classification performance. In this paper, we propose a modified SVM, based on the properties of support vectors and a pruning strategy, that preserves support vectors while eliminating redundant training vectors. Experiments on real images show that (1) the proposed approach reduces the number of input training vectors while preserving the support vectors, which leads to a significant reduction in computational cost at similar levels of accuracy, and (2) the approach also works well when applied to image segmentation.

19.
A parallel randomized support vector machine (PRSVM) and a parallel randomized support vector regression (PRSVR) algorithm, both based on a randomized sampling technique, are proposed in this paper. The proposed PRSVM and PRSVR have four major advantages over previous methods. (1) We prove that the proposed algorithms achieve, to the best of our knowledge, the fastest bounded average convergence rate among all SVM decomposition training algorithms; the bound is achieved by a unique priority-based sampling mechanism. (2) Unlike previous work (Provably fast training algorithms for support vector machines, 2001), the proposed algorithms handle general linearly non-separable SVM and general non-linear SVR problems; this improvement is achieved by modeling new LP-type problems based on the Karush-Kuhn-Tucker optimality conditions. (3) The proposed algorithms are the first parallel versions of randomized sampling algorithms for SVM and SVR; both the analytical convergence bound and the numerical results from a real application show good scalability. (4) We present demonstrations of the algorithms on both synthetic data and data from a real-world application. Performance comparisons with SVMlight show that the proposed algorithms can be implemented efficiently.

20.
Maximal-margin minimal-volume hypersphere support vector machine   (Cited by: 8; self-citations: 1; citations by others: 8)
Combining the idea of maximal between-class margin from the support vector machine (SVM) with the minimal within-class description volume of support vector data description (SVDD), a new learning machine, the maximal-margin minimal-volume hypersphere support vector machine (MMHSVM), is proposed. The model constructs two concentric hyperspheres of different sizes and maps the positive and negative samples inside the smaller hypersphere and outside the larger one, respectively. The objective function maximizes the margin between the two hyperspheres, thereby maximizing the between-class margin while minimizing the within-class volume of each class, which improves the model's classification ability. Theoretical analysis and experimental results show that the algorithm is effective.
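The two-concentric-hypersphere decision geometry can be sketched without the MMHSVM optimization itself (the center and radii below are fixed by hand, purely for illustration): positive samples should fall inside a small radius r1, negative samples outside a larger radius r2, and the shell between the two radii is the spherical analogue of the SVM margin:

```python
# Decision rule induced by two concentric hyperspheres with radii r1 < r2:
# inside the small sphere -> positive, outside the large sphere -> negative,
# in the shell between them -> inside the margin.

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(center, r1, r2, x):
    d = dist(center, x)
    if d <= r1:
        return +1       # inside the small hypersphere: positive class
    if d >= r2:
        return -1       # outside the large hypersphere: negative class
    return 0            # inside the spherical margin

center, r1, r2 = [0.0, 0.0], 1.0, 3.0
assert classify(center, r1, r2, [0.5, 0.0]) == 1
assert classify(center, r1, r2, [4.0, 0.0]) == -1
assert classify(center, r1, r2, [2.0, 0.0]) == 0
```

Maximizing r2 − r1 while shrinking r1 is the geometric content of the abstract's joint margin-maximization and volume-minimization objective.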
