Similar Documents
20 similar documents were found (search time: 15 ms).
1.
The standard support vector machine (SVM) is not suitable for classifying large data sets because of its high training complexity. A convex hull can simplify SVM training; however, classification accuracy drops when inseparable points exist. This paper introduces a novel method for SVM classification, called the convex–concave hull SVM (CCH-SVM). After grid processing, the convex hull is used to find extreme points. Then, the Jarvis march method is used to determine the concave (non-convex) hull for the inseparable points. Finally, the vertices of the convex–concave hull are used for SVM training. The proposed CCH-SVM classifier has distinctive advantages in dealing with large data sets. We apply the proposed method to several benchmark problems. Experimental results demonstrate that our approach achieves good classification accuracy while training significantly faster than other SVM classifiers, and its classification accuracy is higher than that of other convex hull SVM methods.
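A minimal sketch of the sample-reduction idea behind this family of methods, assuming 2-D features and SciPy/scikit-learn: each class is reduced to its convex-hull vertices before fitting a standard SVM. The grid processing and the concave-hull (Jarvis march) refinement described in the abstract are omitted, and `hull_reduced_svm` is an illustrative name, not the authors' code.

```python
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.svm import SVC

def hull_reduced_svm(X, y, **svm_kwargs):
    """Fit an SVM on the convex-hull vertices of each class (2-D features only)."""
    keep = []
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        hull = ConvexHull(X[idx])          # extreme points of this class
        keep.append(idx[hull.vertices])
    keep = np.concatenate(keep)
    return SVC(**svm_kwargs).fit(X[keep], y[keep]), keep

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(3, 1, (500, 2))])
    y = np.array([0] * 500 + [1] * 500)
    clf, kept = hull_reduced_svm(X, y, kernel="linear", C=1.0)
    print(f"trained on {len(kept)} of {len(X)} samples")
```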

2.
A novel approach to detecting and classifying underwater bottom mines in littoral environments from acoustic backscattered signals is considered. We begin by defining a robust short-time Fourier transform to convert the received echo into a time–frequency (TF) plane. A local region of interest is identified in the spectrogram, and TF-plane features that are robust to reverberation and noise disturbances are then built. Finally, the echo features are fed to a relevance vector machine (RVM) classifier, a Bayesian extension of the support vector machine (SVM). To evaluate the performance of a classifier based on this approach, classification experiments on two typical types of bottom mines were performed with a broadband active sonar, with each target lying on the lake bottom at a depth of 20 m. The case study exploits the robustness of the feature extraction scheme; furthermore, the RVM yields a much sparser solution and higher classification accuracy than the SVM in an impulsive noise environment.
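A hedged sketch of the front end only, assuming SciPy/scikit-learn: a spectrogram is computed with a short-time Fourier transform and pooled on a coarse time-frequency grid into a feature vector. The paper's robust STFT, region-of-interest selection, and RVM classifier are not reproduced here; an ordinary SVM stands in for the RVM, and the echoes and labels are synthetic placeholders.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC

def tf_features(echo, fs, n_bands=16, n_frames=8):
    """Crude TF features: mean log-energy of the spectrogram on a coarse grid."""
    f, t, S = spectrogram(echo, fs=fs, nperseg=256, noverlap=128)
    freq_groups = np.array_split(np.arange(S.shape[0]), n_bands)
    time_groups = np.array_split(np.arange(S.shape[1]), n_frames)
    return np.array([np.log1p(S[np.ix_(fi, ti)].mean())
                     for fi in freq_groups for ti in time_groups])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    fs = 100_000
    echoes = [rng.normal(size=4096) for _ in range(40)]   # placeholder echoes
    labels = rng.integers(0, 2, size=40)                  # placeholder mine types
    X = np.vstack([tf_features(e, fs) for e in echoes])
    clf = SVC(kernel="rbf", probability=True).fit(X, labels)
    print(X.shape, clf.score(X, labels))
```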

3.
In this paper, we propose a novel nonparallel hyperplane classifier, named the ν-nonparallel support vector machine (ν-NPSVM), for binary classification. Building on our recently proposed nonparallel support vector machine (NPSVM), which has been shown to be superior to the twin support vector machines, ν-NPSVM is parameterized by the quantity ν, which allows one to effectively control the number of support vectors. By combining ν-support vector classification and ν-support vector regression to construct the primal problems, ν-NPSVM inherits the advantages of the ν-support vector machine and so enables us to eliminate one of the other free parameters of NPSVM, namely the accuracy parameter ε or the regularization constant C. We describe the algorithm, give some theoretical results concerning the meaning and the choice of ν, and report experimental results on a large number of data sets to show the effectiveness of our method.
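ν-NPSVM itself is not available in standard libraries; the sketch below only illustrates the ν-parameterization it inherits, where ν upper-bounds the fraction of margin errors and lower-bounds the fraction of support vectors, using scikit-learn's NuSVC as a stand-in rather than the nonparallel-hyperplane model.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import NuSVC

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
for nu in (0.1, 0.3, 0.5):
    clf = NuSVC(nu=nu, kernel="rbf").fit(X, y)
    frac_sv = clf.support_.size / len(X)       # nu lower-bounds this fraction
    print(f"nu={nu:.1f}  support-vector fraction={frac_sv:.2f}")
```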

4.
This paper shows, both theoretically and experimentally, that existing SVM incremental learning algorithms lose information carried by the incremental samples, and proposes a new incremental learning algorithm for support vector machines based on hyperplane distance. Exploiting the geometric character of support vectors, the algorithm uses the hyperplane distance to extract samples, selects the samples most likely to become support vectors to form an edge vector set, and trains the support vector machine on that set. This reduces the number of training samples and effectively improves the training speed of incremental learning. Experiments on Chinese web-page classification show that the algorithm effectively reduces the number of training samples while accumulating historical information, and that the HD-SVM algorithm achieves higher training speed and better classification precision.
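A minimal sketch (not the authors' exact HD-SVM) of hyperplane-distance sample selection, assuming scikit-learn: old samples close to the current decision boundary are kept, merged with the new batch, and the SVM is retrained. The threshold `tau` is an assumed illustration knob.

```python
import numpy as np
from sklearn.svm import SVC

def incremental_step(clf, X_old, y_old, X_new, y_new, tau=1.5):
    """Retrain on boundary-near retained samples plus the incremental batch."""
    margin = np.abs(clf.decision_function(X_old))   # |f(x)|: hyperplane-distance proxy
    near = margin <= tau                            # likely future support vectors
    X_train = np.vstack([X_old[near], X_new])
    y_train = np.concatenate([y_old[near], y_new])
    return SVC(kernel="linear").fit(X_train, y_train), X_train, y_train

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X0 = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(2, 1, (200, 5))])
    y0 = np.array([0] * 200 + [1] * 200)
    clf = SVC(kernel="linear").fit(X0, y0)
    X1 = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(2, 1, (50, 5))])
    y1 = np.array([0] * 50 + [1] * 50)
    clf, X_used, _ = incremental_step(clf, X0, y0, X1, y1)
    print(f"retrained on {len(X_used)} samples instead of {len(X0) + len(X1)}")
```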

5.
A novel ν-twin support vector machine with Universum data (\(\mathfrak {U}_{\nu }\)-TSVM) is proposed in this paper. \(\mathfrak {U}_{\nu }\)-TSVM allows the prior knowledge embedded in unlabeled samples to be incorporated into supervised learning, with the aim of improving generalization performance. Unlike the conventional \(\mathfrak {U}\)-SVM, \(\mathfrak {U}_{\nu }\)-TSVM employs two hinge loss functions to make the Universum data lie in a nonparallel insensitive loss tube, which lets it exploit this prior knowledge more flexibly. In addition, the newly introduced parameters ν1 and ν2 in \(\mathfrak {U}_{\nu }\)-TSVM have a better theoretical interpretation than the penalty factor c in \(\mathfrak {U}\)-TSVM. Numerical experiments on seventeen benchmark datasets, handwritten digit recognition, and gender classification indicate that the Universum data indeed contribute to improving prediction accuracy. Moreover, in terms of prediction accuracy our \(\mathfrak {U}_{\nu }\)-TSVM is far superior to the other three algorithms (\(\mathfrak {U}\)-SVM, ν-TSVM, and \(\mathfrak {U}\)-TSVM).

6.
Since approximately 90% of people with Parkinson's disease (PD) suffer from speech disorders, including disorders of laryngeal, respiratory, and articulatory function, voice analysis allows the disease to be diagnosed remotely at an early stage, more reliably and economically. Previous work has focused on distinguishing healthy people from people with Parkinson's disease (PWP). In this paper, we go further with a multiclass formulation comprising three Parkinson stages and a healthy-control class. From a dataset of 40 features, all features are analyzed and 9 are selected to classify PWP subjects into the four classes, based on the Unified Parkinson's Disease Rating Scale (UPDRS). Various classifiers are compared to determine which gives the best results. The results show that the subspace discriminant classifier reaches more than 93% overall classification accuracy.
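A hedged sketch of the pipeline outlined above, assuming scikit-learn: 9 of 40 features are selected and fed to a "subspace discriminant"-style classifier, approximated here by bagging linear discriminant analysis over random feature subspaces. The data below is a synthetic placeholder, not the UPDRS-labelled voice recordings.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import BaggingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(240, 40))        # placeholder: 40 voice features per subject
y = rng.integers(0, 4, size=240)      # placeholder: healthy + 3 Parkinson stages

model = make_pipeline(
    SelectKBest(f_classif, k=9),      # keep 9 of the 40 features
    BaggingClassifier(LinearDiscriminantAnalysis(),
                      n_estimators=30, max_features=0.5,
                      bootstrap=False, bootstrap_features=True),
)
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```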

7.
Twin support vector machine (TSVM) is regarded as a milestone in the development of the powerful SVM. It finds two nonparallel planes by solving a pair of smaller-sized quadratic programming problems rather than a single large one, which makes the learning speed of TSVM approximately four times faster than that of the standard SVM. However, TSVM implements the empirical risk minimization principle, which easily leads to over-fitting and reduces the prediction accuracy of the classifier. ν-TSVM, a variant of TSVM, also implements the empirical risk minimization principle. To enhance the generalization ability of the classifier, we propose an improved ν-TSVM by introducing a regularization term into the objective function, so that the objective has two parts: one maximizes the margin between the two parallel hyper-planes, and the other minimizes the training errors of the two classes of samples. The structural risk minimization principle is therefore implemented in our improved ν-TSVM. Numerical experiments on one artificial dataset and nine benchmark datasets show that our improved ν-TSVM yields better generalization performance than SVM, ν-SVM, and ν-TSVM. Moreover, numerical experiments with different proportions of outliers demonstrate that our improved ν-TSVM is robust and stable. Finally, we apply our improved ν-TSVM to two BCI competition datasets and again obtain better prediction accuracy.

8.
Twin support vector machine (TSVM) is a machine learning algorithm that finds a nonparallel plane for each class by solving a pair of smaller-sized quadratic programming problems (QPPs) rather than a single large one. However, when constructing the classification plane for one class, a large number of samples of that class appear in the objective function while only a few samples of the other class are considered, which easily results in over-fitting. In addition, TSVM gives the same penalty to every misclassified sample, even though misclassified samples affect the decision hyper-plane differently. To overcome these two disadvantages, we introduce rough set theory into ν-TSVM and propose a rough margin-based ν-TSVM. In the proposed algorithm, points at different positions have different effects on the separating hyper-plane. We first construct the rough lower margin, rough upper margin, and rough boundary in ν-TSVM, and then assign different penalties to misclassified samples according to their positions. The new classifier avoids the over-fitting problem to a certain extent. Numerical experiments on one artificial dataset and six benchmark datasets demonstrate the feasibility and validity of the proposed algorithm.

9.
An ε-twin support vector machine for regression
A special class of recurrent neural network termed the Zhang neural network (ZNN), described by implicit dynamics, has recently been introduced for the online solution of time-varying convex quadratic programming (QP) problems. Global exponential convergence of such a ZNN model has been established theoretically in the error-free situation. This paper analyzes the performance of the perturbed ZNN model when a special type of activation function (namely, power-sum activation functions) is used to solve time-varying QP problems. Robustness analysis and simulation results demonstrate the superior characteristics of power-sum activation functions in the presence of large ZNN-implementation errors, compared with linear activation functions. Furthermore, an application to inverse kinematic control of a redundant robot arm verifies the feasibility and effectiveness of the ZNN model for solving time-varying QP problems.
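A minimal numerical sketch, under assumptions not taken from the paper (a small equality-constrained QP, finite-difference time derivatives, and explicit Euler integration): a ZNN drives the error of the time-varying KKT system M(t) y(t) = u(t) to zero through the design M(t) y'(t) = u'(t) - M'(t) y(t) - gamma * phi(M(t) y(t) - u(t)), with the power-sum activation phi(e) = e + e^3 + e^5 applied elementwise.

```python
import numpy as np

def phi(e, terms=3):
    """Power-sum activation: sum of odd powers of the error, applied elementwise."""
    return sum(e ** (2 * k - 1) for k in range(1, terms + 1))

def znn_qp(T=5.0, dt=1e-3, gamma=20.0):
    """Track min 0.5 x'W(t)x + q(t)'x  s.t.  A(t)x = b(t) via its KKT system."""
    def M(t):                                   # KKT matrix [[W(t), A'], [A, 0]]
        W = np.array([[2 + np.sin(t), 0.0], [0.0, 2 + np.cos(t)]])
        A = np.array([[1.0, 1.0]])
        return np.block([[W, A.T], [A, np.zeros((1, 1))]])
    def u(t):                                   # right-hand side [-q(t); b(t)]
        return np.array([-np.cos(t), -np.sin(t), 1.0 + 0.5 * np.sin(t)])
    y, d = np.zeros(3), 1e-4                    # state [x1, x2, lambda]; FD step
    for t in np.arange(0.0, T, dt):
        Mt, ut = M(t), u(t)
        dM, du = (M(t + d) - Mt) / d, (u(t + d) - ut) / d
        err = Mt @ y - ut
        ydot = np.linalg.solve(Mt, du - dM @ y - gamma * phi(err))
        y = y + dt * ydot                       # explicit Euler step
    return y[:2], np.linalg.norm(M(T) @ y - u(T))

x, residual = znn_qp()
print("x(T) ~", x, " KKT residual ~", residual)
```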

10.
Multimedia Tools and Applications - Spam tweets can cause numerous problems for users. An automatic method for detecting spam tweets is proposed. This method is based on...

11.
The Journal of Supercomputing - In this paper, a novel gene selection method benefiting from feature clustering and feature discretization is developed. For large numbers of genes, unsupervised fuzzy...

12.
Fingerprint classification reduces the number of possible matches in automated fingerprint identification systems by categorizing fingerprints into predefined classes. Support vector machines (SVMs) are widely used in pattern classification and have produced high accuracy in fingerprint classification. To apply SVMs effectively to multi-class fingerprint classification, we propose a novel method in which the SVMs are generated with the one-vs-all (OVA) scheme and dynamically ordered with naïve Bayes classifiers. This is necessary to break the ties that frequently occur in multi-class systems built from OVA SVMs. More specifically, representative fingerprint features, namely the FingerCode, singularities, and pseudo ridges, are used to train the OVA SVMs and the naïve Bayes classifiers. The proposed method has been validated on the NIST-4 database, producing a statistically significant classification accuracy of 90.8% for five-class classification. The results show the benefit of integrating different fingerprint features as well as the usefulness of the proposed method in multi-class fingerprint classification.
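A hedged sketch of the tie-breaking idea, assuming scikit-learn and synthetic placeholder features in place of the FingerCode, singularity, and pseudo-ridge descriptors: one-vs-all SVMs are evaluated in the order suggested by a naive Bayes classifier's posteriors, and the first class whose SVM accepts the sample wins, falling back to the naive Bayes ranking when none accepts. `NBOrderedOvaSVM` is an illustrative name, not the authors' implementation.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

class NBOrderedOvaSVM:
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.nb_ = GaussianNB().fit(X, y)               # supplies the evaluation order
        self.svms_ = {c: SVC(kernel="rbf").fit(X, (y == c).astype(int))
                      for c in self.classes_}           # one-vs-all SVMs
        return self

    def predict(self, X):
        post = self.nb_.predict_proba(X)                # NB posterior per class
        out = np.empty(len(X), dtype=self.classes_.dtype)
        for i, x in enumerate(np.asarray(X)):
            order = np.argsort(post[i])[::-1]           # most plausible class first
            out[i] = self.classes_[order[0]]            # NB fallback if no SVM accepts
            for j in order:
                c = self.classes_[j]
                if self.svms_[c].decision_function(x.reshape(1, -1))[0] > 0:
                    out[i] = c                          # first accepting OVA SVM wins
                    break
        return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = np.repeat(np.arange(5), 60)                     # placeholder: 5 fingerprint classes
    X = rng.normal(size=(300, 12)) + y[:, None] * 0.8   # placeholder features
    print("training accuracy:", (NBOrderedOvaSVM().fit(X, y).predict(X) == y).mean())
```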

13.
A multi-criteria feature selection method, the sequential multi-criteria feature selection algorithm (SMCFS), is proposed for applications requiring high precision and low time cost. By combining the consistency and the differences of different evaluation criteria, SMCFS applies more than one evaluation criterion sequentially to improve the efficiency of feature selection. With a novel agent genetic algorithm (chain-like agent GA), SMCFS can achieve high feature selection precision at a time cost similar to that of a filter method with a single evaluation criterion. Several groups of comparison experiments demonstrate the performance of SMCFS, comparing it with different feature selection methods on three datasets from the UCI database. The experimental results show that SMCFS attains low time cost and high feature selection precision and is well suited to this kind of application.
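A much-simplified sketch of the sequential multi-criteria idea, without the chain-like agent GA the abstract refers to: a cheap criterion first prunes the feature set, and a second criterion then refines the survivors. The criteria (ANOVA F-score, then mutual information), the stage sizes `k1`/`k2`, and the helper name are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif

def sequential_multicriteria_select(X, y, k1=50, k2=10):
    """Apply two evaluation criteria in sequence; return the surviving feature indices."""
    stage1 = SelectKBest(f_classif, k=min(k1, X.shape[1])).fit(X, y)
    idx1 = stage1.get_support(indices=True)                  # survivors of criterion 1
    stage2 = SelectKBest(mutual_info_classif,
                         k=min(k2, len(idx1))).fit(X[:, idx1], y)
    return idx1[stage2.get_support(indices=True)]            # final selected features

if __name__ == "__main__":
    from sklearn.datasets import make_classification
    X, y = make_classification(n_samples=200, n_features=80,
                               n_informative=10, random_state=0)
    print(sequential_multicriteria_select(X, y))
```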

14.
Land use classification is an important part of many remote sensing applications. A great deal of research has gone into applying statistical and neural network classifiers to remote-sensing images. This research involves the study and implementation of a pattern recognition technique introduced within the framework of statistical learning theory, the Support Vector Machine (SVM), and its application to remote-sensing image classification. Standard classifiers such as the Artificial Neural Network (ANN) need a number of training samples that increases exponentially with the dimension of the input feature space. With a limited number of training samples, the classification rate thus decreases as the dimensionality increases. SVMs are independent of the dimensionality of the feature space, as the main idea behind this classification technique is to separate the classes with a surface that maximizes the margin between them, using boundary pixels to create the decision surface. Results from SVMs are compared with traditional Maximum Likelihood Classification (MLC) and an ANN classifier. The findings suggest that the ANN and SVM classifiers perform better than the traditional MLC, and that the SVM and the ANN give comparable results. However, accuracy depends on factors such as the number of hidden nodes (for the ANN) and kernel parameters (for the SVM). The training time taken by the SVM is several orders of magnitude less.
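A hedged sketch comparing the three classifier families the study contrasts, assuming scikit-learn and synthetic per-pixel spectral features: an SVM, a feed-forward ANN, and a Gaussian maximum-likelihood classifier, which is approximated here by quadratic discriminant analysis rather than a remote-sensing MLC implementation.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 6))            # placeholder: 6 spectral bands per pixel
y = rng.integers(0, 4, size=600)         # placeholder: 4 land-use classes

models = {
    "SVM": SVC(kernel="rbf", C=10.0, gamma="scale"),
    "ANN": MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000),
    "MLC (QDA approximation)": QuadraticDiscriminantAnalysis(),
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean())
```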

15.
16.
Choosing optimal parameters for support vector regression (SVR) is an important step in SVR design that strongly affects the performance of SVR. In this paper, based on an analysis of the influence of the SVR parameters on the generalization error, a new two-step approach for selecting SVR parameters is proposed. First, the kernel function and SVR parameters are roughly optimized by a genetic algorithm; then the kernel parameter is finely adjusted by a local linear search. This approach has been successfully applied to a prediction model of the sulfur content in hot metal. The experimental results show that the proposed approach yields better SVR generalization performance than other methods.
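A hedged sketch of the two-step idea, assuming scikit-learn: a coarse global search over the SVR hyper-parameters (a random search stands in here for the paper's genetic algorithm) is followed by a fine local search over the kernel parameter gamma. Parameter ranges and the helper names are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

def cv_score(X, y, C, eps, gamma):
    return cross_val_score(SVR(C=C, epsilon=eps, gamma=gamma), X, y, cv=3).mean()

def two_step_svr_search(X, y, n_coarse=30, seed=0):
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_coarse):                          # step 1: coarse global search
        C, eps, gamma = (10 ** rng.uniform(-1, 3),
                         10 ** rng.uniform(-3, 0),
                         10 ** rng.uniform(-3, 1))
        s = cv_score(X, y, C, eps, gamma)
        if best is None or s > best[0]:
            best = (s, C, eps, gamma)
    s, C, eps, gamma = best
    for g in gamma * np.linspace(0.5, 2.0, 7):         # step 2: fine local search on gamma
        s_g = cv_score(X, y, C, eps, g)
        if s_g > s:
            s, gamma = s_g, g
    return {"C": C, "epsilon": eps, "gamma": gamma, "cv_r2": s}

if __name__ == "__main__":
    from sklearn.datasets import make_regression
    X, y = make_regression(n_samples=200, n_features=5, noise=5.0, random_state=0)
    print(two_step_svr_search(X, y))
```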

17.
Based on the mechanisms of immunodominance and clonal selection theory, we propose a new multiobjective optimization algorithm, the immune dominance clonal multiobjective algorithm (IDCMA). IDCMA is unique in that the fitness values of currently dominated individuals are assigned according to a custom distance measure, termed Ab-Ab affinity, between each dominated individual and one of the nondominated individuals found so far. According to their Ab-Ab affinity values, all dominated individuals (antibodies) are divided into two kinds: subdominant antibodies and cryptic antibodies. Local search is applied only to the subdominant antibodies; the cryptic antibodies are redundant and play no role during local search, but they can become subdominant (active) antibodies during subsequent evolution. Furthermore, a new immune operation, clonal proliferation, is introduced to enhance local search. Using clonal proliferation, IDCMA reproduces individuals and selects their improved, matured progenies after local search, so that single individuals exploit their surrounding space effectively and newcomers provide a broader exploration of the search space. A performance comparison of IDCMA with MISA, NSGA-II, SPEA, PAES, NSGA, VEGA, NPGA, and HLGA on six well-known multiobjective function optimization problems and nine multiobjective 0/1 knapsack problems shows that IDCMA performs well in converging to approximate Pareto-optimal fronts with a good distribution.

18.
Twin support vector regression (TSVR) was proposed recently as a novel regressor that tries to find a pair of nonparallel planes, i.e. \(\epsilon\)-insensitive up- and down-bounds, by solving two related SVM-type problems. Though TSVR exhibits good performance compared with conventional methods like SVR, it suffers from the following issues: (1) it lacks model complexity control and thus may incur overfitting and suboptimal solutions; (2) it needs to solve a pair of quadratic programming problems that are relatively complex to implement; (3) it is sensitive to outliers; and (4) its solution is not sparse. To address these problems, we propose a novel regression algorithm termed robust and sparse twin support vector regression. The central idea is to reformulate TSVR as a convex problem by first introducing a regularization technique and then deriving a linear programming (LP) formulation that is not only simple but also allows robustness and sparseness. Instead of solving the resulting LP problem in the primal, we present a Newton algorithm with Armijo step-size to solve the corresponding exact exterior penalty problem. Experimental results on several publicly available benchmark data sets show the feasibility and effectiveness of the proposed method.

19.
The objective of this paper is to investigate how a Danger Theory based Artificial Immune System, in particular the Dendritic Cell Algorithm (DCA), can detect an attack on a sensor network. The method is validated using two separate implementations: a simulation using J-sim and an implementation for the T-mote Sky sensor using TinyOS. This paper also introduces a new sensor network attack, called an Interest Cache Poisoning Attack, and investigates in a series of experiments how the DCA can be applied to detect this attack.

20.
Iterative algorithms are well suited to reconstructing CT images from noisy or truncated projection data; however, they require significant computational time. Although parallelization can reduce the computational time, a large amount of communication overhead becomes an obstacle to its performance (Li et al. in J. X-Ray Sci. Technol. 13:1–10, 2005). To overcome this problem, we propose an innovative parallel method based on the local iterative CT reconstruction algorithm (Wang et al. in Scanning 18:582–588, 1996 and IEEE Trans. Med. Imaging 15(5):657–664, 1996). The object to be reconstructed is partitioned into a number of subregions, each assigned to a different processing element (PE). Within each PE, local iterative reconstruction is performed to recover the subregion. Several numerical experiments were conducted on a high-performance computing cluster, with the FORBILD head phantom (Lauritsch and Bruder) used as a benchmark to measure parallel performance. The experimental results show that the proposed parallel algorithm significantly reduces the reconstruction time, achieving high speedup and efficiency.
