Similar Documents
20 similar documents found.
1.
Multi-Domain Sentiment Classification with Classifier Combination
State-of-the-art studies on sentiment classification are typically domain-dependent and domain-restricted. In this paper, we aim to reduce domain dependency and improve overall performance simultaneously by proposing an efficient multi-domain sentiment classification algorithm. Our method employs multiple classifier combination: we first train single-domain classifiers separately on domain-specific data, and then combine the classifiers for the final decision. Our experiments show that this approach performs much better than both the single-domain classification approach (using the training data individually) and the mixed-domain classification approach (simply combining all the training data). In particular, classifier combination with the weighted sum rule obtains an average error reduction of 27.6% over single-domain classification.
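Below is a minimal sketch of the weighted-sum combination idea, assuming scikit-learn; the domain names, features and weights are synthetic stand-ins, not the paper's data or tuned values.

```python
# Weighted-sum combination of per-domain classifiers (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# hypothetical per-domain training data: (features, binary sentiment labels)
domains = {name: (rng.normal(size=(200, 20)), rng.integers(0, 2, 200))
           for name in ["books", "dvd", "electronics"]}

# 1. Train one classifier per domain on its own data.
clfs = {d: LogisticRegression(max_iter=1000).fit(X, y) for d, (X, y) in domains.items()}

# 2. Combine probabilistic outputs with a weighted sum rule (weights assumed here;
#    in practice they could be tuned on validation data).
weights = {"books": 0.5, "dvd": 0.3, "electronics": 0.2}

def predict_combined(x):
    probs = sum(w * clfs[d].predict_proba(x.reshape(1, -1))[0] for d, w in weights.items())
    return probs.argmax()

print(predict_combined(rng.normal(size=20)))
```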

2.
Alternating Feature Spaces in Relevance Feedback
Image retrieval using relevance feedback can be treated as a two-class learning and classification process. The user-labelled relevant and irrelevant images are regarded as positive and negative training samples, based on which a classifier is trained dynamically; the classifier in turn classifies all images in the database. In practice, the number of training samples is very small because users are often impatient. Moreover, the positive samples are usually not representative, since they are the nearest ones to the query and thus less informative. The insufficiency of training samples, in both quantity and variety, significantly constrains the generalization ability of the classifier. In this paper, we propose a novel relevance feedback approach that aims to collect more representative samples and hence improve the performance of the classifier. Image labeling and classifier training are conducted in two complementary image feature spaces. Since the samples are distributed differently in the two spaces, the positive samples may be more informative in one feature space than in the other. The two complementary feature spaces are alternated iteratively during the feedback process. To choose appropriate complementary feature spaces, we present two methods to measure the complementarity between two feature spaces quantitatively. Our experimental results on 10,000 images indicate that the proposed feedback approach significantly improves image retrieval performance.

3.
Various methods for ensemble selection and classifier combination have been designed to optimize the performance of ensembles of classifiers. However, the use of a large number of features in the training data can degrade the classification performance of machine learning algorithms. The objective of this paper is to present a novel feature elimination (FE) based ensemble learning method that extends an existing machine learning environment. Standard 12-lead ECG recordings are used to diagnose arrhythmia by classifying subjects as normal or abnormal. The advantage of the proposed approach is that it reduces the size of the feature space by applying several feature elimination methods, whose decisions are coalesced to form a fused data set. The idea behind this work is thus to discover a reduced feature space such that a classifier built from this small data set performs no worse than a classifier built from the original data set. A random-subspace ensemble classifier is used with PART as the base classifier. The proposed approach has been implemented and evaluated on the UCI ECG signal data. Classification performance is evaluated using mean absolute error, root mean squared error, relative absolute error, F-measure, classification accuracy, receiver operating characteristic (ROC) and area under the curve. The proposed approach achieves an overall classification accuracy of 91.11% on the unseen test set, and it performs well for ensemble sizes of 15 and 20.
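A small illustration of a random-subspace ensemble over a reduced feature set, assuming scikit-learn; BaggingClassifier's default decision-tree base learner stands in for PART, and a standard UCI data set stands in for the ECG data.

```python
# Feature selection followed by a random-subspace ensemble of decision trees.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = load_breast_cancer(return_X_y=True)       # stand-in for the UCI ECG data

pipe = make_pipeline(
    SelectKBest(f_classif, k=10),                 # feature elimination step
    BaggingClassifier(n_estimators=15,            # ensemble size, as in the paper's range
                      max_features=0.5,           # each tree sees a random half of the features
                      bootstrap=False, bootstrap_features=True, random_state=0),
)
print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean().round(3))
```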

4.
Applying classification approaches from machine learning to the medical field is a promising direction, as it could potentially save a large amount of medical resources and reduce the impact of error-prone subjective diagnosis. However, low accuracy is currently the biggest challenge for classification. Many approaches have been developed to improve classification performance, most of which focus on extending the layers or nodes of a neural network (NN) or on combining a classifier with domain knowledge from the medical field. These extensions may improve classification performance, but classifiers trained on one dataset may not adapt to another, and the layers and nodes of a neural network cannot be extended indefinitely in practice. To overcome these problems, we propose an approach that employs an Auto-Encoder (AE) model to improve classification performance. Specifically, we exploit the compression capability of the encoder to generate latent compressed vectors that represent the original samples, and then train a regular classifier on those compressed vectors instead of the original data. In addition, we explore the classification performance for different extracted features by enumerating the number of hidden nodes used to store them. Comprehensive experiments on a medical dataset of conjunctivitis and the STL-10 dataset show that the proposed AE-based model not only improves classification accuracy but also helps reduce the false positive rate.
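A rough sketch of the encode-then-classify idea, assuming scikit-learn; here an MLPRegressor trained to reconstruct its input plays the role of the autoencoder, and the data are synthetic.

```python
# Train a one-hidden-layer reconstruction network, then classify its hidden activations.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 64))
y = (X[:, :8].sum(axis=1) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

n_hidden = 16   # number of latent nodes; the paper enumerates over this
ae = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation="relu",
                  max_iter=2000, random_state=0).fit(X_tr, X_tr)   # reconstruct the input

def encode(X):
    # manual forward pass to the hidden layer (the "encoder" half)
    return np.maximum(0, X @ ae.coefs_[0] + ae.intercepts_[0])

clf = LogisticRegression(max_iter=1000).fit(encode(X_tr), y_tr)
print("accuracy on compressed features:", clf.score(encode(X_te), y_te))
```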

5.
Classification is a key problem in machine learning and data mining. Classification algorithms predict the class of a new instance after being trained on data representing past experience in classifying instances. However, the presence of a large number of features in the training data can hurt the classification capacity of a machine learning algorithm. The feature selection problem involves discovering a subset of features such that a classifier built only with this subset attains predictive accuracy no worse than a classifier built from the entire set of features. Several algorithms have been proposed to solve this problem. In this paper we discuss how parallelism can be used to improve the performance of feature selection algorithms. In particular, we present, discuss and evaluate a coarse-grained parallel version of the feature selection algorithm FortalFS. This algorithm performs well compared with other solutions and has characteristics that make it a good candidate for parallelization. Our parallel design is based on the master-slave design pattern. Promising results show that this approach achieves near-optimal speedups in the context of Amdahl's Law.
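A sketch of the master-slave pattern for scoring candidate feature subsets in parallel, assuming scikit-learn and the Python standard library; the subsets, scoring and data set are generic stand-ins rather than FortalFS itself.

```python
# Master-slave evaluation of candidate feature subsets with a process pool.
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

def evaluate(subset):
    # slave task: score one candidate subset with cross-validation
    score = cross_val_score(DecisionTreeClassifier(random_state=0),
                            X[:, list(subset)], y, cv=3).mean()
    return subset, score

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    candidates = [tuple(rng.choice(X.shape[1], size=10, replace=False)) for _ in range(16)]
    with ProcessPoolExecutor() as pool:          # master farms subsets out to workers
        best = max(pool.map(evaluate, candidates), key=lambda r: r[1])
    print("best subset:", best[0], "CV accuracy:", round(best[1], 3))
```

Since subset evaluations are independent, the serial fraction is small and Amdahl's Law (speedup = 1 / ((1 - p) + p/n) for parallel fraction p on n workers) predicts near-linear scaling.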

6.
In this paper, we introduce a new adaptive rule-based classifier for multi-class classification of biological data that addresses several problems in classifying such data: overfitting, noisy instances and class imbalance. Rules are an attractive way of representing data in a human-interpretable form. The proposed rule-based classifier combines the random subspace and boosting approaches with an ensemble of decision trees to construct a set of classification rules without global optimisation: the random subspace approach is used to avoid overfitting, boosting to handle noisy instances, and the ensemble of decision trees to deal with the class-imbalance problem. The classifier uses two popular classification techniques, decision trees and the k-nearest-neighbour algorithm. Decision trees are used for evolving classification rules from the training data, while k-nearest-neighbour is used for analysing misclassified instances and removing vagueness between contradictory rules. The classifier runs a series of k iterations to develop a set of classification rules from the training data and, in a boosting flavour, pays more attention to the misclassified instances in the next iteration. This paper focuses in particular on building an optimal ensemble classifier to improve the prediction accuracy of DNA variant identification and classification. The performance of the proposed classifier is compared with well-established machine learning and data mining algorithms on genomic data (148 Exome data sets) of Brugada syndrome and 10 real benchmark life sciences data sets from the UCI (University of California, Irvine) machine learning repository. The experimental results indicate that the proposed classifier achieves exemplary classification accuracy on different types of biological data. Overall, it offers good prediction accuracy for classifying new DNA variants, with noisy and misclassified variants handled so as to increase test performance.

7.
Breast cancer is the most common cancer among women. In CAD systems, several studies have investigated the use of the wavelet transform as a multiresolution tool for texture analysis, whose outputs can serve as inputs to a classifier. For classification, the polynomial classifier has the advantage of providing a single model for optimal separation of the classes. In this paper, a system is proposed for texture analysis and classification of lesions in mammographic images. Multiresolution features were extracted from the region of interest of a given image, computed with three different wavelet functions: Daubechies 8, Symlet 8 and biorthogonal 3.7. For classification, we used the polynomial classification algorithm to label the mammogram images as normal or abnormal, and compared it with other artificial intelligence algorithms (decision tree, SVM, k-NN). A receiver operating characteristic (ROC) curve is used to evaluate the performance of the proposed system. Our system is evaluated using 360 digitized mammograms from the DDSM database, and the results show an area under the ROC curve Az of 0.98 ± 0.03. The polynomial classifier proved to perform better than the other classification algorithms.
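A compact illustration of the polynomial-classifier idea (expand the feature vector polynomially, then fit a single linear model), assuming scikit-learn; the wavelet feature extraction is omitted and the inputs are synthetic.

```python
# Polynomial expansion of extracted features followed by one linear model.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(360, 12))                               # stand-in for multiresolution features
y = (X[:, 0] * X[:, 1] + X[:, 2] ** 2 > 0.5).astype(int)     # "normal" vs "abnormal"

poly_clf = make_pipeline(PolynomialFeatures(degree=2), LogisticRegression(max_iter=2000))
print("CV accuracy:", cross_val_score(poly_clf, X, y, cv=5).mean().round(3))
```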

8.
Identifying discriminative features can effectively improve the performance of aerial scene classification. Deep convolutional neural networks (DCNNs) have been widely used in aerial scene classification because of their ability to learn discriminative features. DCNN features can be made more discriminative by optimizing the training loss function and using transfer learning. To enhance the discriminative power of DCNN features, the loss functions of the pretraining models are improved by combining a softmax loss with a centre loss. To further improve performance, in this article we propose hybrid DCNN features for aerial scene classification. First, we use DCNN models with joint loss functions and transfer learning from pretrained deep DCNN models. Second, the dense DCNN features are extracted, and the discriminative hybrid features are created using a linear connection. Finally, an ensemble extreme learning machine (EELM) classifier is adopted for classification because of its general superiority and low computational cost. Experimental results on three public benchmark data sets demonstrate that the hybrid features obtained with the proposed approach and classified by the EELM classifier yield remarkable performance.

9.
10.
11.
Objective: Because of the "semantic gap" between low-level features and high-level semantics in image retrieval, automatic image annotation has become a key problem. To narrow this gap, we propose an automatic image annotation method that mixes generative and discriminative models. Method: In the generative learning stage, images are modelled with a continuous probabilistic latent semantic analysis (pLSA) model, which yields the model parameters and a topic distribution for each image. Taking this topic distribution as the intermediate representation vector of each image, automatic annotation is reformulated as a multi-label classification problem. In the discriminative learning stage, ensembles of classifier chains are constructed to learn from the intermediate representation vectors; because the chains integrate contextual information among the annotation keywords as they are built, higher annotation accuracy and better retrieval results can be obtained. Results: Experiments on two benchmark data sets show that the method achieves average precision and average recall of 0.28 and 0.32 on Corel5k and of 0.29 and 0.18 on IAPR-TC12, outperforming most state-of-the-art automatic image annotation methods. In terms of precision-recall curves, the method is also superior to several representative annotation methods. Conclusion: We propose an automatic image annotation method based on a hybrid learning strategy that integrates the respective strengths of generative and discriminative models and shows good effectiveness and robustness in semantic image retrieval. With appropriate modifications, the method can also play a useful role in cross-media retrieval and data mining, beyond image retrieval and recognition.
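A minimal sketch of the discriminative stage described above, assuming scikit-learn: a classifier chain over multi-label data, with synthetic vectors standing in for the pLSA topic distributions.

```python
# Multi-label annotation with a classifier chain, capturing keyword dependencies.
from sklearn.multioutput import ClassifierChain
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_multilabel_classification

# X plays the role of per-image topic distributions, Y the keyword indicators
X, Y = make_multilabel_classification(n_samples=300, n_features=20, n_classes=5, random_state=0)
chain = ClassifierChain(LogisticRegression(max_iter=1000), order="random", random_state=0).fit(X, Y)
print("predicted keywords for one image:", chain.predict(X[:1])[0])
```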

12.
This paper presents an effective machine learning-based depth selection algorithm for the CTU (Coding Tree Unit) in HEVC (High Efficiency Video Coding). Existing machine learning methods are limited in their ability to handle the initial depth decision of the CU (Coding Unit) and to select a proper set of input features for the depth selection model. In this paper, we first propose a new classification approach for predicting the initial division depth. In particular, we study the correlation between texture complexity, QPs (quantization parameters) and the depth decision of the CUs to forecast the initial partition depth of the current CUs. Second, we determine the input features of the classifier by analysing the correlation between the depth decision of the CUs, picture distortion and bit-rate. Using these relationships, we also study a decision method for the final partition depth of the current CUs, with bit-rate and picture distortion as inputs. Finally, we formulate the depth division of the CUs as a binary classification problem and use a nearest-neighbour classifier for the classification. The proposed method can significantly improve the efficiency of inter-frame coding by avoiding the cost of traversing all division depths. Experiments show that the method reduces encoding time by 34.56% compared to HM-16.9 while keeping the partition depth of the CUs correct.
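A toy sketch of casting the CU split decision as binary classification with a nearest-neighbour classifier, assuming scikit-learn; the features (texture variance, QP, bits) and their distributions are illustrative assumptions, not the paper's.

```python
# CU split / no-split decision as a binary nearest-neighbour classification.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
# columns: texture complexity (variance), QP, bits of the co-located CU
X_train = rng.normal(loc=[50, 32, 800], scale=[20, 5, 300], size=(1000, 3))
y_train = (X_train[:, 0] / 100 + (40 - X_train[:, 1]) / 20 > 0.6).astype(int)  # 1 = split further

knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
new_cu = np.array([[72.0, 27, 950]])
print("split decision:", knn.predict(new_cu)[0])
```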

13.
Boosting algorithms, such as AdaBoost, have been widely used in a variety of applications in multimedia and computer vision. Relevance feedback-based image retrieval has been formulated as a classification problem with a small number of training samples, and several machine learning techniques have recently been applied to it. In this paper, we propose a novel paired-feature AdaBoost learning system for relevance feedback-based image retrieval. To facilitate density estimation in our feature learning method, we propose an ID3-like balanced tree quantization method that preserves the most discriminative information. Using paired feature combination, we map all training samples obtained in the relevance feedback process onto paired feature spaces and employ the AdaBoost algorithm to select a few feature pairs with the best discrimination capabilities in the corresponding paired feature spaces. In the AdaBoost algorithm, we replace the traditional binary weak classifiers with Bayesian classifiers to enhance their classification power, thus producing a stronger classifier. Experimental results on content-based image retrieval (CBIR) show superior performance of the proposed system compared to some previous methods.
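A simplified, hand-rolled sketch of paired-feature boosting with a Bayesian weak learner (GaussianNB here), assuming scikit-learn and numpy; the data, number of rounds and the weak learner are stand-ins for the paper's quantized Bayesian classifiers.

```python
# Each boosting round trains a Bayesian weak learner on every feature pair and keeps the best.
import numpy as np
from itertools import combinations
from sklearn.naive_bayes import GaussianNB
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=400, n_features=8, random_state=0)
y_pm = 2 * y - 1                                   # labels in {-1, +1}
w = np.full(len(y), 1 / len(y))
ensemble = []

for _ in range(5):                                 # boosting rounds
    best = None
    for pair in combinations(range(X.shape[1]), 2):
        clf = GaussianNB().fit(X[:, pair], y, sample_weight=w)
        pred = 2 * clf.predict(X[:, pair]) - 1
        err = np.sum(w * (pred != y_pm))
        if best is None or err < best[0]:
            best = (err, pair, clf, pred)
    err, pair, clf, pred = best
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))   # standard AdaBoost weight
    w = w * np.exp(-alpha * y_pm * pred)
    w /= w.sum()
    ensemble.append((alpha, pair, clf))

def predict(x):
    score = sum(a * (2 * c.predict(x[list(p)].reshape(1, -1))[0] - 1) for a, p, c in ensemble)
    return int(score > 0)

print("train accuracy:", np.mean([predict(x) == t for x, t in zip(X, y)]))
```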

14.
Ensemble classification combines several weak classifiers according to some rule and can effectively improve classification performance. During combination, the individual weak classifiers usually contribute to the final decision to different degrees. The extreme learning machine (ELM) is a recently proposed algorithm for training single-hidden-layer feedforward neural networks. Taking ELMs as base classifiers, this paper proposes a weighted ELM ensemble method based on differential evolution, in which the weight of each base classifier in the ensemble is optimised by the differential evolution algorithm. Experimental results show that, compared with simple-voting and AdaBoost-based ensembles, the proposed method achieves higher classification accuracy and better generalisation.
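A minimal sketch of the idea, assuming numpy, scipy and scikit-learn: tiny hand-written ELMs as base classifiers, with the ensemble weights tuned by scipy's differential evolution on a validation split; the data, network sizes and thresholding are illustrative assumptions.

```python
# Weighted ensemble of minimal ELMs; ensemble weights optimised by differential evolution.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

class TinyELM:
    """Single-hidden-layer ELM: random input weights, analytic output weights."""
    def __init__(self, n_hidden, seed):
        self.rng = np.random.default_rng(seed)
        self.n_hidden = n_hidden
    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        self.beta = np.linalg.pinv(H) @ y            # least-squares output weights
        return self
    def decision(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

elms = [TinyELM(n_hidden=40, seed=s).fit(X_tr, y_tr) for s in range(5)]
scores = np.array([m.decision(X_val) for m in elms])   # shape (5, n_val)

def val_error(w):
    combined = w @ scores / (w.sum() + 1e-12)          # weighted sum of base outputs
    return np.mean((combined > 0.5).astype(int) != y_val)

res = differential_evolution(val_error, bounds=[(0, 1)] * len(elms), seed=0, maxiter=50)
print("tuned weights:", np.round(res.x, 2), "validation error:", round(res.fun, 3))
```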

15.
Term frequency-inverse document frequency (TF-IDF) is one of the most popular feature (also called term or word) weighting methods used to describe documents in the vector space model and in applications related to text mining and information retrieval; it effectively reflects the importance of a term in a collection of documents in which all documents play the same role. However, TF-IDF does not account for differences in the IDF weighting of a term when documents play different roles in the collection, such as the positive and negative training sets in text classification. In view of this, this paper presents an improved TF-IDF feature weighting approach that reflects the importance of a term in the positive and negative training examples separately. We also build a weighted voting classifier by iteratively applying the support vector machine algorithm, and implement one-class support vector machine and Positive Example Based Learning methods for comparison. During classification, an improved 1-DNF algorithm, called 1-DNFC, is adopted to identify more reliable negative documents from the unlabeled example set. The experimental results show that the classifier based on term frequency-inverse positive-negative document frequency outperforms the TF-IDF-based one, and that the weighted voting classifier also exceeds both the one-class support vector machine-based and the Positive Example Based Learning-based classifiers. Copyright © 2013 John Wiley & Sons, Ltd.
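A heavily simplified sketch of weighting a term by its frequency in positive versus negative training documents; the formula below is one plausible reading of the "inverse positive-negative document frequency" idea, not the paper's exact definition, and the toy documents are made up.

```python
# Term weight that grows when a term is frequent in one class and rare in the other.
import numpy as np

pos_docs = [["good", "fast", "reliable"], ["good", "cheap"]]
neg_docs = [["slow", "bad"], ["bad", "expensive", "slow"]]

def doc_freq(term, docs):
    return sum(term in d for d in docs)

def weight(term, tf):
    dfp = doc_freq(term, pos_docs) + 1          # +1 smoothing
    dfn = doc_freq(term, neg_docs) + 1
    # hypothetical weighting: class-skewed terms get a large weight
    return tf * abs(np.log(dfp / dfn))

for term in ["good", "bad", "cheap"]:
    print(term, round(weight(term, tf=1), 3))
```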

16.
In this paper, we propose a novel ECG arrhythmia classification method using power spectral-based features and a support vector machine (SVM) classifier. The method extracts spectral features and three timing-interval features from the electrocardiogram; non-parametric power spectral density (PSD) estimation methods are used to extract the spectral features. The proposed approach optimizes the relevant parameters of the SVM classifier, namely the Gaussian radial basis function (GRBF) kernel parameter σ and the penalty parameter C, with particle swarm optimization (PSO). ECG records from the MIT-BIH arrhythmia database are selected as test data. The proposed power spectral-based hybrid particle swarm optimization-support vector machine (SVMPSO) classification method offers significantly improved performance over an SVM with fixed, manually chosen parameters.
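A rough sketch combining Welch PSD features with a tiny hand-written particle swarm over (C, σ), assuming numpy, scipy and scikit-learn; the synthetic "ECG" signals, swarm settings and search ranges are all illustrative assumptions.

```python
# PSD features + a minimal particle swarm over the SVM hyper-parameters (log10 C, log10 gamma).
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
def synthetic_beat(freq):
    t = np.linspace(0, 1, 360)
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.normal(size=t.size)

freqs = rng.uniform(1, 8, 200)
signals = [synthetic_beat(f) for f in freqs]
X = np.array([welch(s, fs=360, nperseg=128)[1] for s in signals])   # PSD features
y = (freqs > 4).astype(int)                                          # two synthetic "classes"

def fitness(p):
    C, gamma = 10 ** p[0], 10 ** p[1]
    return -cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

n_particles, n_iter = 8, 10
pos = rng.uniform([-1, -4], [3, 0], size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()]
for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()]
print("best C, gamma:", 10 ** gbest[0], 10 ** gbest[1],
      "CV accuracy:", round(-pbest_val.min(), 3))
```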

17.
Improving the accuracy of machine learning algorithms is vital in designing high-performance computer-aided diagnosis (CADx) systems. Research has shown that the performance of a base classifier can be enhanced by ensemble classification strategies. In this study, we construct rotation forest (RF) ensemble classifiers from 30 machine learning algorithms and evaluate their classification performance on Parkinson's, diabetes and heart disease data sets from the literature. First, the feature dimensionality of the three data sets is reduced using the correlation-based feature selection (CFS) algorithm. Second, the classification performance of the 30 machine learning algorithms is computed for the three data sets. Third, 30 classifier ensembles are constructed with the RF algorithm to assess the performance of the respective classifiers on the same disease data. All experiments are carried out with a leave-one-out validation strategy, and the 60 algorithms are evaluated using three metrics: classification accuracy (ACC), kappa error (KE) and area under the receiver operating characteristic (ROC) curve (AUC). The base classifiers achieved average accuracies of 72.15%, 77.52% and 84.43% for the diabetes, heart and Parkinson's data sets, respectively, whereas the RF classifier ensembles produced average accuracies of 74.47%, 80.49% and 87.13% for the respective diseases. RF, a recently proposed classifier ensemble algorithm, can thus be used to improve the accuracy of miscellaneous machine learning algorithms when designing advanced CADx systems.

18.
In this paper, we present a fast-learning, fully complex-valued extreme learning machine classifier, referred to as the Circular Complex-valued Extreme Learning Machine (CC-ELM), for handling real-valued classification problems. CC-ELM is a single-hidden-layer network with non-linear input and hidden layers and a linear output layer. A circular transformation with a translational/rotational bias term, which performs a one-to-one mapping of real-valued features to the complex plane, is used as the activation function for the input neurons. The neurons in the hidden layer employ a fully complex-valued Gaussian-like ('sech') activation function. The input parameters of CC-ELM are chosen randomly and the output weights are computed analytically. This paper also presents an analytical proof that the decision boundaries of a single complex-valued neuron at the hidden and output layers of CC-ELM consist of two hyper-surfaces that intersect orthogonally. These orthogonal boundaries and the input circular transformation help CC-ELM perform real-valued classification tasks efficiently. The performance of CC-ELM is evaluated on a set of benchmark real-valued classification problems from the University of California, Irvine machine learning repository, and compared with existing methods on two practical problems: acoustic emission signal classification and mammogram classification. The results show that CC-ELM performs better than existing real-valued and complex-valued classifiers, especially when the data sets are highly unbalanced.

19.
As of this writing, there exists a large variety of recently developed pattern classification methods from the domains of machine learning and artificial intelligence. In this paper, we study the performance of a recently developed and improved classifier that integrates fuzzy set theory into a neural network (NEFCLASS). The performance of NEFCLASS is compared with a well-known classification technique from machine learning (C4.5). Both C4.5 and NEFCLASS are evaluated on a collection of benchmark data sets. Further, to boost the performance of NEFCLASS, we investigate the advantage of preprocessing the input data by means of an exploratory factor analysis. We compare the algorithms before and after applying the exploratory factor analysis on the leading performance indicators, namely the accuracy of the created classifier and the size of the associated rule base. © 2000 John Wiley & Sons, Inc.

20.
Mammographic density is known to be an important indicator of breast cancer risk. Classification of mammographic density based on statistical features has been investigated previously; however, in those approaches the entire breast, including the pectoral muscle, was processed to extract features. In this approach, the region of interest is restricted to the breast tissue alone, eliminating the artifacts, background and pectoral muscle. The mammogram images used in this study are from the Mini-MIAS digital database. We describe the development of an automatic breast tissue classification methodology comprising a number of distinct steps: (1) preprocessing, (2) feature extraction, and (3) classification. Gray-level thresholding and connected component labeling are used to eliminate the artifacts and pectoral muscle from the region of interest. Statistical features that capture the important texture properties of the breast tissue are extracted from this region and fed to a support vector machine (SVM) classifier, which assigns each mammogram to one of three classes: fatty, glandular or dense tissue. The classification accuracy obtained is 95.44%.
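A small sketch of the preprocessing step (grey-level thresholding plus connected component labelling to keep the largest region), assuming numpy and scipy; the image is a synthetic toy rather than a Mini-MIAS mammogram.

```python
# Threshold, label connected components, keep the largest region, extract simple statistics.
import numpy as np
from scipy import ndimage

img = np.zeros((64, 64))
img[10:50, 5:35] = 0.8          # "breast tissue"
img[2:6, 55:60] = 0.9           # small bright artifact / label tag

mask = img > 0.5                               # grey-level threshold
labels, n = ndimage.label(mask)                # connected components
sizes = ndimage.sum(mask, labels, range(1, n + 1))
breast = labels == (np.argmax(sizes) + 1)      # keep the largest component only

roi = img[breast]
features = [roi.mean(), roi.std(), roi.size]   # simple statistical texture features
print("components:", n, "features:", np.round(features, 3))
```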


