Similar Documents
20 similar documents retrieved
1.
A Manifold-Learning-Based Multi-Instance Regression Algorithm   Cited by: 2 (self-citations: 0, other citations: 2)
De-Chuan Zhan, Zhi-Hua Zhou. Chinese Journal of Computers, 2006, 29(11): 1948-1955
Multi-instance learning is a relatively new machine learning framework. Earlier research focused mainly on multi-instance classification; recently, multi-instance regression has drawn the attention of the international machine learning community. Manifold learning aims to recover the intrinsic structure of nonlinearly distributed data and can be used for nonlinear dimensionality reduction. Building on manifold learning techniques, this paper proposes the ManiMIL algorithm for multi-instance regression. The algorithm first reduces the dimensionality of the instances in the training bags, then exploits the collapsing behavior of the reduced representation to make predictions for multi-instance bags. Experiments show that ManiMIL performs better than existing multi-instance algorithms such as Citation-kNN.

2.
Learning from label proportions (LLP) is a machine learning setting in which instances are grouped into bags, and only the instances and the label proportion of each bag are provided, not the individual instance labels. For LLP problems spanning multiple related tasks, this paper proposes AT-LLP, a transfer-learning-based ensemble model for label-proportion learning. The model links related tasks through shared parameters, transferring knowledge learned on the source tasks to the target task and thereby improving learning efficiency on the target task. The algorithm also incorporates ensemble learning: over multiple training rounds it keeps adjusting the weights of the training set, boosting weak classifiers into a strong one. Experiments show that the proposed AT-LLP model outperforms existing LLP methods.

3.
An Image Retrieval Method Based on K-Means Clustering and Multi-Instance Learning   Cited by: 1 (self-citations: 0, other citations: 1)
Chao Wen, Guohua Geng, Zhan Li. Journal of Computer Applications, 2011, 31(6): 1546-1548
For object-based image retrieval, this paper uses K-means clustering to build KP-MIL, a new image retrieval algorithm within the multi-instance learning (MIL) framework. The algorithm clusters the instances of the positive and negative bags to obtain candidate positive-instance prototypes and data describing bag structure, measures the similarity of each with radial basis function kernels, and finally balances their influence on the kernel value with an alpha factor. Experiments on the standard SIVAL object image retrieval benchmark show that the method is effective and outperforms comparable approaches.

4.
Multi-instance multi-label learning (MIML) is a machine learning framework proposed for problems with ambiguous semantics: an object is represented by a set of instances and associated with a set of class labels. E-MIMLSVM+ is a classic MIML classification algorithm based on the degeneration strategy, but it cannot exploit unlabeled samples, which hurts its generalization ability. This work therefore improves it with a semi-supervised support vector machine. The improved algorithm learns from a small number of labeled samples together with a large number of unlabeled ones, which helps uncover the hidden structure and the true distribution of the sample set. Comparative experiments show that the improved algorithm effectively raises the classifier's generalization performance.

5.
An Image Classification Algorithm Integrating Fuzzy LSA and MIL   Cited by: 1 (self-citations: 0, other citations: 1)
For natural image classification, this paper proposes a semi-supervised multi-instance learning (MIL) algorithm that combines fuzzy latent semantic analysis (LSA) with a transductive support vector machine (TSVM). Each image is treated as a multi-instance bag, with the low-level visual features of its segmented regions as the instances. To reduce the MIL problem to a single-instance one, the algorithm first clusters all instances in the training bags with K-means to build a "visual vocabulary"; it then defines fuzzy membership functions from the distances between "visual words" and instances, builds a fuzzy word-document matrix, and applies LSA to obtain a fuzzy latent semantic model of the bags (images), through which each bag is converted into a single sample. A semi-supervised TSVM is then trained as the classifier so that unlabeled images can raise classification accuracy. Comparative experiments on the Corel image collection show that fuzzy LSA improves classification accuracy by 5.6% over conventional LSA and outperforms the other methods compared.
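The fuzzy word-document step in the abstract above can be sketched as follows. The abstract only states that membership is defined from word-instance distances, so the Gaussian membership function and the `sigma` parameter here are assumptions for illustration:

```python
import numpy as np

def fuzzy_word_document_row(bag, centers, sigma=1.0):
    """Fuzzy 'word-document' row for one bag (image).

    bag:     (n_instances, d) region features of one image
    centers: (k, d) 'visual words' from K-means over all training instances
    The Gaussian membership function and sigma are assumptions; the paper
    only says membership is derived from word-instance distances.
    """
    # squared distances between each instance and each visual word
    d2 = ((bag[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    member = np.exp(-d2 / (2.0 * sigma ** 2))    # fuzzy memberships
    member /= member.sum(axis=1, keepdims=True)  # each instance's memberships sum to 1
    return member.sum(axis=0)                    # accumulate instances into one row
```

Stacking one such row per image yields the fuzzy word-document matrix that LSA is then applied to.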

6.
Multi-instance multi-label learning is a relatively new machine learning framework in which each sample is a bag consisting of multiple instances and annotated with multiple labels. Previous work usually assumes that the instances in a bag are independent and identically distributed, an assumption that is hard to guarantee in practice. To exploit the correlations among the instances in a bag, this paper proposes a multi-instance multi-label classification algorithm that drops the i.i.d. assumption. The algorithm first represents each bag by a correlation matrix capturing the relations among its instances; it then builds kernel functions over correlation matrices at different scales; finally, since different labels call for different kernels, it introduces multi-kernel learning to construct and train a multi-kernel SVM classifier for each label. Experiments on image and text datasets show that the algorithm markedly improves multi-label classification accuracy.

7.
Bo Deng, Yingjun Lu, Ruzhi Wang. Computer Science, 2017, 44(3): 264-267, 287
In multi-instance learning (MIL), a bag is a set of instances, and training samples carry labels only at the bag level, not for individual instances. This paper proposes an MIL method based on instance label intensity (ILI-MIL), which allows an instance's label intensity to be any real number. Given the computational cost of gradient-based neural network training and the complexity of the ILI-MIL objective function, the method is implemented with a higher-order neural network trained by chemical reaction optimization, which offers strong nonlinear expressiveness and high computational efficiency. Experimental results show that the algorithm classifies more effectively than existing algorithms and applies to a wider range of problems.

8.
In the multi-instance learning framework, the training set consists of bags, each holding multiple instances expressed as attribute-value pairs, and the learner processes the instances within each bag. The traditional MIL-based local outlier detection algorithm applies this framework to the dataset and reduces the multi-instance problem to a single-instance one; however, during bag conversion it weights instances only by the share of their feature lengths, neither examining the instances that affect the result most, analyzing why, nor adjusting their weights dynamically, which degrades outlier detection. To fit the internal distribution of the data, this paper proposes FWMIL-LOF, an improved MIL-based local outlier detection algorithm. Within the MIL (Multi-Instance Learning) framework, it introduces a weight function describing data importance into the bag-conversion step and adjusts it via a defined penalty strategy, thereby fixing the in-bag weights of instances with different feature attributes. Simulations on a real enterprise real-time acquisition and monitoring system, together with comparisons against classic local outlier detection algorithms, confirm the improved detection performance.

9.
In multi-instance learning, bag-space feature representations tend to miss local information within a bag, while instance-space representations tend to miss a bag's overall structure. To address this, a multi-instance learning method that fuses bag-space and instance-space features is proposed. First, a graph model is built to express the relations among the instances in a bag, and the graph is converted into an affinity matrix to form the bag-space features. Next, the instances in positive bags most strongly correlated with the positive class, and the instances in negative bags most weakly correlated with it, are selected, and their features serve as the instance-space features of the positive and negative bags respectively. Finally, a Gaussian RBF kernel maps the bag-space and instance-space features into a common feature space, and a weight-based fusion method combines them. Experiments on standard multi-instance benchmarks and on public image and text datasets show that the method improves classification.

10.
Weakly supervised anomaly detection is a highly challenging task: given only video-level normal/anomaly labels, it aims to localize the temporal intervals in which anomalies occur. This work uses a multi-instance ranking network: each video, split into a fixed number of segments, is treated as a bag whose segments are the instances, and multi-instance learning trains an instance classifier from bag labels alone. Because video carries rich temporal information, the focus is on the temporal relations in online surveillance-video detection. From both global and local viewpoints, a self-attention module learns a weight for each instance, and the video-level anomaly score is obtained as the linear combination of the self-attention values and the instance anomaly scores; the self-attention module is trained with a mean-squared-error loss. In addition, two temporal models are introduced, LSTM and temporal convolution, where the latter comprises single-rate temporal dilated convolution and a multi-scale pyramid of dilated convolutions with different dilation rates. Experiments show that multi-scale temporal convolution outperforms the single-rate variant, and that temporal convolution combined with an intra-bag/inter-bag complementary loss improves AUC on the UCF-Crime dataset by 3.2% over a baseline without the temporal module.
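The attention-weighted scoring described in the abstract above can be sketched as a few lines of numpy. The abstract states only a linear weighting of attention values and instance scores, so the softmax normalization used here is an assumption:

```python
import numpy as np

def video_anomaly_score(segment_scores, attn_logits):
    """Video-level anomaly score from per-segment (instance) scores.

    segment_scores: per-segment anomaly scores in [0, 1]
    attn_logits:    raw self-attention values, one per segment
    Softmax normalization of the attention values is an assumption;
    the abstract only specifies a linear weighting.
    """
    w = np.exp(attn_logits - attn_logits.max())  # numerically stable softmax
    w /= w.sum()                                 # attention weights sum to 1
    return float(np.dot(w, segment_scores))      # weighted combination
```

With equal attention logits this reduces to the mean segment score; larger logits pull the video score toward the corresponding segments.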

11.
Many multi-instance algorithms make assumptions about the instances in positive bags. To address this, a multi-instance ensemble algorithm combined with fuzzy clustering (ISFC) is proposed. Drawing on fuzzy clustering and the characteristics of negative bags in multi-instance learning, the notion of a "positive score" is introduced to measure how likely an instance's label is to be positive, reducing the label ambiguity of instances; since misclassifying negative instances carries a higher cost in multi-instance learning, a strategy for selecting each bag's representative instances is designed; the selected representative in…

12.
In multi-instance learning, the training set comprises labeled bags that are composed of unlabeled instances, and the task is to predict the labels of unseen bags. This paper studies multi-instance learning from the view of supervised learning. First, by analyzing some representative learning algorithms, this paper shows that multi-instance learners can be derived from supervised learners by shifting their focuses from the discrimination on the instances to the discrimination on the bags. Second, considering that ensemble learning paradigms can effectively enhance supervised learners, this paper proposes to build multi-instance ensembles to solve multi-instance problems. Experiments on a real-world benchmark test show that ensemble learning paradigms can significantly enhance multi-instance learners.

13.
In multi-instance learning, the training examples are bags composed of instances without labels, and the task is to predict the labels of unseen bags through analyzing the training bags with known labels. A bag is positive if it contains at least one positive instance, while it is negative if it contains no positive instance. In this paper, a neural network based multi-instance learning algorithm named RBF-MIP is presented, which is derived from the popular radial basis function (RBF) methods. Briefly, the first layer of an RBF-MIP neural network is composed of clusters of bags formed by merging training bags agglomeratively, where Hausdorff metric is utilized to measure distances between bags and between clusters. Weights of second layer of the RBF-MIP neural network are optimized by minimizing a sum-of-squares error function and worked out through singular value decomposition (SVD). Experiments on real-world multi-instance benchmark data, artificial multi-instance benchmark data and natural scene image database retrieval are carried out. The experimental results show that RBF-MIP is among the several best learning algorithms on multi-instance problems.
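The Hausdorff metric between bags, used in the abstract above for agglomerative merging, is easy to state concretely. This is a minimal sketch of the standard (max-min) Hausdorff distance; whether RBF-MIP uses this exact variant or a modification is not specified in the abstract:

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between two bags of instance vectors.

    A: (n, d) array, B: (m, d) array.
    H(A, B) = max( max_a min_b ||a - b||, max_b min_a ||a - b|| )
    """
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
    return max(d.min(axis=1).max(),   # farthest A-instance from its nearest B-instance
               d.min(axis=0).max())   # farthest B-instance from its nearest A-instance
```

The same metric lifts to clusters of bags (e.g., via nearest pairs), which is what drives the agglomerative first layer.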

14.
In multiple-instance learning (MIL), an individual example is called an instance and a bag contains a single or multiple instances. The class labels available in the training set are associated with bags rather than instances. A bag is labeled positive if at least one of its instances is positive; otherwise, the bag is labeled negative. Since a positive bag may contain some negative instances in addition to one or more positive instances, the true labels for the instances in a positive bag may or may not be the same as the corresponding bag label and, consequently, the instance labels are inherently ambiguous. In this paper, we propose a very efficient and robust MIL method, called Multiple-Instance Learning via Disambiguation (MILD), for general MIL problems. First, we propose a novel disambiguation method to identify the true positive instances in the positive bags. Second, we propose two feature representation schemes, one for instance-level classification and the other for bag-level classification, to convert the MIL problem into a standard single-instance learning (SIL) problem that can be solved by well-known SIL algorithms, such as support vector machine. Third, an inductive semi-supervised learning method is proposed for MIL. We evaluate our methods extensively on several challenging MIL applications to demonstrate their promising efficiency, robustness, and accuracy.

15.
Introducing a mechanism for exploiting unlabeled instances into multi-instance learning can cut training costs and improve the learner's generalization ability. Most current semi-supervised multi-instance algorithms label every instance in a bag, turning multi-instance learning into a single-instance semi-supervised problem. Since a bag's label is determined by its instances and its structure, this paper proposes a multi-instance algorithm that performs semi-supervised learning directly at the bag level. By defining a multi-instance kernel, a bag-level graph Laplacian matrix is computed from all bags (labeled and unlabeled) and used as the smoothness penalty in the optimization objective. Finding the optimal solution in the RKHS spanned by the multi-instance kernel reduces to determining a multi-instance kernel function modified by the unlabeled data, which can be plugged directly into classical kernel methods. The algorithm is tested on experimental datasets and compared with existing algorithms. The results show that the semi-supervised multi-instance kernel reaches the same accuracy as supervised algorithms with much less training data, and that for a fixed labeled set, unlabeled data effectively improve the learner's generalization ability.
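The graph-Laplacian smoothness penalty mentioned in the abstract above can be sketched directly. The unnormalized Laplacian L = D - W is an assumption here; the abstract does not state which Laplacian variant is used, only that it is built from a multi-instance kernel over all bags:

```python
import numpy as np

def laplacian_penalty(W, f):
    """Bag-level smoothness penalty f^T L f.

    W: (n, n) affinity matrix from a multi-instance kernel over all
       labeled and unlabeled bags
    f: (n,) candidate label/score vector over the bags
    The unnormalized Laplacian L = D - W is an assumption for illustration.
    """
    L = np.diag(W.sum(axis=1)) - W   # degree matrix minus affinities
    return float(f @ L @ f)          # large when similar bags get different scores
```

Bags that the kernel deems similar contribute heavily to the penalty when assigned different scores, which is exactly the smoothness the semi-supervised objective enforces.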

16.
In active learning, the learner must measure the importance of unlabeled samples in a large dataset and iteratively select the best one. This sample selection process can be treated as a decision making problem, which evaluates, ranks, and chooses from a finite set of alternatives. Many decision making problems apply multiple criteria, since this performs better than using a single criterion. Motivated by these facts, an active learning model based on multi-criteria decision making (MCMD) is proposed in this paper. After comparing any two unlabeled samples, a preference preorder is determined for each criterion. The dominated index and the dominating index are then defined and computed to evaluate the informativeness of unlabeled samples, providing an effective metric for sample selection. On the other hand, in the multiple-instance learning (MIL) setting, instances/samples are grouped into bags; a bag is negative only if all of its instances are negative, and positive otherwise. Multiple-instance active learning (MIAL) aims to select and label the most informative bags from numerous unlabeled ones, and to learn an MIL classifier that accurately predicts unseen bags while requesting as few labels as possible. It adopts an MIL algorithm as the base classifier and follows an active learning procedure. To balance learning efficiency and generalization capability, the proposed active learning model is restricted to a specific algorithm in the MIL setting. Experimental results demonstrate the effectiveness of the proposed method.

17.
Multi-instance clustering with applications to multi-instance prediction   Cited by: 2 (self-citations: 0, other citations: 2)
In the setting of multi-instance learning, each object is represented by a bag composed of multiple instances instead of by a single instance in a traditional learning setting. Previous works in this area only concern multi-instance prediction problems where each bag is associated with a binary (classification) or real-valued (regression) label. However, unsupervised multi-instance learning where bags are without labels has not been studied. In this paper, the problem of unsupervised multi-instance learning is addressed where a multi-instance clustering algorithm named Bamic is proposed. Briefly, by regarding bags as atomic data items and using some form of distance metric to measure distances between bags, Bamic adapts the popular k-Medoids algorithm to partition the unlabeled training bags into k disjoint groups of bags. Furthermore, based on the clustering results, a novel multi-instance prediction algorithm named Bartmip is developed. Firstly, each bag is re-represented by a k-dimensional feature vector, where the value of the i-th feature is set to be the distance between the bag and the medoid of the i-th group. After that, bags are transformed into feature vectors so that common supervised learners are used to learn from the transformed feature vectors each associated with the original bag's label. Extensive experiments show that Bamic could effectively discover the underlying structure of the data set and Bartmip works quite well on various kinds of multi-instance prediction problems.
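The Bartmip re-representation described in the abstract above is a short transformation. The abstract leaves the bag-level distance open ("some form of distance metric"), so the average-minimal-distance used here is one assumed choice:

```python
import numpy as np

def avg_min_dist(A, B):
    """A simple bag-level distance (symmetrized average minimal instance
    distance); the choice of metric is an assumption, as Bamic/Bartmip
    admit several bag distances."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def bartmip_features(bag, medoids):
    """Re-represent a bag as its distances to the k group medoids
    produced by the Bamic clustering step."""
    return np.array([avg_min_dist(bag, m) for m in medoids])
```

The resulting k-dimensional vectors, paired with the original bag labels, can be fed to any common supervised learner, as the abstract describes.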

18.
Rui Gan, Jian Yin. Computer Science, 2012, 39(7): 144-147
In multi-instance learning, each labeled sample in the training set is a bag composed of multiple instances, and the ultimate goal is to train a classifier on this data that can predict the labels of bags not yet labeled. Previous research on multi-instance learning has either modified existing single-instance algorithms to fit the multi-instance setting, or proposed new methods that mine the relation between instances and bags and use the results to solve the problem. Starting instead from changing the representation of bags, this paper proposes the Concept Evaluation algorithm. It first uses a clustering algorithm to group all instances into d clusters, each of which can be regarded as a concept contained in the instances; it then uses TF-IDF (Term Frequency-Inverse Document Frequency), originally developed for text retrieval, to evaluate the importance of each concept in each bag; finally, each bag is represented as a d-dimensional vector, the concept evaluation vector, whose i-th component measures the importance of the concept represented by the i-th cluster in that bag. After this re-representation the dataset is no longer "multi-instance", so existing single-instance algorithms can solve the multi-instance learning problem efficiently.
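The concept-evaluation step in the abstract above amounts to TF-IDF over cluster assignments. The classic tf * log(N/df) weighting used here is an assumption about the exact TF-IDF variant:

```python
import math
from collections import Counter

def concept_vectors(bag_concepts, d):
    """Concept-evaluation vectors for a set of bags.

    bag_concepts: list of bags, each bag given as the list of cluster
                  (concept) ids of its instances
    d:            number of clusters/concepts
    Component i of a bag's vector is the TF-IDF weight of concept i in
    that bag; the tf * log(N/df) form is an assumed variant.
    """
    N = len(bag_concepts)
    df = Counter()                       # in how many bags each concept occurs
    for bag in bag_concepts:
        for c in set(bag):
            df[c] += 1
    vecs = []
    for bag in bag_concepts:
        tf = Counter(bag)                # concept frequencies within the bag
        vecs.append([(tf[i] / len(bag)) * math.log(N / df[i]) if df[i] else 0.0
                     for i in range(d)])
    return vecs
```

A concept occurring in every bag gets idf 0 and contributes nothing, so the vectors emphasize concepts that discriminate between bags.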

19.
Multi-Instance Learning Based Web Mining   Cited by: 7 (self-citations: 0, other citations: 7)
In multi-instance learning, the training set comprises labeled bags that are composed of unlabeled instances, and the task is to predict the labels of unseen bags. In this paper, a web mining problem, i.e. web index recommendation, is investigated from a multi-instance view. In detail, each web index page is regarded as a bag, while each of its linked pages is regarded as an instance. A user favoring an index page means that he or she is interested in at least one page linked by the index. Based on the browsing history of the user, recommendation could be provided for unseen index pages. An algorithm named Fretcit-kNN, which employs the Minimal Hausdorff distance between frequent term sets and utilizes both the references and citers of an unseen bag in determining its label, is proposed to solve the problem. Experiments show that on average the recommendation accuracy of Fretcit-kNN is 81.0% with 71.7% recall and 70.9% precision, which is significantly better than the best algorithm that does not consider the specific characteristics of multi-instance learning, whose performance is 76.3% accuracy with 63.4% recall and 66.1% precision.

20.
Multiple instance learning (MIL) is concerned with learning from sets (bags) of objects (instances), where the individual instance labels are ambiguous. In this setting, supervised learning cannot be applied directly. Often, specialized MIL methods learn by making additional assumptions about the relationship of the bag labels and instance labels. Such assumptions may fit a particular dataset, but do not generalize to the whole range of MIL problems. Other MIL methods shift the focus of assumptions from the labels to the overall (dis)similarity of bags, and therefore learn from bags directly. We propose to represent each bag by a vector of its dissimilarities to other bags in the training set, and treat these dissimilarities as a feature representation. We show several alternatives to define a dissimilarity between bags and discuss which definitions are more suitable for particular MIL problems. The experimental results show that the proposed approach is computationally inexpensive, yet very competitive with state-of-the-art algorithms on a wide range of MIL datasets.
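The dissimilarity representation in the abstract above can be sketched in a few lines. The paper discusses several alternative bag dissimilarities, so the mean-of-minimal-distances definition here is one assumed choice for illustration:

```python
import numpy as np

def mean_min_dissim(A, B):
    """One possible bag dissimilarity: the mean over A's instances of the
    distance to their nearest instance in B (note this choice is
    asymmetric; the paper compares several definitions)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1).mean()

def dissimilarity_features(bags, prototypes):
    """Represent every bag by its dissimilarities to the training
    (prototype) bags; each row is then an ordinary feature vector for
    any standard vector-space classifier."""
    return np.array([[mean_min_dissim(b, p) for p in prototypes]
                     for b in bags])
```

Once bags become rows of this matrix, supervised learning applies directly, which is the point of the dissimilarity-based approach.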
