Similar Documents
20 similar documents found; search took 62 ms
1.
The widespread adoption of deep learning has strongly advanced medical image analysis. However, most deep learning methods assume that the training and test sets are independent and identically distributed (i.i.d.), an assumption that is difficult to guarantee in clinical deployment, so models often suffer degraded performance and poor generalization across scenarios. Deep-learning-based domain adaptation is the mainstream approach to improving model transferability: its goal is to make a model trained on one dataset perform well on another dataset with few or no labels. Medical images pose particular challenges for domain adaptation, including the difficulty of acquiring and annotating samples, the special nature of the images, and differences across modalities. This survey first introduces the definition of domain adaptation and its main challenges, then categorizes and summarizes recent algorithms from a technical perspective and compares their strengths and weaknesses; it then details the medical image datasets commonly used for domain adaptation and the reported results of the corresponding algorithms; finally, it discusses future research directions for domain adaptation in medical image analysis in terms of development bottlenecks, technical approaches, and cross-disciplinary connections.

2.
A Survey of Deep Domain Adaptation: General and Complex Settings   Cited: 7 (3 self, 4 others)
The massive data generated in the information age has enabled machine learning to succeed in many fields. Most machine learning techniques rely on the assumption that the training and test sets are independent and identically distributed, yet this assumption is hard to satisfy in practice. Domain adaptation is a machine learning technique for settings where the training and test sets are not i.i.d. Domain adaptation in the general setting applies only when the source and target domains share the same feature space and label space; in reality this condition is also often violated. To broaden its applicability, domain adaptation in complex settings has become a research hotspot, with adaptation under inconsistent label spaces and under complex target domains emerging as new directions in recent years. With the rise of deep learning, deep domain adaptation has become the mainstream approach in the field. This paper surveys progress on deep domain adaptation in both the general and complex settings, summarizes its shortcomings, and predicts future trends. It first introduces concepts related to transfer learning, then reviews domain adaptation in the general and complex settings, applications of domain adaptation, and experimental results on the performance of domain adaptation methods, and finally looks ahead to future developments in the field and concludes.

3.
In recent years deep learning has achieved remarkable results in image classification, but it typically requires large amounts of manually labelled data, making model training costly. Few-shot approaches such as domain adaptation have therefore become a research focus. Even when exploiting source-domain knowledge, domain adaptation can only partly reduce the dependence on labelled target-domain data, so active learning can be introduced to evaluate and select valuable samples, further reducing labelling cost. This paper brings a typical sample-value estimation model into domain adaptation and, combined with feature transfer, proposes a dual active domain adaptation algorithm, D_AcT (Dual active domain adaptation). The algorithm measures the value of both source- and target-domain data and selects the samples most valuable for training, greatly reducing the model's need for labelled data while preserving accuracy. Specifically, it first uses minimax entropy and core-set sampling in an active-learning value-estimation model to select target-domain samples, yielding a single active domain adaptation algorithm, S_AcT (Single active domain adaptation). It then uses a loss-prediction strategy to adapt the value-estimation strategy to the source domain, further improving the effectiveness of knowledge reuse in transfer learning and lowering training cost. Experiments on four common image transfer datasets compare the two proposed algorithms with traditional active transfer learning and semi-supervised transfer learning algorithms. The results show that the dual active domain adapta…
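As a minimal illustration of the sample-value idea behind active selection, one common criterion is to rank unlabelled target samples by predictive entropy; this is a hedged sketch only, since the paper's actual D_AcT scoring combines minimax entropy, core-set sampling, and loss prediction, and the function names and toy numbers here are assumptions:

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of each sample's predicted class distribution;
    high entropy marks the samples the model is least sure about."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def pick_most_valuable(probs, k):
    """Return indices of the k most uncertain (highest-entropy) samples."""
    ent = predictive_entropy(probs)
    return np.argsort(-ent)[:k].tolist()

# Toy target-domain predictions for three unlabelled samples.
probs = np.array([
    [0.5, 0.5],    # maximally uncertain -> most worth labelling
    [0.99, 0.01],  # confident
    [0.7, 0.3],
])
chosen = pick_most_valuable(probs, 1)
print(chosen)
```

Labelling effort then goes only to the selected indices, which is how active learning reduces annotation cost.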

4.
The success of deep learning depends on massive training data, but obtaining large-scale labelled data is difficult, expensive, and time-consuming; moreover, because data distributions differ across scenarios, a model trained on data from one scenario often performs poorly in others. Transfer learning, which moves knowledge from one domain to another, can address these problems, and deep transfer learning implements it within deep learning frameworks. This paper proposes a pseudo-label-based deep transfer learning algorithm. Using ResNet-50 as the backbone, it assigns pseudo-labels to target-domain samples through a selection mechanism that balances confidence and class distribution, then performs self-training to achieve accurate classification of the target domain. On three transfer tasks from the Office-31 dataset, its average accuracy exceeds traditional algorithms by 5.0%. The algorithm introduces no additional network parameters, respects the privacy of source-domain data, and is highly portable, giving it practical value.
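A confidence-and-class-balance pseudo-label filter of the kind described could be sketched as below; this is an illustrative stand-in rather than the paper's exact mechanism, and the threshold and per-class cap are assumed parameters:

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9, per_class_cap=2):
    """Keep target samples whose top softmax probability clears a
    confidence threshold, capping how many are kept per predicted
    class so the pseudo-labelled set stays roughly class-balanced."""
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    selected = []
    for c in range(probs.shape[1]):
        idx = np.where((preds == c) & (conf >= threshold))[0]
        idx = idx[np.argsort(-conf[idx])][:per_class_cap]  # most confident first
        selected.extend(idx.tolist())
    return sorted(selected), preds

# Toy target-domain softmax outputs (4 samples, 2 classes).
probs = np.array([
    [0.95, 0.05],   # confident class 0 -> kept
    [0.60, 0.40],   # below threshold  -> dropped
    [0.08, 0.92],   # confident class 1 -> kept
    [0.97, 0.03],   # confident class 0 -> kept
])
sel, preds = select_pseudo_labels(probs)
print(sel)  # indices of samples that receive pseudo-labels
```

Self-training would then fine-tune the backbone on the selected samples using `preds` as labels, repeating the filter as confidence improves.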

5.
The deep-decision-tree transfer learning boosting method (DTrBoost) can effectively transfer from a single supervised source domain to one target domain, but it cannot handle the unsupervised setting with multiple source domains. To address this, an unsupervised transfer learning boosting method with optimized weights over multiple source distributions is proposed. The main idea is to compute the KL divergence between each source domain and the target domain, select by comparison an appropriate number of samples from the different source domains to train classifiers, and assign pseudo-labels to the target-domain samples. Finally, learning weights are allocated to the source domains according to their KL distances, and the labelled source samples are ensemble-trained together with the pseudo-labelled target domain to obtain the final result. Comparative experiments show that the proposed algorithm achieves better classification accuracy and adapts to different datasets, reducing the classification error rate by 2.4% on average and by more than 6% on the best-performing marketing dataset.
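The KL-based source weighting idea could look like the sketch below; the inverse-KL weighting rule, the function names, and the toy distributions are assumptions for illustration, not the paper's exact scheme:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def source_weights(source_dists, target_dist):
    """Give each source domain a weight inversely proportional to its
    KL distance from the target, so closer sources count for more in
    the ensemble."""
    kls = np.array([kl_divergence(d, target_dist) for d in source_dists])
    inv = 1.0 / (kls + 1e-6)   # guard against division by zero
    return inv / inv.sum()     # normalize weights to sum to 1

# Toy feature distributions: source 0 matches the target, source 1 is skewed.
target = [0.5, 0.5]
sources = [[0.5, 0.5], [0.9, 0.1]]
w = source_weights(sources, target)
print(w)  # source 0 receives the larger weight
```

The resulting weights would scale each source's vote when the per-source classifiers are combined on the pseudo-labelled target data.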

6.
A Multi-Source Transfer Algorithm Based on Similarity Learning   Cited: 1 (0 self, 1 other)
卞则康, 王士同. 控制与决策 (Control and Decision), 2017, 32(11): 1941-1948
For scenarios where training data with the same distribution as the test data is scarce but related domains contain abundant training data with similar distributions, a multi-source transfer learning algorithm based on similarity learning (SL-MSTL) is proposed. Building on the classical SVM classification model, it introduces a new transfer classification model that additionally learns the similarity between each source domain and the target domain, allowing the useful information in each source domain to be exploited effectively and improving classification in the target domain. Experimental results demonstrate the effectiveness and practicality of SL-MSTL.

7.
Domain adaptation is the transfer learning paradigm for settings where source and target samples are not independent and identically distributed, and is a major focus of current research. In practice, however, source samples are gathered through more than one channel or method, so the source may contain samples from several different distributions. Multi-source domain adaptation addresses this diversity of source distributions: it studies the relationships among the source distributions and strategies for aligning them with the target distribution, further reducing the domain shift between domains, which makes it both practically significant and challenging. With advances in deep learning, multi-source domain adaptation methods mainly use deep neural networks to extract domain-invariant features from each domain as the basis for distribution alignment, combined with metric criteria that measure distribution discrepancy or with adversarial approaches that align the distributions across domains. Theory and experiments show that models trained with multi-source domain adaptation generalize better than those trained with single-source methods and better meet real-world needs. This paper introduces the state of research and key concepts of multi-source domain adaptation, surveys existing algorithms, classifies multi-source methods by transfer style, analyzes experimental results on their performance, discusses their shortcomings, and forecasts trends in the field.

8.
In traditional federated learning, multiple clients train local models independently on their private data, and a central server aggregates the local models into a shared global model. Because of statistical heterogeneity such as non-independent and identically distributed (Non-IID) data, however, a single global model often fails to fit every client. To address this, this paper proposes APFL, a federated learning aggregation algorithm for Non-IID data based on the AP (affinity propagation) clustering algorithm. In APFL, the server computes a similarity matrix between clients from their data characteristics, partitions the clients into clusters with AP clustering to build a multi-center framework, and computes suitable personalized model weights for each client. Experiments on the FMNIST and CIFAR10 datasets show that, compared with traditional FedAvg, APFL improves accuracy by 1.88 percentage points on FMNIST and 6.08 percentage points on CIFAR10. The results demonstrate that the proposed APFL can improve the accuracy of federated learning on Non-IID data.
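The server-side similarity matrix that APFL hands to clustering could be sketched as below; using per-client label distributions as the "data characteristics" is an assumption, and the greedy threshold grouping is a dependency-free stand-in for affinity propagation, not the AP algorithm itself:

```python
import numpy as np

def cosine_similarity_matrix(features):
    """Pairwise cosine similarity between per-client feature vectors
    (here, each client's label distribution) -- the kind of matrix a
    clustering step would consume."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    return f @ f.T

def greedy_clusters(sim, threshold=0.95):
    """Group clients whose similarity exceeds a threshold; a simple
    stand-in for affinity propagation clustering."""
    n = sim.shape[0]
    labels = [-1] * n
    next_label = 0
    for i in range(n):
        if labels[i] == -1:
            labels[i] = next_label
            for j in range(i + 1, n):
                if labels[j] == -1 and sim[i, j] >= threshold:
                    labels[j] = next_label
            next_label += 1
    return labels

# Three clients: 0 and 1 have similar Non-IID label skews, 2 differs.
clients = np.array([
    [0.8, 0.1, 0.1],
    [0.7, 0.2, 0.1],
    [0.1, 0.1, 0.8],
])
labels = greedy_clusters(cosine_similarity_matrix(clients))
print(labels)  # clients 0 and 1 share a cluster; client 2 is separate
```

Each resulting cluster would then get its own aggregated center model, which is the multi-center idea behind personalization.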

9.
张振宇, 杨健. 自动化学报 (Acta Automatica Sinica), 2023, (7): 1446-1455
Online adaptation for stereo depth estimation is a challenging problem: the model must continuously adjust itself online and adapt to the current environment as the target scene changes. To handle this, a new online meta-learning adaptation algorithm (OMLA, Online meta-learning model with adaptation) is proposed, with two main contributions: first, an online feature-alignment method handles the distribution shift between target- and source-domain features, reducing the effect of domain shift; second, online meta-learning adjusts both the feature-alignment process and the network weights so the model converges quickly. In addition, a new meta-learning-based pretraining method is proposed to obtain deep network parameters suited to the online learning setting. Experimental analysis shows that both OMLA and the meta-learning pretraining help the model adapt quickly to new scenes; comparisons on the KITTI dataset show that the method surpasses the current best online adaptation algorithms and approaches or even exceeds an ideal model trained offline on the target domain.

10.
A Fine-Grained Image Classification Method Based on Deep Model Transfer   Cited: 1 (0 self, 1 other)
刘尚旺, 郜翔. 计算机应用 (Journal of Computer Applications), 2018, 38(8): 2198-2204
To address the high model complexity of fine-grained image classification methods and the difficulty of exploiting deeper models, a deep model transfer (DMT) classification method is proposed. First, a deep model is pretrained on a coarse-grained image dataset; then the logits layer of the pretrained model undergoes inexact supervised learning on a fine-grained image dataset, shifting its feature distribution toward that of the new dataset; finally, the transferred model is exported and evaluated on the corresponding test set. Experimental results show that DMT reaches classification accuracies of 72.23%, 73.33%, and 96.27% on the Stanford Dogs, CUB-200-2011, and Oxford Flowers-102 fine-grained image datasets respectively, verifying the effectiveness of deep model transfer for fine-grained image classification.

11.
Auer, Peter; Long, Philip M.; Maass, Wolfgang; Woeginger, Gerhard J. Machine Learning, 1995, 18(2-3): 187-230
The majority of results in computational learning theory are concerned with concept learning, i.e. with the special case of function learning for classes of functions with range {0, 1}. Much less is known about the theory of learning functions with a larger range such as ℕ or ℝ. In particular, relatively few results exist about the general structure of common models for function learning, and there are only very few nontrivial function classes for which positive learning results have been exhibited in any of these models. We introduce in this paper the notion of a binary branching adversary tree for function learning, which allows us to give a somewhat surprising equivalent characterization of the optimal learning cost for learning a class of real-valued functions (in terms of a max-min definition which does not involve any learning model). Another general structural result of this paper relates the cost for learning a union of function classes to the learning costs for the individual function classes. Furthermore, we exhibit an efficient learning algorithm for learning convex piecewise linear functions from ℝ^d into ℝ. Previously, the class of linear functions from ℝ^d into ℝ was the only class of functions with multidimensional domain that was known to be learnable within the rigorous framework of a formal model for online learning. Finally we give a sufficient condition for an arbitrary class 𝒞 of functions from ℝ into ℝ that allows us to learn the class of all functions that can be written as the pointwise maximum of k functions from 𝒞. This allows us to exhibit a number of further nontrivial classes of functions from ℝ into ℝ for which there exist efficient learning algorithms.

12.
Transfer in variable-reward hierarchical reinforcement learning   Cited: 2 (1 self, 1 other)
Transfer learning seeks to leverage previously learned tasks to achieve faster learning in a new task. In this paper, we consider transfer learning in the context of related but distinct Reinforcement Learning (RL) problems. In particular, our RL problems are derived from Semi-Markov Decision Processes (SMDPs) that share the same transition dynamics but have different reward functions that are linear in a set of reward features. We formally define the transfer learning problem in the context of RL as learning an efficient algorithm to solve any SMDP drawn from a fixed distribution after experiencing a finite number of them. Furthermore, we introduce an online algorithm to solve this problem, Variable-Reward Reinforcement Learning (VRRL), that compactly stores the optimal value functions for several SMDPs, and uses them to optimally initialize the value function for a new SMDP. We generalize our method to a hierarchical RL setting where the different SMDPs share the same task hierarchy. Our experimental results in a simplified real-time strategy domain show that significant transfer learning occurs in both flat and hierarchical settings. Transfer is especially effective in the hierarchical setting where the overall value functions are decomposed into subtask value functions which are more widely amenable to transfer across different SMDPs.
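The initialization idea in the abstract (store optimal value functions for solved SMDPs, reuse them for a new one) can be caricatured as below; picking the stored value function whose reward-feature weights are nearest the new task's is a simplified stand-in, not VRRL's actual initialization, and all names and numbers are toy assumptions:

```python
import numpy as np

def init_value_function(stored_ws, stored_vs, new_w):
    """Initialize a new SMDP's value function from the stored optimal
    value function whose reward-feature weight vector is nearest the
    new task's weights (simplified stand-in for VRRL's warm start)."""
    dists = [np.linalg.norm(np.asarray(w) - np.asarray(new_w))
             for w in stored_ws]
    return stored_vs[int(np.argmin(dists))]

# Two previously solved SMDPs over 2 states, each with its own
# reward-feature weights and optimal state values (toy numbers).
stored_ws = [(1.0, 0.0), (0.0, 1.0)]
stored_vs = [np.array([5.0, 2.0]), np.array([1.0, 4.0])]

# A new task whose reward weights resemble the first stored task.
v_init = init_value_function(stored_ws, stored_vs, (0.9, 0.1))
print(v_init)
```

The warm-started values would then be refined by ordinary value iteration or RL on the new SMDP, which is where the transfer speedup comes from.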

13.
This article studies self-directed learning, a variant of the on-line (or incremental) learning model in which the learner selects the presentation order for the instances. Alternatively, one can view this model as a variation of learning with membership queries in which the learner is only charged for membership queries for which it could not predict the outcome. We give tight bounds on the complexity of self-directed learning for the concept classes of monomials, monotone DNF formulas, and axis-parallel rectangles in {0, 1, …, n − 1}^d. These results demonstrate that the number of mistakes under self-directed learning can be surprisingly small. We then show that learning complexity in the model of self-directed learning is less than that of all other commonly studied on-line and query learning models. Next we explore the relationship between the complexity of self-directed learning and the Vapnik-Chervonenkis (VC-)dimension. We show that, in general, the VC-dimension and the self-directed learning complexity are incomparable. However, for some special cases, we show that the VC-dimension gives a lower bound for the self-directed learning complexity. Finally, we explore a relationship between Mitchell's version space algorithm and the existence of self-directed learning algorithms that make few mistakes.

14.
In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables.

15.
Kearns, Michael; Seung, H. Sebastian. Machine Learning, 1995, 18(2-3): 255-276
We introduce a new formal model in which a learning algorithm must combine a collection of potentially poor but statistically independent hypothesis functions in order to approximate an unknown target function arbitrarily well. Our motivation includes the question of how to make optimal use of multiple independent runs of a mediocre learning algorithm, as well as settings in which the many hypotheses are obtained by a distributed population of identical learning agents.

16.
刘晓, 毛宁. 数据采集与处理 (Journal of Data Acquisition and Processing), 2015, 30(6): 1310-1317
A learning automaton (LA) is an adaptive decision-maker that learns, through repeated interaction with a random environment, to choose the optimal action from an allowed action set. In most traditional LA models the action set is taken to be finite, so continuous-parameter learning problems require discretizing the action space, and learning precision depends on the discretization granularity. This paper proposes a new continuous action-set learning automaton (CALA) whose action set is a variable interval and which selects output actions uniformly from that interval. The learning algorithm uses binary feedback signals from the environment to adaptively update the endpoints of the action interval. A simulation on a multimodal learning problem demonstrates the superiority of the new algorithm over three existing CALA algorithms.
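The interval-based CALA loop might be sketched as follows; the endpoint-update rule shown (contract both endpoints toward a positively rewarded action, leave them on negative feedback) and the synthetic binary-feedback environment are illustrative assumptions, not the paper's exact algorithm:

```python
import random

def select_action(low, high, rng):
    """Actions are drawn uniformly from the current interval."""
    return rng.uniform(low, high)

def update_interval(low, high, action, reward, rate=0.2):
    """Illustrative endpoint update: positive binary feedback contracts
    both endpoints toward the rewarded action; negative feedback leaves
    the interval unchanged."""
    if reward:
        low += rate * (action - low)
        high -= rate * (high - action)
    return low, high

rng = random.Random(0)
low, high = 0.0, 10.0   # initial action interval
optimum = 3.0           # unknown to the automaton

for _ in range(200):
    a = select_action(low, high, rng)
    # Synthetic binary feedback: reward actions that improve on the
    # interval midpoint's distance to the optimum.
    reward = abs(a - optimum) < abs((low + high) / 2 - optimum)
    low, high = update_interval(low, high, a, reward)

print(low, high)  # the interval has contracted around the optimum
```

Because the interval shrinks only toward rewarded actions, its width decays geometrically while its midpoint drifts to the optimum, which is the intuition behind interval-valued action sets.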

17.
Massive Open Online Courses (MOOCs) require individual learners to self-regulate their own learning, determining when, how and with what content and activities they engage. However, MOOCs attract a diverse range of learners, from a variety of learning and professional contexts. This study examines how a learner's current role and context influences their ability to self-regulate their learning in a MOOC: Introduction to Data Science offered by Coursera. The study compared the self-reported self-regulated learning behaviour between learners from different contexts and with different roles. Significant differences were identified between learners who were working as data professionals or studying towards a higher education degree and other learners in the MOOC. The study provides an insight into how an individual's context and role may impact their learning behaviour in MOOCs.

18.
Ram, Ashwin. Machine Learning, 1993, 10(3): 201-248
This article describes how a reasoner can improve its understanding of an incompletely understood domain through the application of what it already knows to novel problems in that domain. Case-based reasoning is the process of using past experiences stored in the reasoner's memory to understand novel situations or solve novel problems. However, this process assumes that past experiences are well understood and provide good lessons to be used for future situations. This assumption is usually false when one is learning about a novel domain, since situations encountered previously in this domain might not have been understood completely. Furthermore, the reasoner may not even have a case that adequately deals with the new situation, or may not be able to access the case using existing indices. We present a theory of incremental learning based on the revision of previously existing case knowledge in response to experiences in such situations. The theory has been implemented in a case-based story understanding program that can (a) learn a new case in situations where no case already exists, (b) learn how to index the case in memory, and (c) incrementally refine its understanding of the case by using it to reason about new situations, thus evolving a better understanding of its domain through experience. This research complements work in case-based reasoning by providing mechanisms by which a case library can be automatically built for use by a case-based reasoning program.

19.
20.
We study a model of probably exactly correct (PExact) learning that can be viewed either as the Exact model (learning from equivalence queries only) relaxed so that counterexamples to equivalence queries are distributionally drawn rather than adversarially chosen or as the probably approximately correct (PAC) model strengthened to require a perfect hypothesis. We also introduce a model of probably almost exactly correct (PAExact) learning that requires a hypothesis with negligible error and thus lies between the PExact and PAC models. Unlike the Exact and PExact models, PAExact learning is applicable to classes of functions defined over infinite instance spaces. We obtain a number of separation results between these models. Of particular note are some positive results for efficient parallel learning in the PAExact model, which stand in stark contrast to earlier negative results for efficient parallel Exact learning.
