Adaptive adversarial prototyping network for few-shot prototypical translation
Affiliations:
1. Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai 200237, PR China
2. Department of Computer Science and Engineering, State Key Laboratory of Bioreactor Engineering, East China University of Science and Technology, Shanghai 200237, PR China
3. Business Intelligence and Visualization Research Center, National Engineering Laboratory for Big Data Distribution and Exchange Technologies, Shanghai 200436, PR China
4. Shanghai Engineering Research Center of Big Data & Internet Audience, Shanghai 200072, PR China
5. Innovation College, North-Chiang Mai University, 169 Moo 3, Nong Kaew, Hang Dong, Chiang Mai 50230, Thailand
6. International College of Digital Innovation, Chiang Mai University, Chiang Mai 50200, Thailand
7. Faculty of Information Science and Engineering, Ningbo University, Ningbo, China
8. School of Computer Science and Mathematics, Fujian University of Technology, Fuzhou 350118, China
Abstract: Translating multiple real-world source images into a single prototypical image is a challenging problem, particularly when the source images belong to unseen categories that were not present during model training. We address this problem by proposing an adaptive adversarial prototype network (AAPN) that builds on existing one-shot classification techniques. To overcome the limitation that traditional methods cannot handle samples from novel categories, our method solves the image-translation task for unseen categories through a meta-learner. We train the model in an adversarial manner and introduce a style encoder that guides the model with an initial target style; the encoded style latent code further improves the network's performance when conditional target-style images are available. AAPN outperforms state-of-the-art methods in one-shot classification on a brand-logo dataset and achieves competitive accuracy on a traffic-sign dataset. Additionally, our model improves the visual quality of prototypes reconstructed for unseen categories. Qualitative and quantitative analyses demonstrate the effectiveness of our model for few-shot classification and generation.
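The abstract builds on standard prototypical-network ideas: each class is represented by a prototype (the mean embedding of its few support examples), and queries are classified by their distance to the nearest prototype. The sketch below illustrates only that generic prototype step, not the authors' AAPN code; the function names and the toy 2-D "embeddings" are illustrative assumptions.

```python
import numpy as np

def class_prototypes(support, labels, n_classes):
    """Compute one prototype per class as the mean of its support embeddings."""
    return np.stack([support[labels == c].mean(axis=0) for c in range(n_classes)])

def nearest_prototype(query, prototypes):
    """Assign each query embedding to the class of its closest prototype."""
    dists = np.linalg.norm(query[:, None, :] - prototypes[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Toy 2-D "embeddings": two unseen classes, a few support examples each.
support = np.array([[0.0, 0.1], [0.2, -0.1], [9.8, 10.1], [10.2, 9.9]])
labels = np.array([0, 0, 1, 1])
protos = class_prototypes(support, labels, n_classes=2)

query = np.array([[0.5, 0.0], [9.0, 10.5]])
pred = nearest_prototype(query, protos)  # → array([0, 1])
```

In the full method, the embeddings would come from a learned encoder trained episodically, with the adversarial loss and style latent code shaping the generated prototypes.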
Keywords: Prototyping network; Few-shot image translation; Meta-learning; Generative adversarial network
This article is indexed in ScienceDirect and other databases.