Cross-modal representation learning and generation
Cite this article: Liu Huafeng, Chen Jingjing, Li Liang, Bao Bingkun, Li Zechao, Liu Jiaying, Nie Liqiang. Cross-modal representation learning and generation[J]. Journal of Image and Graphics, 2023, 28(6): 1608-1629.
Authors: Liu Huafeng  Chen Jingjing  Li Liang  Bao Bingkun  Li Zechao  Liu Jiaying  Nie Liqiang
Affiliations: School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China; School of Computer Science, Fudan University, Shanghai 200438, China; Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; College of Telecommunication and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 230001, China; Wangxuan Institute of Computer Technology, Peking University, Beijing 100871, China; School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), Shenzhen 518055, China
Funding: Natural Science Foundation of Jiangsu Province (BK20220936); China Postdoctoral Science Foundation (2022M721626)
Abstract: Multimedia data continue to grow explosively and exhibit heterogeneity in both source and structure, so research on cross-modal learning has been drawing increasing attention from academia and industry. Cross-modal representation and cross-modal generation are the two core foundational problems of cross-modal learning. Cross-modal representation aims to exploit the complementarity among modalities and eliminate inter-modal redundancy in order to obtain more effective feature representations; cross-modal generation builds on the semantic consistency between modalities to convert data from one modal form into another, which helps improve transferability across modalities. This paper systematically reviews important recent progress in cross-modal representation and generation, both in China and internationally, covering traditional cross-modal representation learning, representation learning with multimodal large models, image-to-text cross-modal conversion, and cross-modal image generation. Traditional cross-modal representation learning is discussed in terms of unified cross-modal representation and coordinated cross-modal representation; representation learning with multimodal large models focuses on Transformer-based models; image-to-text cross-modal conversion covers developments in image and video captioning, video-subtitle semantic analysis, and visual question answering; and cross-modal image generation is surveyed from the perspectives of joint representation methods for multi-modal information, cross-modal image generation techniques, and domain-specific image generation based on pre-trained models. The challenges of each of these subfields are reviewed in detail, progress in China and abroad is compared, and the development trajectory and research frontier are traced. Finally, based on this analysis, development trends and potential breakthroughs in cross-modal representation and generation are discussed.
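The coordinated cross-modal representation mentioned above is commonly learned under a similarity constraint that pulls matched cross-modal pairs together and pushes mismatched pairs apart. The following is a minimal, illustrative PyTorch sketch of one such constraint, a CLIP-style symmetric InfoNCE loss; it is a generic example rather than code from any method surveyed in the paper, and the random tensors stand in for the outputs of two modality-specific encoders.

```python
# A minimal sketch of coordinated cross-modal representation learning via
# a similarity constraint (CLIP-style symmetric InfoNCE). Illustrative
# only; encoder outputs are stubbed with random tensors below.
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    # L2-normalize so the dot product becomes cosine similarity.
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    # (N, N) similarity logits for a batch of N aligned image-text pairs.
    logits = img_emb @ txt_emb.t() / temperature
    # The i-th image matches the i-th text: targets lie on the diagonal.
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    # Symmetric cross-entropy over both retrieval directions.
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2

# Toy usage: random 512-d "embeddings" stand in for encoder outputs.
print(contrastive_loss(torch.randn(8, 512), torch.randn(8, 512)))
```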

Keywords: multimedia technology  cross-modal learning  foundation model  cross-modal representation  cross-modal generation  deep learning
Received: 15 January 2023
Revised: 17 February 2023

Cross-modal representation learning and generation
Liu Huafeng, Chen Jingjing, Li Liang, Bao Bingkun, Li Zechao, Liu Jiaying, Nie Liqiang. Cross-modal representation learning and generation[J]. Journal of Image and Graphics, 2023, 28(6): 1608-1629.
Authors: Liu Huafeng  Chen Jingjing  Li Liang  Bao Bingkun  Li Zechao  Liu Jiaying  Nie Liqiang
Affiliation: School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China; School of Computer Science, Fudan University, Shanghai 200438, China; Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; College of Telecommunication and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 230001, China; Wangxuan Institute of Computer Technology, Peking University, Beijing 100871, China; School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), Shenzhen 518055, China
Abstract: With the explosive growth of multimedia data, the multi-source and multi-modal character of the data has become a central challenge in multimedia research. Representation and generation are two key problems in cross-modal learning. Cross-modal representation studies feature learning and information integration from multi-modal data: to obtain more effective feature representations, the complementarity between modalities must be exploited while inter-modal redundancy is removed. Cross-modal generation focuses on the mechanism of knowledge transfer across modalities: the semantic consistency between modalities can be used to convert data from one modal form into another, which helps improve transferability across modalities. The literature on cross-modal representation and generation is critically analyzed from four aspects: 1) traditional cross-modal representation learning, 2) large-model-based cross-modal representation learning, 3) image-to-text cross-modal conversion, and 4) cross-modal image generation. Traditional cross-modal representation falls into two categories: joint representation and coordinated representation. Joint representation projects multiple single-modal inputs into a shared representation space, whereas coordinated representation processes each modality separately and learns cross-modal representations under similarity constraints. Deep neural networks (DNNs) with self-supervised learning can exploit large-scale unlabeled data, especially Transformer-based methods. Extending the supervised learning paradigm, large pre-trained models are first trained on large-scale unlabeled data, and a small amount of labeled data from downstream tasks is then used for fine-tuning. Compared with models trained for specific tasks, pre-trained models offer better versatility and transfer ability, and the fine-tuned models can be further optimized for downstream tasks. The development of cross-modal captioning methods (image captioning and video captioning) is summarized, including end-to-end, semantic-based, and stylized methods. In addition, the state of the art in cross-modal conversion between image and text is analyzed, covering image captioning, video captioning, and visual question answering. Cross-modal generation methods are likewise summarized with respect to the joint representation of cross-modal information, image generation, text-to-image cross-modal generation, and cross-modal generation based on pre-trained models. In recent years, generative adversarial networks (GANs) and denoising diffusion probabilistic models (DDPMs) have driven progress in cross-modal generation tasks. Thanks to the strong adaptability and generative ability of DDPMs, cross-modal generation research has advanced, and the problem of fragile textures has been alleviated to a certain extent. The development of GAN-based and DDPM-based methods is summarized and analyzed in detail.
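As a companion to the GAN/DDPM discussion above, the following is a minimal, self-contained sketch of the standard DDPM training objective (noise prediction, after Ho et al., 2020). It is a generic illustration, not code from the surveyed methods; `model` is a hypothetical placeholder for a U-Net-style denoiser, and text conditioning for cross-modal generation would enter as an additional model input.

```python
# A minimal, illustrative sketch of the DDPM training objective
# (noise prediction). `model` is a hypothetical stand-in for a
# conditional U-Net denoiser.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative \bar{alpha}_t

def ddpm_loss(model, x0: torch.Tensor) -> torch.Tensor:
    # Sample a random timestep per example and Gaussian noise.
    t = torch.randint(0, T, (x0.size(0),))
    eps = torch.randn_like(x0)
    a_bar = alphas_bar[t].view(-1, 1, 1, 1)
    # Closed-form forward process: x_t = sqrt(a_bar)*x0 + sqrt(1-a_bar)*eps.
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps
    # The network is trained to predict the injected noise.
    return torch.mean((model(x_t, t) - eps) ** 2)

# Toy usage: a do-nothing "denoiser" on a random batch of 32x32 images.
model = lambda x, t: torch.zeros_like(x)
print(ddpm_loss(model, torch.randn(4, 3, 32, 32)))
```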
Keywords:multimedia technology  cross-modal learning  foundation model  cross-modal representation  cross-modal generation  deep learning