Chinese Classical Poetry and Couplet Generation Based on Multi-task Learning
Cite this article: WEI Wancheng, HUANG Wenming, WANG Jing, DENG Zhenrong. Chinese Classical Poetry and Couplet Generation Based on Multi-task Learning[J]. Journal of Chinese Information Processing, 2019, 33(11): 115-124.
Authors: WEI Wancheng, HUANG Wenming, WANG Jing, DENG Zhenrong
Affiliations: 1. School of Computer Science and Technology, Guilin University of Electronic Technology, Guilin, Guangxi 541004, China;
2. Guangxi Key Laboratory of Efficient Cloud Computing and Complex Systems, Guilin, Guangxi 541004, China
Funding: Guangxi Colleges and Universities Key Laboratory of Cloud Computing and Complex Systems Project (yf17106); Guangxi Natural Science Foundation (2018GXNSFAA138132); Guilin University of Electronic Technology Graduate Education Innovation Program (2018YJCX55)
Abstract: Automatic generation of classical Chinese poetry and couplets is a highly challenging task. This paper proposes a novel multi-task learning model for generating both. The model adopts an encoder-decoder structure with an attention mechanism: the encoder consists of two BiLSTMs, one for the keyword input and one for the poetry and couplet input; the decoder consists of two LSTMs, one for decoding poetry output and one for decoding couplet output. In traditional Chinese literature, classical poetry and couplets share many features. By sharing the encoder parameters while keeping the decoder parameters separate, the multi-task model lets the lower, encoding layers capture features common to poetry and couplets while each decoder retains its task-specific features, which strengthens generalization and yields performance far better than single-task models. In addition, the model innovatively incorporates keyword information so that the content of the generated poetry and couplets matches the user's intent. Finally, both automatic and human evaluation verify the effectiveness of the method.

Keywords: LSTM; multi-task learning; attention mechanism; classical poetry and couplet generation

Chinese Classical Poetry and Couplet Generation Based on Multi-task Learning
WEI Wancheng, HUANG Wenming, WANG Jing, DENG Zhenrong. Chinese Classical Poetry and Couplet Generation Based on Multi-task Learning[J]. Journal of Chinese Information Processing, 2019, 33(11): 115-124.
Authors: WEI Wancheng, HUANG Wenming, WANG Jing, DENG Zhenrong
Affiliation: 1. College of Computer and Information Security, Guilin University of Electronic Technology, Guilin, Guangxi 541004, China;
2.Guangxi Key Laboratory of Efficient Cloud Computing and Complex Systems, Guilin, Guangxi 541004, China
Abstract: This paper proposes a novel multi-task learning model for the automatic generation of classical poetry and couplets, using an encoder-decoder structure with an attention mechanism. The encoder consists of two BiLSTMs, one for the keyword input and the other for the classical poetry and couplet input. The decoder consists of two LSTMs, one for poetry output and the other for couplet output. In the multi-task learning model, the encoder parameters are shared and the decoder parameters are not: the encoder learns features common to classical poetry and couplets, while each decoder learns the features unique to its own task. This enhances the generalization ability of the model, whose performance is much better than that of single-task models. In addition, this paper innovatively introduces keyword information into the model, so that the generated classical poetry and couplets are consistent with the user's intention. Finally, automatic evaluation and manual evaluation are both used to verify the effectiveness of the method.
Keywords: LSTM; multi-task learning; attention mechanism; classical poetry and couplet generation
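The architecture described in the abstract — two shared BiLSTM encoders (keywords and poem/couplet text) feeding two task-specific LSTM decoders with attention — can be sketched roughly as follows. The paper does not publish code, so the framework (PyTorch), the layer sizes, and all class and parameter names below are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class MultiTaskPoetryCoupletModel(nn.Module):
    """Sketch: shared BiLSTM encoders, per-task LSTM decoders, dot-product attention."""

    def __init__(self, vocab_size=5000, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Two encoders, both SHARED between the poetry task and the couplet task:
        self.keyword_enc = nn.LSTM(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        self.text_enc = nn.LSTM(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        # Query projection for attention over the (2 * hid_dim)-wide encoder memory:
        self.query = nn.Linear(emb_dim, 2 * hid_dim)
        # Two decoders and output layers, one PER TASK (parameters not shared):
        self.poetry_dec = nn.LSTM(emb_dim + 2 * hid_dim, hid_dim, batch_first=True)
        self.couplet_dec = nn.LSTM(emb_dim + 2 * hid_dim, hid_dim, batch_first=True)
        self.poetry_out = nn.Linear(hid_dim, vocab_size)
        self.couplet_out = nn.Linear(hid_dim, vocab_size)

    def forward(self, keywords, context, target, task="poetry"):
        # Encode keywords and previously generated lines; concatenate both memories.
        kw_mem, _ = self.keyword_enc(self.embed(keywords))   # (B, Lk, 2H)
        tx_mem, _ = self.text_enc(self.embed(context))       # (B, Lc, 2H)
        memory = torch.cat([kw_mem, tx_mem], dim=1)          # (B, Lk+Lc, 2H)
        # Pick the decoder for the requested task.
        decoder, proj = ((self.poetry_dec, self.poetry_out) if task == "poetry"
                         else (self.couplet_dec, self.couplet_out))
        emb = self.embed(target)                             # (B, Lt, E)
        # Dot-product attention over the encoder memory.
        scores = torch.bmm(self.query(emb), memory.transpose(1, 2))
        attn = torch.softmax(scores, dim=-1)                 # (B, Lt, Lk+Lc)
        ctx = torch.bmm(attn, memory)                        # (B, Lt, 2H)
        dec_out, _ = decoder(torch.cat([emb, ctx], dim=-1))  # (B, Lt, H)
        return proj(dec_out)                                 # (B, Lt, vocab)

# Usage with random token ids (batch of 2, 4 keywords, 7-token lines):
model = MultiTaskPoetryCoupletModel()
keywords = torch.randint(0, 5000, (2, 4))
context = torch.randint(0, 5000, (2, 7))
target = torch.randint(0, 5000, (2, 7))
logits = model(keywords, context, target, task="couplet")    # (2, 7, 5000)
```

Note one simplification: here the attention query is projected from the target embeddings in a single batched pass, whereas an attention decoder would normally condition on its own hidden state step by step during generation; the sketch only illustrates the parameter-sharing pattern (shared encoders, split decoders) that the abstract credits for the gain over single-task models.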