Chinese Chunking with Large Margin Method
Cite this article: ZHOU Jun-Sheng, DAI Xin-Yu, CHEN Jia-Jun, QU Wei-Guang. Chinese Chunking with Large Margin Method [J]. Journal of Software, 2009, 20(4): 870-877.
Authors: ZHOU Jun-Sheng  DAI Xin-Yu  CHEN Jia-Jun  QU Wei-Guang
Affiliations: 1. State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210093, China; Department of Computer Science, Nanjing Normal University, Nanjing 210097, China
2. State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210093, China
3. Department of Computer Science, Nanjing Normal University, Nanjing 210097, China
Foundation: Supported by the National Natural Science Foundation of China under Grant Nos. 60673043 and 60773173; the National High-Tech Research and Development Plan of China (863 Program) under Grant No. 2006AA01Z143; the Natural Science Foundation of Jiangsu Province of China under Grant No. BK2006117; and the Natural Science Foundation of Jiangsu Higher Education Institutions of China under Grant No. 07KJB520057
Abstract: Chinese chunking is an important subtask in Chinese information processing. Building on a structural SVM (support vector machine) model, this paper presents a large-margin method for Chinese chunking. First, a sequence labeling model is designed for the chunking task. The optimization objective of the discriminative labeling function is then derived from the large-margin principle, and the cutting-plane algorithm is applied to efficiently approximate the optimal feature parameters. Finally, an improved F1 loss function is proposed for chunk recognition: it scales the F1 loss value to the actual length of each sentence, so that the margin is adjusted accordingly and more effective constraint inequalities are introduced. Experiments on the Penn Chinese Treebank 4 (CTB4) dataset show that training with the improved F1 loss function outperforms training with the Hamming loss function. The overall F1 score across all chunk types is 91.61%, higher than that of state-of-the-art models such as CRFs (conditional random fields) and SVMs.
Keywords: Chinese chunking  large margin  discriminative learning  loss function
Received: 2007-03-13
Revised: 2007-11-05
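The sequence labeling model mentioned in the abstract casts chunking as per-token tagging, conventionally with BIO tags (B-X opens a chunk of type X, I-X continues it, O marks tokens outside any chunk). The minimal Python sketch below recovers chunk spans from such a tag sequence; the tag set and the toy example are illustrative assumptions, not taken from the paper's data.

    # Recover (type, start, end) chunk spans from a BIO tag sequence.
    def chunks_from_bio(tags):
        spans, start, ctype = [], None, None
        for i, tag in enumerate(tags):
            # Close the open chunk if the current tag does not continue it.
            if ctype is not None and tag != "I-" + ctype:
                spans.append((ctype, start, i))
                start, ctype = None, None
            if tag.startswith("B-"):
                start, ctype = i, tag[2:]
        if ctype is not None:
            spans.append((ctype, start, len(tags)))
        return spans

    print(chunks_from_bio(["B-NP", "B-VP", "I-VP", "B-NP"]))
    # -> [('NP', 0, 1), ('VP', 1, 3), ('NP', 3, 4)]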

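Margin-rescaled training of the kind described here requires the score of the gold labeling of each sentence to exceed the score of every alternative labeling by at least the loss of that alternative. The paper optimizes this objective with the cutting-plane algorithm; to keep the sketch below self-contained it instead applies a plain stochastic subgradient update to the same margin-rescaled objective, and it uses unary (per-token) features only, so that loss-augmented decoding under the Hamming loss reduces to a per-token argmax. The tag set, feature map, and helper names are all illustrative assumptions.

    import numpy as np

    TAGS = ["B-NP", "I-NP", "O"]                # illustrative tag set

    def psi(x, y, n_feats):
        """Joint feature map: counts of (token feature, tag) pairs."""
        f = np.zeros((len(TAGS), n_feats))
        for tok, tag in zip(x, y):
            f[TAGS.index(tag), tok] += 1.0
        return f.ravel()

    def loss_aug_decode(w, x, y, n_feats):
        """argmax over labelings of Hamming(y, y') + w . psi(x, y')."""
        W = w.reshape(len(TAGS), n_feats)
        out = []
        for tok, gold in zip(x, y):
            scores = W[:, tok] + np.array([t != gold for t in TAGS], float)
            out.append(TAGS[int(scores.argmax())])
        return out

    def train(examples, n_feats, C=0.1, epochs=20):
        w = np.zeros(len(TAGS) * n_feats)
        for ep in range(epochs):
            eta = 1.0 / (1.0 + ep)
            for x, y in examples:
                y_hat = loss_aug_decode(w, x, y, n_feats)
                if y_hat != y:                  # margin constraint violated
                    w += eta * C * (psi(x, y, n_feats) - psi(x, y_hat, n_feats))
                w *= 1.0 - eta * 1e-3           # shrinkage for the margin term
        return w

    data = [([0, 1, 2], ["B-NP", "I-NP", "O"]), ([2, 0], ["O", "B-NP"])]
    w = train(data, n_feats=3)

A real chunker would add transition features between adjacent tags and decode with Viterbi, but the shape of the large-margin update is unchanged.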
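The abstract contrasts two structured losses: the Hamming loss, which counts wrongly tagged tokens, and an improved F1 loss whose value is adjusted to the actual length of each sentence. The abstract does not give the exact form of that adjustment; the sketch below, which reuses chunks_from_bio from the first snippet, assumes the per-sentence F1 loss (1 - F1) is simply multiplied by sentence length.

    def hamming_loss(y, y_hat):
        """Number of positions where the two tag sequences disagree."""
        return sum(a != b for a, b in zip(y, y_hat))

    def f1_loss(y, y_hat):
        """(1 - chunk F1) scaled by sentence length (assumed scaling)."""
        gold = set(chunks_from_bio(y))
        pred = set(chunks_from_bio(y_hat))
        if not gold and not pred:
            return 0.0
        tp = len(gold & pred)
        p = tp / len(pred) if pred else 0.0
        r = tp / len(gold) if gold else 0.0
        f1 = 2 * p * r / (p + r) if p + r else 0.0
        return (1.0 - f1) * len(y)

Under this scaling, a completely wrong labeling of a long sentence incurs a larger loss than one of a short sentence, which is what lets training introduce more effective constraint inequalities, as the abstract claims.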