Boosted multi-task learning
Authors: Olivier Chapelle, Pannagadatta Shivaswamy, Srinivas Vadrevu, Kilian Weinberger, Ya Zhang, Belle Tseng
Affiliations: 1. Yahoo! Labs, Sunnyvale, CA, USA
2. Department of Computer Science, Cornell University, Ithaca, NY, USA
3. Washington University, Saint Louis, MO, USA
4. Shanghai Jiao Tong University, Shanghai, China
Abstract: In this paper we propose a novel algorithm for multi-task learning with boosted decision trees. We learn several different learning tasks with a joint model, explicitly addressing their commonalities through shared parameters and their differences with task-specific ones. This enables implicit data sharing and regularization. Our algorithm is derived using the relationship between ℓ1-regularization and boosting. We evaluate our learning method on web-search ranking data sets from several countries. Here, multi-task learning is particularly helpful, as data sets from different countries vary greatly in size because of the cost of editorial judgments. Further, the proposed method obtains state-of-the-art results on a publicly available multi-task dataset. Our experiments validate that learning various tasks jointly can lead to significant improvements in performance with surprising reliability.
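
The abstract describes boosting with a mix of shared and task-specific trees. The sketch below is only an illustration of that general idea under assumed details, not the authors' exact algorithm: it assumes squared-error loss, a greedy choice each round between one tree fit on the pooled residuals of all tasks and one tree per individual task, and scikit-learn regression trees; the function names, learning rate, and selection rule are all illustrative assumptions.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boosted_multitask_fit(X_by_task, y_by_task, n_rounds=50, lr=0.1, max_depth=3):
    # Greedy boosting sketch: each round, fit a "shared" tree on the pooled
    # residuals of all tasks and a "specific" tree per task, then keep the
    # single candidate that most reduces total squared error.
    tasks = list(X_by_task)
    F = {t: np.zeros(len(y_by_task[t])) for t in tasks}   # current predictions
    ensemble = []                                          # list of (scope, tree)
    for _ in range(n_rounds):
        resid = {t: y_by_task[t] - F[t] for t in tasks}
        candidates = []
        # Shared candidate: one tree over all tasks' data.
        X_all = np.vstack([X_by_task[t] for t in tasks])
        r_all = np.concatenate([resid[t] for t in tasks])
        shared = DecisionTreeRegressor(max_depth=max_depth).fit(X_all, r_all)
        gain = sum(np.sum(resid[t] ** 2)
                   - np.sum((resid[t] - lr * shared.predict(X_by_task[t])) ** 2)
                   for t in tasks)
        candidates.append((gain, "shared", shared))
        # Task-specific candidates: one tree per task, fit on that task only.
        for t in tasks:
            tree = DecisionTreeRegressor(max_depth=max_depth).fit(X_by_task[t], resid[t])
            g = (np.sum(resid[t] ** 2)
                 - np.sum((resid[t] - lr * tree.predict(X_by_task[t])) ** 2))
            candidates.append((g, t, tree))
        _, scope, tree = max(candidates, key=lambda c: c[0])
        ensemble.append((scope, tree))
        for t in tasks:
            if scope == "shared" or scope == t:
                F[t] += lr * tree.predict(X_by_task[t])
    return ensemble

def boosted_multitask_predict(ensemble, X, task, lr=0.1):
    # A task's prediction sums the shared trees and its own specific trees.
    pred = np.zeros(len(X))
    for scope, tree in ensemble:
        if scope in ("shared", task):
            pred += lr * tree.predict(X)
    return pred

The greedy shared-versus-specific choice in each round is meant only to mirror, informally, the sparse/greedy selection that the paper motivates via the connection between ℓ1-regularization and boosting.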
Keywords:
This article is indexed in SpringerLink and other databases.