Venue: Academic Activity Room 526, Xingjian Building
Host: Prof. Hailin Sun
Abstract: Reinforcement learning from human feedback (RLHF) is an essential technique for ensuring that large language models (LLMs) are aligned with human values and preferences during the post-training phase. As an effective RLHF approach, group relative policy optimization (GRPO) has demonstrated success in many LLM-based applications. However, efficient GRPO-based RLHF training remains a challenge. Recent studies reveal that a higher reward variance of the initial policy model leads to faster RLHF training. Inspired by this finding, we propose a practical reward adjustment model to accelerate RLHF training by provably increasing the reward variance while preserving the relative preferences and the reward expectation. Our reward adjustment method inherently poses a nonconvex optimization problem, which is NP-hard to solve in general. To overcome the computational challenges, we design a novel $O(n \log n)$ algorithm to find a global solution of the nonconvex reward adjustment model by explicitly characterizing the extreme points of the feasible set. As an important application, we naturally integrate this reward adjustment model into the GRPO algorithm, leading to a more efficient GRPO with reward variance increase (GRPOVI) algorithm for RLHF training. As an interesting byproduct, we provide an indirect explanation for the empirical effectiveness of GRPO with rule-based reward for RLHF training, as demonstrated in DeepSeek-R1. Experimental results demonstrate that the GRPOVI algorithm can significantly improve RLHF training efficiency compared to the original GRPO algorithm. This is a joint work with Zonglin Yang, Zhexuan Gu, and Houduo Qi.
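For readers unfamiliar with the reward-variance mechanism mentioned in the abstract, the short Python sketch below illustrates one toy way a reward transformation can increase variance while preserving the reward expectation and the relative preferences (ordering) within a group of sampled responses: scaling the deviations from the group mean by a factor alpha > 1. This is only an illustrative construction under that assumption; it is not the speaker's reward adjustment model, which is formulated as a nonconvex optimization problem and solved by the $O(n \log n)$ algorithm described above.

    import numpy as np

    def scale_rewards(rewards, alpha=2.0):
        """Toy reward adjustment: scale deviations from the group mean by alpha > 1.

        This preserves the mean (expectation), preserves the ranking of the
        rewards (relative preferences), and multiplies the variance by alpha**2.
        Purely illustrative; it ignores, e.g., bound constraints on the rewards.
        """
        r = np.asarray(rewards, dtype=float)
        return r.mean() + alpha * (r - r.mean())

    # Example: rewards of a group of responses sampled for one prompt,
    # as used in GRPO-style training.
    r = np.array([0.1, 0.3, 0.4, 0.9])
    r_adj = scale_rewards(r, alpha=2.0)

    assert np.isclose(r_adj.mean(), r.mean())            # expectation preserved
    assert (np.argsort(r) == np.argsort(r_adj)).all()    # relative preferences preserved
    assert r_adj.var() > r.var()                         # variance strictly increased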
Biography: Dr. Yancheng Yuan is an Assistant Professor in the Department of Applied Mathematics at The Hong Kong Polytechnic University (PolyU), Deputy Director of the PolyU-CITIC Group Joint Laboratory of Artificial Intelligence and Digital Intelligence Innovation, and Assistant Director of the PolyU Research Center for Intelligent Operations Research. His main research interests include continuous optimization, the mathematical foundations of artificial intelligence, and their applications to large language models, recommender systems, and healthcare. His work has been accepted and published in leading academic journals such as SIAM Journal on Optimization, Mathematical Programming Computation, Journal of the American Statistical Association, Journal of Machine Learning Research, and IEEE Transactions on Pattern Analysis and Machine Intelligence, as well as in major artificial intelligence conferences including NeurIPS, ICML, ICLR, ACM WWW, and ACM SIGIR. His research was selected as a Best Paper Award Finalist at ACM WWW 2021 and ACM SIGIR 2024.