Rating: 6 · Source: cs.LG updates on arXiv.org · Published: 2026-04-17
Rating rationale: Addresses the credit-assignment problem in RL training of search agents; contribution weighting improves outcome supervision.
arXiv:2604.14267v1 Announce Type: new Abstract: Search agents extend Large Language Models (LLMs) beyond static parametric knowledge by enabling access to up-to-date and long-tail information unavailable during pretraining. While reinforcement learning has been widely adopted for training such agents, existing approaches face key limitations: process supervision often suffers from unstable value estimation, whereas outcome supervision struggles with credit assignment due to sparse, trajectory-level rewards.
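The credit-assignment difficulty the abstract describes can be made concrete with a minimal sketch. The functions and weights below are illustrative assumptions, not the paper's actual method: a sparse, trajectory-level reward split uniformly across steps gives every action identical feedback, while a contribution-weighted split directs more of the same total reward to the steps that mattered.

```python
# Hypothetical sketch (not the paper's algorithm): distributing a sparse,
# trajectory-level outcome reward across the steps of a search trajectory.

def uniform_credit(trajectory_reward, num_steps):
    """Outcome-supervision baseline: every step receives an equal share
    of the trajectory-level reward, so useful and useless steps are
    reinforced identically."""
    return [trajectory_reward / num_steps] * num_steps

def weighted_credit(trajectory_reward, contributions):
    """Contribution-weighted variant: steps with larger (assumed known)
    contribution scores receive a proportionally larger share of the
    same total reward."""
    total = sum(contributions)
    return [trajectory_reward * c / total for c in contributions]

# A 4-step trajectory whose final answer earned a reward of 1.0.
print(uniform_credit(1.0, 4))                       # every step gets 0.25
print(weighted_credit(1.0, [0.1, 0.5, 0.3, 0.1]))   # step 2 gets the most
```

Both schemes distribute the same total reward; the open question, which the abstract points at, is how to obtain reliable per-step contribution scores without the unstable value estimation that process supervision suffers from.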