SCOPE: Signal-Calibrated On-Policy Distillation Enhancement with Dual-Path Adaptive Weighting

Academic Frontier · Score 5.9 — above average: some informational gain and reference value

Source: cs.AI updates on arXiv.org · Published 2026-04-14


arXiv:2604.10688v1 · Announce Type: cross

Abstract: On-policy reinforcement learning has become the dominant paradigm for reasoning alignment in large language models, yet its sparse, outcome-level rewards make token-level credit assignment notoriously difficult. On-Policy Distillation (OPD) alleviates this by introducing dense, token-level KL supervision from a teacher model, but typically applies this supervision uniformly across all rollouts, ignoring fundamental differences in signal quality.…
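The dense, token-level KL supervision the abstract describes can be illustrated with a minimal sketch. The paper's exact loss is not given in this excerpt, so the code below is a generic assumption: a per-token forward KL divergence KL(teacher ‖ student) computed from the two models' logits over a shared vocabulary, yielding one supervision signal per token rather than a single outcome-level reward. Function names and shapes are illustrative, not from the paper.

```python
import numpy as np

def log_softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable log-softmax over the last (vocab) axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def token_level_kl(teacher_logits: np.ndarray,
                   student_logits: np.ndarray) -> np.ndarray:
    """Per-token forward KL(teacher || student).

    Inputs have shape (seq_len, vocab_size); output has shape (seq_len,),
    i.e. a dense supervision signal at every token of the rollout.
    """
    log_p = log_softmax(teacher_logits)   # teacher log-probs
    log_q = log_softmax(student_logits)   # student log-probs
    p = np.exp(log_p)
    return (p * (log_p - log_q)).sum(axis=-1)

# Toy example: a 3-token rollout over a 4-word vocabulary.
teacher = np.array([[2.0, 0.0, 0.0, 0.0],
                    [0.0, 1.0, 1.0, 0.0],
                    [0.0, 0.0, 0.0, 3.0]])
student = np.array([[2.0, 0.0, 0.0, 0.0],   # matches teacher -> KL ~ 0
                    [1.0, 0.0, 0.0, 1.0],   # disagrees -> KL > 0
                    [0.0, 0.0, 0.0, 3.0]])  # matches teacher -> KL ~ 0
per_token = token_level_kl(teacher, student)
```

In a training loop, each entry of `per_token` would contribute to the student's loss, giving credit assignment at every position; the weighting scheme across rollouts (uniform in plain OPD, adaptive in SCOPE per the abstract) is applied on top of this signal.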