
IF-RewardBench: Benchmarking Judge Models for Instruction-Following Evaluation

Academic Frontier · Score: 5.5 · Source: cs.CL updates on arXiv.org · Published: 2026-04-17

Scoring rationale: Fills a gap in judge-model evaluation for instruction following and addresses oversimplified pairwise evaluation paradigms.

arXiv:2603.04738v2 Announce Type: replace

Abstract: Instruction-following is a foundational capability of large language models (LLMs), and its improvement hinges on scalable and accurate feedback from judge models. However, the reliability of current judge models for instruction-following remains underexplored, owing to several deficiencies of existing meta-evaluation benchmarks, such as insufficient data coverage and oversimplified pairwise evaluation paradigms that misalign with model optimization scenarios.