
Exploring Knowledge Conflicts for Faithful LLM Reasoning: Benchmark and Method

Academic frontier · score 5.1 — medium quality: a standard academic paper of moderate reference value
Source: cs.AI updates on arXiv.org

Score 5.1 · Source: cs.AI updates on arXiv.org · Published 2026-04-14



arXiv:2604.11209v1 Announce Type: cross Abstract: Large language models (LLMs) have achieved remarkable success across a wide range of applications, especially when augmented with external knowledge through retrieval-augmented generation (RAG). Despite their widespread adoption, recent studies have shown that LLMs often struggle to perform faithful reasoning when conflicting knowledge is retrieved. However, existing work primarily focuses on conflicts between external knowledge and the parametric…
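To make the abstract's setting concrete, here is a minimal sketch (not the paper's benchmark or method; all function names are hypothetical) of how a RAG prompt is assembled from retrieved passages, and how two retrieved passages can assert conflicting facts — the kind of knowledge conflict the paper studies:

```python
# Illustrative sketch of a knowledge conflict in a RAG setup.
# Names (build_rag_prompt, has_conflict) are hypothetical, not from the paper.

def build_rag_prompt(question: str, passages: list[str]) -> str:
    """Standard RAG prompting: retrieved passages are prepended to the question."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

def has_conflict(claims: dict[str, str]) -> bool:
    """Naive inter-context conflict check: do retrieved sources give
    different answers to the same question? Real conflict detection
    requires semantic comparison, not exact string equality."""
    return len(set(claims.values())) > 1

# Two retrieved passages that disagree on a single fact.
passages = [
    "The Eiffel Tower is 330 metres tall.",
    "The Eiffel Tower is 324 metres tall.",
]
prompt = build_rag_prompt("How tall is the Eiffel Tower?", passages)

# Per-source answer extraction is assumed to have happened upstream.
claims = {"doc_a": "330", "doc_b": "324"}
print(has_conflict(claims))  # True: the retrieved evidence disagrees
```

A faithful reasoner should surface or resolve this disagreement rather than silently pick one answer, which is the failure mode the abstract describes.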