
The Reasoning Trap: How Enhancing LLM Reasoning Amplifies Tool Hallucination

Academic Frontier, score 8.0 — The Reasoning Trap: the paradox that enhancing reasoning amplifies tool hallucination; introduces SimpleToolHalluBench
Source: cs.LG updates on arXiv.org

Score: 8 · Source: cs.LG updates on arXiv.org · Published: 2026-04-20

Scoring rationale: The Reasoning Trap: the paradox that enhancing reasoning amplifies tool hallucination; introduces SimpleToolHalluBench

Key Points

arXiv:2510.22977v2 Announce Type: replace Abstract: Enhancing the reasoning capabilities of Large Language Models (LLMs) is a key strategy for building Agents that “think then act.” However, recent observations, like OpenAI’s o3, suggest a paradox: stronger reasoning often coincides with increased hallucination, yet no prior work has systematically examined whether reasoning enhancement itself causes tool hallucination. To address this gap, we pose the central question: Does strengthening reasoning increase tool hallucination? To answer this, we introduce SimpleToolHalluBench, a diagnostic ben…
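To make the notion of "tool hallucination" concrete, here is a minimal illustrative sketch of how one might flag a hallucinated tool call: an agent invoking a tool that does not exist, or passing parameters the tool does not accept. All tool names and the checking logic are assumptions for illustration; they are not taken from SimpleToolHalluBench or the paper.

```python
# Minimal sketch: flag "tool hallucination" by validating an agent's
# proposed tool call against the registry of tools actually available.
# Tool names and parameters here are hypothetical, not from the paper.

AVAILABLE_TOOLS = {
    "search_web": {"query"},
    "read_file": {"path"},
}


def is_hallucinated_call(tool_name: str, args: dict) -> bool:
    """Return True if the call names a non-existent tool or invents parameters."""
    if tool_name not in AVAILABLE_TOOLS:
        return True  # the tool itself is fabricated
    # Any argument outside the tool's declared parameter set is fabricated.
    return not set(args) <= AVAILABLE_TOOLS[tool_name]


# A valid call passes; a confidently invented tool or parameter is flagged.
print(is_hallucinated_call("search_web", {"query": "o3 hallucination"}))  # False
print(is_hallucinated_call("fetch_database", {"table": "users"}))         # True
```

In the paper's framing, the concern is that stronger reasoning may make such fabricated calls more frequent or more plausible-looking, not less; a registry check like this only detects them after the fact.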

🤖 AI Commentary

This paper provides important information for the AI field and merits attention from industry practitioners.

