Rating 5.5 · Source: cs.CL updates on arXiv.org · Published 2026-04-17
Rating rationale: Addresses the faithfulness gap between LLM explanations and actual reasoning, which is important for explainability
arXiv:2604.14325v1 Announce Type: new Abstract: Large language models (LLMs) achieve strong performance and have revolutionized NLP, but their lack of explainability means they are still treated as black boxes, limiting their use in domains that demand transparency and trust. A promising direction for addressing this issue is post-hoc text-based explanation, which aims to explain model decisions in natural language.