FAITH: Factuality Alignment through Integrating Trustworthiness and Honestness

Academic Frontier · Score 5.0 — Medium quality: a standard academic paper with moderate reference value
Source: cs.CL updates on arXiv.org

Score 5.0 · Source: cs.CL updates on arXiv.org · Published 2026-04-14



arXiv:2604.10189v1 Announce Type: new Abstract: Large Language Models (LLMs) can generate factually inaccurate content even when they possess the corresponding knowledge, which critically undermines their reliability. Existing approaches attempt to mitigate this by incorporating uncertainty into QA prompts during training, but these numerical scores lack the semantic richness needed for an LLM to properly understand its internal states of trustworthiness and honestness, leading to insufficient factuality alignment.…
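To make the abstract's critique concrete, the sketch below contrasts the existing-style approach it describes (appending a bare numerical uncertainty score to a QA training prompt) with a verbal confidence expression that carries more semantic content. This is a hypothetical illustration, not code from the paper; the function names, prompt templates, and hedge thresholds are all assumptions.

```python
# Hypothetical illustration (not from the FAITH paper): contrast a bare
# numerical uncertainty score in a QA prompt with a verbal confidence
# expression that an LLM can interpret semantically.

def prompt_with_score(question: str, answer: str, confidence: float) -> str:
    """Existing-style prompt: append the raw numerical uncertainty score."""
    return f"Q: {question}\nA: {answer}\nConfidence: {confidence:.2f}"


def prompt_with_verbal_confidence(question: str, answer: str,
                                  confidence: float) -> str:
    """Map the score onto a verbal hedge (thresholds are illustrative)."""
    if confidence >= 0.9:
        hedge = "I am certain that the answer is"
    elif confidence >= 0.6:
        hedge = "I believe the answer is"
    else:
        hedge = "I am unsure, but the answer may be"
    return f"Q: {question}\nA: {hedge} {answer}"


print(prompt_with_score("What is the capital of France?", "Paris", 0.95))
print(prompt_with_verbal_confidence("What is the capital of France?",
                                    "Paris", 0.95))
```

The numerical variant leaves the model to infer what "0.95" means, while the verbal variant states the epistemic attitude in natural language, which is the kind of semantic richness the abstract argues numerical scores lack.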