星际流动

Generating Effective CoT Traces for Mitigating Causal Hallucination

学术前沿 (Academic Frontier) — 3.0 points: Moderate AI relevance +novelty(1) +practical(1)
Original source: cs.CL updates on arXiv.org

Score 3.0 · Source: cs.CL updates on arXiv.org · Published 2026-04-15

arXiv:2604.12748v1 Announce Type: new Abstract: Although large language models (LLMs) excel in complex reasoning tasks, they suffer from severe causal hallucination in event causality identification (ECI), particularly in smaller models ($\leq$1.5B parameters). A promising approach to addressing this issue is to fine-tune them with Chain-of-Thought (CoT) traces. However, there is currently a lack of CoT trace datasets available for ECI. In this paper, we first investigate the essential criteria…
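The abstract describes fine-tuning small models on CoT traces for event causality identification. As a rough illustration only (the truncated abstract does not specify the paper's actual dataset schema, so every field name and prompt wording below is an assumption), a CoT-style supervised fine-tuning record for ECI might be assembled like this:

```python
# Hypothetical sketch: packaging an event-causality-identification (ECI)
# example with a Chain-of-Thought trace into a supervised fine-tuning
# record. Field names and prompt wording are illustrative assumptions,
# not the paper's dataset format.

def build_cot_sft_record(context, event_a, event_b, trace_steps, label):
    """Return a prompt/completion pair for CoT fine-tuning on ECI."""
    prompt = (
        f"Context: {context}\n"
        f"Question: Does the event '{event_a}' cause the event '{event_b}'? "
        "Think step by step, then answer CAUSAL or NOT_CAUSAL."
    )
    # The reasoning trace precedes the final label, so the model learns
    # to produce its steps before committing to an answer.
    reasoning = "\n".join(f"Step {i+1}: {s}" for i, s in enumerate(trace_steps))
    completion = f"{reasoning}\nAnswer: {label}"
    return {"prompt": prompt, "completion": completion}

record = build_cot_sft_record(
    context="The storm knocked down power lines, and the town lost electricity.",
    event_a="storm",
    event_b="lost electricity",
    trace_steps=[
        "The storm physically damaged the power lines.",
        "Damaged power lines interrupt the electricity supply.",
    ],
    label="CAUSAL",
)
print(record["completion"].splitlines()[-1])  # the final line carries the answer
```

A dataset of such records could then be fed to any standard supervised fine-tuning loop; the key design choice illustrated here is placing the reasoning steps before the label in the target text.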