
Mechanistic Circuit-Based Knowledge Editing in Large Language Models


Category: Academic Frontier · Score: 6.0 · Source: cs.CL updates on arXiv.org · Published: 2026-04-08

Rating rationale: an AI research paper of some reference value

arXiv:2604.05876v1 Announce Type: new

Abstract: Deploying Large Language Models (LLMs) in real-world dynamic environments raises the challenge of updating their pre-trained knowledge. While existing knowledge editing methods can reliably patch isolated facts, they frequently suffer from a "Reasoning Gap", where the model recalls the edited fact but fails to utilize it in multi-step reasoning chains. To bridge this gap, we introduce MCircKE (Mechanistic Circuit-based Knowledge Editing), a novel framework that enables a precise "map-and-adapt" edit
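To make the "Reasoning Gap" concrete, the following is a minimal, hypothetical Python sketch of how one might probe for it after applying any knowledge edit: query the edited model once with a direct recall question and once with a multi-hop question that depends on the edited fact. The `query_model` stub, the example prompts, and the expected answers are illustrative placeholders and are not part of MCircKE or the paper's evaluation protocol.

```python
# Hypothetical probe for the "Reasoning Gap": the edited model recalls the new
# fact when asked directly, but fails to use it inside a multi-step question.

def query_model(prompt: str) -> str:
    """Placeholder for a call to the edited LLM; replace with a real inference call."""
    return ""  # stub answer


def contains_answer(response: str, expected: str) -> bool:
    """Loose substring match, used as a proxy for 'the model produced the expected answer'."""
    return expected.lower() in response.lower()


# Suppose the edit patched a single fact whose direct answer is "Y_new" (hypothetical),
# and a two-hop question built on that fact should now resolve to "Z_new" (hypothetical).
direct_prompt = "What is the capital of X?"
direct_expected = "Y_new"

multihop_prompt = "Which river flows through the capital of X?"
multihop_expected = "Z_new"

recall_ok = contains_answer(query_model(direct_prompt), direct_expected)
reasoning_ok = contains_answer(query_model(multihop_prompt), multihop_expected)

# A Reasoning Gap appears as: recall_ok is True while reasoning_ok is False.
print(f"direct recall: {recall_ok}, multi-hop use of the edit: {reasoning_ok}")
```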

