Rating: 6 · Source: cs.CL updates on arXiv.org · Published: 2026-04-17
Rating rationale: Novel framing of unlearning as asymmetric two-task learning; theoretical contribution to machine unlearning
arXiv:2604.14808v1 Announce Type: new Abstract: Machine unlearning for large language models (LLMs) aims to remove targeted knowledge while preserving general capability. In this paper, we recast LLM unlearning as an asymmetric two-task problem: retention is the primary objective and forgetting is an auxiliary one. From this perspective, we propose a retention-prioritized gradient synthesis framework that decouples task-specific gradient extraction from conflict-aware combination.
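The abstract does not specify how the conflict-aware combination works. As a rough illustration only, one common way to prioritize one task's gradient over another is a PCGrad-style projection: when the auxiliary (forget) gradient conflicts with the primary (retain) gradient, remove the conflicting component before summing. The function name, the projection rule, and the `forget_weight` parameter below are all assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def synthesize_gradient(g_retain, g_forget, forget_weight=0.1):
    """Hypothetical retention-prioritized combination of two task gradients.

    If the forget gradient points against the retain gradient
    (negative dot product), project out the conflicting component
    so the update never degrades retention, then add a small
    weighted forget term.
    """
    dot = float(np.dot(g_forget, g_retain))
    if dot < 0:
        # Remove the component of g_forget that opposes g_retain.
        g_forget = g_forget - (dot / np.dot(g_retain, g_retain)) * g_retain
    return g_retain + forget_weight * g_forget

# Example: a conflicting forget gradient gets its opposing component removed.
g_r = np.array([1.0, 0.0])
g_f = np.array([-1.0, 1.0])  # conflicts with g_r along the first axis
combined = synthesize_gradient(g_r, g_f)
```

With these inputs the projected forget gradient is `[0.0, 1.0]`, so the combined update `[1.0, 0.1]` still has positive alignment with the retain gradient, reflecting the retention-first asymmetry the abstract describes.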