
Modeling LLM Unlearning as an Asymmetric Two-Task Learning Problem


Academic Frontier · Score: 6 · Source: cs.CL updates on arXiv.org · Published: 2026-04-17

Rating rationale: novel framing of unlearning as an asymmetric two-task learning problem; a theoretical contribution to machine unlearning.

arXiv:2604.14808v1 Announce Type: new

Abstract: Machine unlearning for large language models (LLMs) aims to remove targeted knowledge while preserving general capability. In this paper, we recast LLM unlearning as an asymmetric two-task problem: retention is the primary objective and forgetting is an auxiliary one. From this perspective, we propose a retention-prioritized gradient synthesis framework that decouples task-specific gradient extraction from conflict-aware combination.
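The abstract does not specify the combination rule, so the following is only a minimal sketch of what a retention-prioritized, conflict-aware gradient synthesis could look like: a PCGrad-style projection made asymmetric, where the retention gradient is never modified and the forgetting gradient has its conflicting component projected out. The function name `synthesize_gradient` and the `forget_weight` parameter are hypothetical, not taken from the paper.

```python
import torch

def synthesize_gradient(g_retain: torch.Tensor,
                        g_forget: torch.Tensor,
                        forget_weight: float = 1.0) -> torch.Tensor:
    """Combine flattened gradients, treating retention as the primary task.

    If the forgetting gradient conflicts with the retention gradient
    (negative inner product), project out its component along g_retain,
    so the combined update cannot increase the retention loss to first
    order. g_retain itself is never altered -- that is the asymmetry.
    """
    g_retain = g_retain.flatten()
    g_forget = g_forget.flatten()
    dot = torch.dot(g_forget, g_retain)
    if dot < 0:  # conflict: strip the component opposing retention
        g_forget = g_forget - (dot / g_retain.norm().pow(2).clamp_min(1e-12)) * g_retain
    return g_retain + forget_weight * g_forget


# Toy usage with two-dimensional gradients:
g_r = torch.tensor([1.0, 0.0])        # retention gradient
g_f = torch.tensor([-0.5, 1.0])       # forgetting gradient, partly opposing retention
print(synthesize_gradient(g_r, g_f))  # -> tensor([1., 1.])
```

The asymmetry is the design point this sketch tries to capture: symmetric schemes such as PCGrad project both task gradients against each other, whereas here only the auxiliary forgetting gradient is adjusted, so retention is never compromised to first order.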