
CURaTE: Continual Unlearning in Real Time with Ensured Preservation of LLM Knowledge

Academic Frontier · Score: 6.5 · Source: cs.LG updates on arXiv.org · Published: 2026-04-17

Rationale: Addresses a critical gap in unlearning: continual, real-time unlearning without knowledge degradation; a timely safety topic.

arXiv:2604.14644v1 · Announce Type: cross

Abstract: The inability to filter out in advance all potentially problematic data from the pre-training of large language models has created a need for methods that unlearn specific pieces of knowledge after training. Existing techniques overlook the need for continuous and immediate action, so they suffer degraded utility as updates accumulate and prolonged exposure of sensitive information. To address these issues, we propose Continual Unlearning in Real Time with Ensured Preservation of LLM Knowledge (CURaTE).
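To make the problem setting concrete, here is a minimal toy sketch of *continual* unlearning with a knowledge-preservation term. This is not the CURaTE algorithm (the abstract gives no method details); it only illustrates the setup the paper targets: forget requests arrive one at a time, each update ascends the loss on the forget example while descending the loss on a retained reference set, and utility on retained data is checked after the stream of updates. The model (logistic regression), the retention weight `lam`, and the step counts are all illustrative assumptions.

```python
# Generic continual-unlearning sketch (NOT the paper's method).
# Setting: forget requests arrive sequentially; each update does
# gradient ASCENT on the forget point and gradient DESCENT on a
# retained set, so accumulated updates should not erode utility.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, Xb, yb):
    # Gradient of mean binary cross-entropy for logistic regression.
    p = sigmoid(Xb @ w)
    return Xb.T @ (p - yb) / len(yb)

# Toy separable data: label = sign of x0 + x1.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Pre-train on everything.
w = np.zeros(2)
for _ in range(300):
    w -= 0.5 * grad(w, X, y)

retain_X, retain_y = X[:150], y[:150]   # knowledge to preserve
forget_idx = [150, 151, 152]            # requests, arriving in sequence
forget_X, forget_y = X[forget_idx], y[forget_idx]
lam = 1.0                               # retention strength (assumed)

# Mean model confidence in the true labels of the forget points, pre-unlearning.
p = sigmoid(forget_X @ w)
prob_before = float(np.mean(np.where(forget_y == 1, p, 1 - p)))

# Continual unlearning: process each request immediately on arrival.
for i in forget_idx:
    xf, yf = X[i:i + 1], y[i:i + 1]
    for _ in range(50):
        # Ascend on the forget point, descend on the retain set.
        w -= 0.1 * (-grad(w, xf, yf) + lam * grad(w, retain_X, retain_y))

p = sigmoid(forget_X @ w)
prob_after = float(np.mean(np.where(forget_y == 1, p, 1 - p)))
retain_acc = float(np.mean((sigmoid(retain_X @ w) > 0.5) == retain_y))
```

In this toy run, confidence in the forgotten examples drops while accuracy on the retained set stays high; the abstract's point is that naive methods lose this retention property as unlearning requests accumulate, which is the failure mode CURaTE is designed to avoid.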