Score: 7.5 · Source: cs.LG updates on arXiv.org · Published 2026-04-17
Rating rationale: Conceptual framework for persistent self-modifying agents; the five-layer taxonomy (pretraining→alignment→narrative→memory→weights) is insightful and timely
arXiv:2604.14717v1 Announce Type: cross Abstract: Persistent language-model agents increasingly combine tool use, tiered memory, reflective prompting, and runtime adaptation. In such systems, behavior is shaped not only by current prompts but also by mutable internal conditions that influence future action. This paper introduces layered mutability, a framework for reasoning about this process of self-modification across five layers: pretraining, post-training alignment, self-narrative, memory, and weight-level adaptation.