Skip to content
星际流动

Stop Fixating on Prompts: Reasoning Hijacking and Constraint Tightening for Red-Teaming LLM Agents

发布
采集
学术前沿 6.7 分 — LLM Agent红队测试新方法
原文: cs.CL updates on arXiv.org

评分 6.7 · 来源:cs.CL updates on arXiv.org · 发布于 2026-04-08

评分依据:LLM Agent红队测试新方法

arXiv:2604.05549v1 Announce Type: new Abstract: With the widespread application of LLM-based agents across various domains, their complexity has introduced new security threats. Existing red-team methods mostly rely on modifying user prompts, which lack adaptability to new data and may impact the agent’s performance. To address the challenge, this paper proposes the JailAgent framework, which completely avoids modifying the user prompt. Specifically, it implicitly manipulates the agent’s reasoning trajectory and memory retrieval with three key stages: Trigger Extraction, Reasoning Hijacking, a


标签: