Score: 6 · Source: cs.AI updates on arXiv.org · Published 2026-04-20
Scoring rationale: HarmfulSkillBench: a safety benchmark for evaluating how harmful skills weaponize agents
Key points
arXiv:2604.15415v1 Announce Type: cross Abstract: Large language models (LLMs) have evolved into autonomous agents that rely on open skill ecosystems (e.g., ClawHub and Skills.Rest), hosting numerous publicly reusable skills. Existing security research on these ecosystems mainly focuses on vulnerabilities within skills, such as prompt injection. However, there is a critical gap regarding skills that may be misused for harmful actions (e.g., cyber attacks, fraud and scams, privacy violations, and sexual content generation), namely harmful skills. In this paper, we present the first large-scale …
🤖 AI Commentary
This paper addresses a gap in agent-skill security: whereas prior work targets vulnerabilities within skills (e.g., prompt injection), it benchmarks skills that can themselves be misused for harm. It merits attention from practitioners building on open skill ecosystems.