星际流动

Your LLM Agents are Temporally Blind: The Misalignment Between Tool Use Decisions and Human Time Perception

Academic Frontier · Score 7.0 — Identifies 'temporal blindness' in LLM agents: a critical, overlooked failure mode for production agents and a highly actionable insight.

Score 7 · Source: cs.CL updates on arXiv.org · Published 2026-04-17


arXiv:2510.23853v3 · Announce Type: replace

Abstract: Large language model (LLM) agents are increasingly used to interact with and execute tasks in dynamic environments. However, a critical yet overlooked limitation of these agents is that they, by default, assume a stationary context, failing to account for the real-world time elapsed between messages. We refer to this as "temporal blindness". This limitation hinders decisions about when to invoke tools, leading agents to either over-rely on stale context and skip needed tool calls, or under-rely on it and redundantly repeat tool calls.
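To make the failure mode concrete, here is a minimal, hypothetical sketch of a time-aware tool-use policy. It is not the paper's method; the class name, cache structure, and per-tool freshness thresholds are all illustrative assumptions. The idea is simply that the agent compares real elapsed time against how quickly each tool's result goes stale, avoiding both failure modes the abstract names: reusing stale context and redundantly repeating fresh calls.

```python
import time

# Assumed per-tool freshness windows in seconds (illustrative values only):
# a stock price staleness tolerance is far shorter than a weather report's.
CACHE_TTL = {"weather": 600.0, "stock_price": 5.0}

class TimeAwareAgent:
    """Hypothetical agent wrapper that tracks wall-clock time per tool result."""

    def __init__(self):
        self._cache = {}  # tool name -> (timestamp, result)

    def call_tool(self, name, tool_fn):
        """Invoke tool_fn only if the cached result has gone stale."""
        now = time.monotonic()
        cached = self._cache.get(name)
        if cached is not None:
            ts, result = cached
            if now - ts < CACHE_TTL.get(name, 0.0):
                # Context is still fresh: reuse it, skipping a redundant call.
                return result
        # No cached result, or too much real time has elapsed: refresh.
        result = tool_fn()
        self._cache[name] = (now, result)
        return result
```

A temporally blind agent corresponds to the degenerate policies here: one that always reuses the cache (over-relying on stale context) or one that ignores the cache entirely (repeating tool calls it just made).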