Posts
All the articles I've posted.
[Trending] openai/openai
· 04/21 06:31 collected - 4.5
Google rolls out Gemini in Chrome in seven new countries
· 04/21 06:31 collected - 5.5
It's not just one thing — it's another thing
· 04/21 06:31 collected - 7.5
Changes to GitHub Copilot Individual plans
After Agent workflows drove a sharp rise in compute costs, GitHub Copilot announced it is pausing new Individual-plan signups and tightening usage limits, a sign that AI Agents are reshaping the business model of SaaS products.
- 6.0
llm-openrouter 0.6
· 04/22 02:45 collected - 4.8
CEO and CFO suddenly depart AI nuclear power upstart Fermi
· 04/21 00:32 collected - 6.0
Tech CEOs Think AI Will Let Them Be Everywhere at Once
· 04/21 00:32 collected - 8.0
- 8.7
Jailbreak Scaling Laws for Large Language Models: Polynomial-Exponential Crossover
arXiv:2603.11331v2 Announce Type: replace Abstract: Adversarial attacks can reliably steer safety-aligned large language models toward unsafe behavior. Empirically, we find that strong adversarial pro…
- 8.3
The Reasoning Trap: How Enhancing LLM Reasoning Amplifies Tool Hallucination
arXiv:2510.22977v2 Announce Type: replace Abstract: Enhancing the reasoning capabilities of Large Language Models (LLMs) is a key strategy for building Agents that "think then act." However, recent ob…
- 8.3
Security Threat Modeling for Emerging AI-Agent Protocols: A Comparative Analysis of MCP, A2A, Agora, and ANP
arXiv:2602.11327v2 Announce Type: replace-cross Abstract: The rapid development of AI agent communication protocols, including the Model Context Protocol (MCP), Agent2Agent (A2A), Agora, and Agent…
- 8.0
Qwen3.5-Omni Technical Report
arXiv:2604.15804v1 Announce Type: new Abstract: In this work, we present Qwen3.5-Omni, the latest advancement in the Qwen-Omni model family. Representing a significant evolution over its predecessor,…
- 8.0
LLMs Corrupt Your Documents When You Delegate
arXiv:2604.15597v1 Announce Type: new Abstract: Large Language Models (LLMs) are poised to disrupt knowledge work, with the emergence of delegated work as a new interaction paradigm (e.g., vibe coding…
- 7.7
Towards Understanding, Analyzing, and Optimizing Agentic AI Execution: A CPU-Centric Perspective
arXiv:2511.00739v3 Announce Type: replace-cross Abstract: Agentic AI serving converts monolithic LLM-based inference into autonomous problem-solvers that can plan, call tools, perform reasoning, and ada…
- 7.7
Sequential KV Cache Compression via Probabilistic Language Tries: Beyond the Per-Vector Shannon Limit
arXiv:2604.15356v1 Announce Type: new Abstract: Recent work on KV cache quantization, culminating in TurboQuant, has approached the Shannon entropy limit for per-vector compression of transformer key-…
- 7.7
Experience Compression Spectrum: Unifying Memory, Skills, and Rules in LLM Agents
arXiv:2604.15877v1 Announce Type: cross Abstract: As LLM agents scale to long-horizon, multi-session deployments, efficiently managing accumulated experience becomes a critical bottleneck. Agent memor…